#kubernetes (2022-11)
Archive: https://archive.sweetops.com/kubernetes/
2022-11-02
![Mallikarjuna M avatar](https://avatars.slack-edge.com/2022-09-14/4084407994659_2f4fc1666d8b6feab4f9_72.png)
Hi Team, can someone help me with creating a service account in Kubernetes in a test namespace, and then accessing resources using a kubeconfig file based on that service account?
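One way to sketch this on 1.24+ (where tokens are no longer auto-created) is with `kubectl create token`. All names below (`test-ns`, `test-sa`, etc.) are hypothetical, and the API server address and CA file are placeholders you'd fill in from your cluster:

```shell
# Hypothetical names; adjust to your environment.
kubectl create namespace test-ns
kubectl create serviceaccount test-sa -n test-ns

# Grant the service account read access to resources in the namespace.
kubectl create role test-reader -n test-ns \
  --verb=get,list,watch --resource=pods,deployments
kubectl create rolebinding test-reader-binding -n test-ns \
  --role=test-reader --serviceaccount=test-ns:test-sa

# On 1.24+, request a short-lived token for the service account.
TOKEN=$(kubectl create token test-sa -n test-ns)

# Build a standalone kubeconfig that authenticates with that token.
kubectl config set-credentials test-sa --token="$TOKEN" --kubeconfig=sa-kubeconfig
kubectl config set-cluster my-cluster --server=https://<api-server> \
  --certificate-authority=<ca.crt> --kubeconfig=sa-kubeconfig
kubectl config set-context test-ctx --cluster=my-cluster --user=test-sa \
  --namespace=test-ns --kubeconfig=sa-kubeconfig
kubectl config use-context test-ctx --kubeconfig=sa-kubeconfig

# Verify the scoped access.
kubectl --kubeconfig=sa-kubeconfig get pods
```

Note the token from `kubectl create token` is short-lived by default; pass `--duration` if you need a longer one, or create a `kubernetes.io/service-account-token` Secret for a long-lived token.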
2022-11-03
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
How to construct a trust policy for allowing role assumption from multiple / all clusters in one account?
This is the docs example:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```
This is coupled to one particular OIDC provider, i.e. one cluster.
Is there a way to make it cluster independent?
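The `Federated` principal must name a specific OIDC provider ARN, so a single statement can't wildcard across clusters; one common workaround is a separate statement per cluster's provider in the same trust policy. A sketch with two hypothetical provider IDs (`CLUSTER_A_ID`, `CLUSTER_B_ID`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/CLUSTER_A_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/CLUSTER_A_ID:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.region-code.amazonaws.com/id/CLUSTER_A_ID:aud": "sts.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/CLUSTER_B_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/CLUSTER_B_ID:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.region-code.amazonaws.com/id/CLUSTER_B_ID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The downside is that the policy has to be updated whenever a cluster (and its OIDC provider) is created or destroyed, which is usually automated in Terraform or similar.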
2022-11-08
![Nenad Strainovic avatar](https://secure.gravatar.com/avatar/3e7bf7bd3b08955e0467d3430bb89b58.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0008-72.png)
Hi everyone,
I’m trying to create a K8s secret for a Service Account (1.24+) with kubectl, but I’m getting the following error:
`error: failed to create secret Secret "admin2" is invalid: metadata.annotations[kubernetes.io/service-account.name]: Required value`
This is the command:
`kubectl create secret generic admin2 --type='kubernetes.io/service-account-token'`
Do you have any idea where to look? I didn’t find a way to set annotations from kubectl, aside from `kubectl annotate`, which only works on already-created objects.
kubectl version 1.25.3, k8s version 1.24.7
Thanks!
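The error indicates the `kubernetes.io/service-account.name` annotation is required for that secret type, and `kubectl create secret` has no flag to set annotations, so applying a manifest is the usual route. A sketch, assuming a ServiceAccount named `admin2` already exists in the target namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: admin2
  annotations:
    # Must reference an existing ServiceAccount in the same namespace.
    kubernetes.io/service-account.name: admin2
type: kubernetes.io/service-account-token
```

After `kubectl apply -f secret.yaml`, the token controller populates the secret's `token`, `ca.crt`, and `namespace` fields.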
![James avatar](https://avatars.slack-edge.com/2023-03-02/4894855683041_f81e53db84fffaf707b8_72.jpg)
Hey guys - I’m working through the K8s learning path and there’s one thing I need to understand.
In your own experience, what is the use case for running multiple schedulers in the real world?
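For context on how this works mechanically: a pod opts into a non-default scheduler via `spec.schedulerName`, so additional schedulers (e.g. batch/gang schedulers like Volcano, or one with custom placement logic) can coexist with the default. A minimal sketch, where `my-batch-scheduler` is a hypothetical second scheduler deployed in the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  # Pods without this field are handled by the default scheduler.
  schedulerName: my-batch-scheduler
  containers:
    - name: app
      image: nginx
```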
2022-11-19
![Jim Park avatar](https://secure.gravatar.com/avatar/e166c478c5b78e93a5fb116d92a2dc7e.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Not sure who might want this in the future, but here’s something I put together to export a kubernetes namespace to disk.
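(The shared attachment isn't preserved in the archive; as a rough idea of the approach, one way such an export can be sketched with plain kubectl, using a placeholder namespace name:)

```shell
# Hypothetical sketch: dump every namespaced resource type to YAML files on disk.
NAMESPACE=my-namespace
OUTDIR=./export/$NAMESPACE
mkdir -p "$OUTDIR"

# Enumerate all namespaced, listable resource types, then dump each one.
for kind in $(kubectl api-resources --namespaced=true --verbs=list -o name); do
  kubectl get "$kind" -n "$NAMESPACE" -o yaml > "$OUTDIR/$kind.yaml" 2>/dev/null
done
```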
2022-11-29
![Talal Ashraf avatar](https://avatars.slack-edge.com/2022-05-20/3554242473989_e8259a479f362c59d2e9_72.png)
Hey folks. Wondering if people using EKS have tried Karpenter? Can I simply replace the cluster autoscaler with it? The autoscaler unfortunately doesn’t consider volume node affinities.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
(re: affinities, we use EFS for this reason; not suitable for all workloads, but suitable for quite a lot)
![Hao Wang avatar](https://secure.gravatar.com/avatar/aa01de6ab42f1576bbb56a203c660939.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0013-72.png)
I used Karpenter, much faster than HPA. I didn’t use volume affinity, but it should be supported.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![attachment image](https://external-preview.redd.it/hvWTsR4_gVy1-MBC_qjE9pA1ywF02JcVZbr-KcetZF4.png?overlay-align=bottom,left&crop=1080:565.445026178,smart&overlay-height=15p&overlay=%2Fwatermark%2Ft5_33f68.png%3Fs%3D7de1260c03e96c56871bc66c78819a1b2668d0fb&width=1080&height=565.445026178&auto=webp&s=70c7266224d86cd558f941fcbac1159bb5e48fd3)
Posted in r/kubernetes by u/xrothgarx • 182 points and 44 comments
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Karpenter is rad, but I wouldn’t say it’s just as easy as replacing the autoscaler if you want to do it in a production configuration.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
You’ll still need compute capacity to run karpenter itself
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
We provision fargate profiles to run operators, then run karpenter on fargate, which manages the rest of the cluster.
![Talal Ashraf avatar](https://avatars.slack-edge.com/2022-05-20/3554242473989_e8259a479f362c59d2e9_72.png)
EFS will become cost prohibitive for us. Off the top of your head, what are some considerations when swapping out the autoscaler?