#kubernetes (2020-10)
Archive: https://archive.sweetops.com/kubernetes/
2020-10-01
Any other users of https://github.com/cloudposse/terraform-aws-eks-cluster having trouble with first spin up and the aws-auth configmap already being created? I’ve run into it twice now and have been forced to import. Wondering what I’m doing wrong there.
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
@Matt Gowie you need to add this code to your top-level module where you call the EKS module
to prevent the race condition when applying the auth map
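The code snippet referenced above isn't preserved in this archive, but the general pattern is to configure the Kubernetes provider (which applies the aws-auth ConfigMap) from data sources that depend on the module's outputs, so they are only read after the cluster exists. A minimal sketch, with an illustrative module name and output (check the module's README for the exact inputs/outputs it expects):

    # Illustrative: the module label and output name below are assumptions
    data "aws_eks_cluster" "this" {
      # implicit dependency on the EKS module, so this is read only after the cluster is created
      name = module.eks_cluster.eks_cluster_id
    }

    data "aws_eks_cluster_auth" "this" {
      name = module.eks_cluster.eks_cluster_id
    }

    provider "kubernetes" {
      host                   = data.aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.this.token
    }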
Awesome — I knew you’d know the deal @Andriy Knysh (Cloud Posse) but didn’t want to bug you. Thanks man!
2020-10-02
2020-10-06
Hi all - I use AKS and have several deployments that use azureFile volumes to mount storage accounts into pods. One of the recommendations from Microsoft and CIS is to rotate storage account access keys periodically. One of these keys is stored in a k8s secret object to allow the pod to mount the share; however, when the key that is being used is regenerated, the mount breaks (this would be expected; I see "Host is down" when I execute ls in the mounted directory in the pod). When I update the secret value, it doesn’t cause the mount to fix itself and I am forced to restart the pod to have it remount using the new secret value. Is this the expected behavior? Is there any way to have it “hot remount” when the secret value changes?
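For context, a minimal sketch of the kind of mount being described, with illustrative names (the in-tree azureFile volume reads the storage account name and key from a Secret):

    apiVersion: v1
    kind: Pod
    metadata:
      name: azurefile-example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: share
              mountPath: /mnt/share
      volumes:
        - name: share
          azureFile:
            # Secret holding azurestorageaccountname / azurestorageaccountkey
            secretName: storage-account-key
            shareName: myshare
            readOnly: false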
2020-10-09
Hi all, I’ve enabled the additional internal load balancer in the nginx ingress controller but I think there is some description missing in: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#additional-internal-load-balancer
controller:
  ingressClass: nginx-internal
  service:
    externalTrafficPolicy: Local
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
By adding the values controller.service.internal.enabled=true and controller.service.internal.annotations, it will create a second (internal) load balancer.
My question is: how should Ingress resources be annotated in order to use the internal load balancer?
I tried to use the value kubernetes.io/ingress.class: "nginx-internal" in the Ingress resource; however, it always uses the external load balancer.
Have you folks been in a similar situation?
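For reference, a minimal sketch of an Ingress carrying the annotation being described (host, service, and names are illustrative):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: my-internal-app
      annotations:
        kubernetes.io/ingress.class: "nginx-internal"
    spec:
      rules:
        - host: app.internal.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: my-internal-app
                  servicePort: 80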
What’s the best FaaS in k8s?
2020-10-12
Anyone have experience using the Istio CNI plugin? On EKS? https://istio.io/latest/docs/setup/additional-setup/cni/
Install and use Istio with the Istio CNI plugin, allowing operators to deploy services with lower privilege.
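For reference, per the linked docs the CNI plugin is enabled at install time; a minimal sketch using the IstioOperator API:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        cni:
          enabled: true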
anyone tried the varying serverless / function platforms? which is best? is there a clear front runner with wide adoption?
i see there’s Fission, OpenFaaS, Kubeless, others..
(knative may even work here, i’m not sure.)
I am also curious about this topic. Can anyone share their experience?
2020-10-13
“But can it run ~Crysis~ Doom?”
Not really. I got it working out of the box; however, the navigation in the map and the keystrokes did not work well for me.
2020-10-14
Does anyone use something like ThousandEyes for loss/latency/jitter and maybe path monitoring of endpoints? I was thinking that molo.ch is a good example of an open source project that does similar things, but molo.ch actually collects way too much data. ThousandEyes is just too expensive for what I want, and I could probably write what I want in a weekend, but I have to imagine that since molo.ch exists, there must be something open source, similar, and trimmed down that I could contribute to instead.
2020-10-15
Kubernetes doesn’t support native SAML integration. Learn how to configure SAML single sign on (SSO) for Kubernetes clusters with user impersonation.
2020-10-16
Join us for the launch of Coffee & Containers!
C&C will be a monthly gathering to interact and discuss various aspects of platform engineering (IaC, containers, k8s, and more).
This month, come hear Kelsey Hightower, a leader in the PE space, discuss:
- State of Kubernetes
- Problem areas for App Lifecycle Management in Kubernetes
- How developers and platform teams can be successful on their Kubernetes journey
In addition, Kelsey will be giving his real, on-the-spot first impressions of the Shipa framework for Kubernetes.
We are looking forward to seeing you there.
https://us02web.zoom.us/webinar/register/WN_FHZRUM_QQ8WFJ5WnQDH6PQ
Sounds cool. I’ve heard great things about Shipa, I’m excited to see his reaction
For container security, has anyone used Prisma Cloud? Does it need Open Policy Agent installed?
2020-10-17
Anybody happen to know how to set the log level for cni-metrics-helper?
Doing a PoC of EKS over ECS. What’s a good EKS setup with Helm, daemonsets, and repos?
We have a single ELB with listeners and target groups for ECS. I was thinking it would be nice to convert one of those services to a Helm chart on k8s and swap ECS for Kubernetes.
That question is a very open one. As many daemon sets as you need? :). Prometheus node exporter. What else?
Haha, yes, it is pretty open. We were doing a Fargate PoC with Fluent Bit and Datadog, so we’re looking into putting the same into our daemonset for each pod.
I was looking at the cloud posse charts repo as an example for how we can structure our helm charts and deployment
The “Cloud Posse” Distribution of Kubernetes Applications - cloudposse/charts
Ok. Well good luck on your research .
All I can say is. ECS gives me joy. Kubernetes gives me work.
@RB Few things:
- I’d suggest checking out CP’s monochart. My current client’s project that I inherited already used Helm’s chart generator to create ~20 charts and it just leads to really ugly, not DRY charts, which now need to be slowly refactored. CP’s monochart pattern is not documented well AFAIK, but I’m sure Erik would be happy to chat about it during #officehours.
- Check out #helmfile for orchestrating Helm. Again, the client went with Flux / Helm Operator, and I believe I would have preferred we had gone with Helmfile if I’d had that decision (see the sketch after this list).
- In regards to logging, I’m not too experienced in this area but I’ll share my thoughts: I set up DD as a sidecar on my client’s Fargate Charts/Pods. I couldn’t get it working as smoothly as I wanted, though, and it’s basically a hack right now that I’m planning on replacing. I would suggest going FluentBit => an aggregator like FluentD or Firehose (CP does Firehose) => external destinations (DD Logs / AWS S3 / others). That looks like the most flexible route — though I’m sure others with more experience in K8s might have other opinions. If you want to chat more in depth about anything in particular, feel free to ping me as I’d be happy to help.
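For reference, a minimal helmfile.yaml sketch of the kind of orchestration suggested above (repo URL, chart, and release names are illustrative):

    repositories:
      - name: cloudposse
        url: https://charts.cloudposse.com/incubator/

    releases:
      # one monochart-style release per service, driven by a per-service values file
      - name: my-service
        namespace: default
        chart: cloudposse/monochart
        values:
          - values/my-service.yaml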
Haven’t checked out monochart specifically. Will do, thanks. But previously I’ve done my own pattern where I built a chart that could support multiple apps, since they were all very similar. It really helped avoid a lot of duplication and drift. One chart for all services; just a matter of passing a few key values to decide which service to deploy.
Is that something like monochart?
Can you join us for office hours this week? If so we can talk about it #office-hours
Awesome! I’ll look into the monochart this week and will try to join office hours, Erik. Thanks for the info, Matt!
2020-10-18
An interesting tidbit from the Kubernetes Slack, #kubecon channel:
we will be announcing plans at kubecon next month, but EU will be virtual next year in Q2 and we expect a hybrid event in LA in Q4
So the May-ish KubeCon+CloudNativeCon EU will be virtual, and the November-ish KubeCon+CloudNativeCon US will be hybrid
2020-10-19
2020-10-21
Any opinions on Jenkins X vs ArgoCD vs other Kubernetes-native CI/CD tools?
We’ve been using ArgoCD for some months now and we’re really happy with it. We deploy between 1,500 and 2,000 applications, which means up to 4,000 pods. We’re reaching maximum capacity for now, but everything works well.
Nice! Do you also do feature branching?
yeah, 1 project means 1 feature branch for us, we use some logic in gitlab-ci for deploying our 40+ microservices in a feature-branch
Gotcha! That’s nice to know. Did you also evaluate Jenkins X before choosing ArgoCD?
Nope, we went directly to ArgoCD; we tested it and it met all our criteria.
So are you all-in on GitOps, lssif? How do you promote staging to production? Pull requests by devs?
We went with Spinnaker because Jenkins X was very new and conflated CI and CD concerns, and we already have regular Jenkins. We’re moving to GitLab CI because we don’t want to manage Jenkins (no good config as code at the time, plugin hell, etc.).
Have run basic argocd at home and simplicity is nice. Haven’t yet looked at enterprise style features of it, notifications etc.
@kskewes pretty much we are. Our main env targets master branches with auto-sync, so we keep staging fresh at all times. For production, we also target the master branches, but auto-sync is disabled for now. ArgoCD lets us see easily what changes will be applied and we just have to click “Sync”. For now, we prefer to take precautions and not auto-sync prod; that will change soon, once our new cluster is ready. For notifications, we use the official argocd-notifications controller; it posts events to Slack channels with the fields we chose.
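For reference, a minimal sketch of an ArgoCD Application tracking master with auto-sync enabled (repo URL, path, and names are illustrative):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-service
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://gitlab.example.com/org/my-service.git
        targetRevision: master
        path: deploy/
      destination:
        server: https://kubernetes.default.svc
        namespace: my-service
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

For the production environment described above, the same manifest without the syncPolicy.automated block leaves syncing as a manual “click Sync” step.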
Any views or opinions about Tekton? It’s a Kubernetes-native pipeline solution (also used by many other CI/CD projects, e.g. jx). I’d really like to know more about the challenges of Tekton adoption.
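For reference, a minimal sketch of a Tekton Task, just to show the Kubernetes-native CRD style (names and image are illustrative):

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: build-and-test
    spec:
      steps:
        - name: test
          image: golang:1.15
          script: |
            go test ./...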
2020-10-22
anyone using argo-rollouts or flagger and have strong opinions on one vs the other?
2020-10-23
Regarding EKS, how do you folks use namespaces and clusters per AWS account? Do you put multiple stages in the same cluster with different namespaces? Or perhaps a cluster for each environment and a namespace for each app?
I use a separate cluster for each environment, and a different namespace for each app.
Yes! I figured this was a good approach. Might I ask, how many clusters per account do you have, and how many apps per env?
As always that depends on your use case and requirements. Here’s a good summary: https://learnk8s.io/how-many-clusters
If you use Kubernetes as your application platform, one of the fundamental questions is: how many clusters should you have? One big cluster or multiple smaller clusters? This article investigates the pros and cons of different approaches.
cool, thanks for sharing. i like this image a lot.
One AWS account for development (CI, QA, dev EKS clusters), where deployments mostly come from the CI/CD pipeline but DevOps still has full access. Another AWS account for staging and production; deployments there go only through the pipeline (promoted from development) and access is very restricted.
Are there any good alternatives to running statefulsets on Fargate (EKS)? We have a bunch of statefulset and daemonset deployments that need to stay intact in our move from on-prem to cloud. The only option I can currently consider is just pure EKS, though Fargate has a lot of nice perks.
You can try a hybrid approach: managed nodes for the baseline deployment and Fargate for extra load/spikes.
So you’re saying pure EKS for stateful deployments and fargate for all the stateless?
A combo of the two would be a lot of overhead I’d think, two clusters to maintain over one.
The point of moving off of on-prem is to reduce maintenance.
So daemonsets can easily be deployed on managed nodes, and for the other deployments you can configure a Fargate profile (matching a namespace or label).
One EKS cluster can have both: managed node groups and Fargate profiles.
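For reference, a minimal sketch of a Fargate profile selector, here shown as an eksctl ClusterConfig fragment (namespace and label are illustrative; the same selectors can be defined via the console, CLI, or Terraform):

    # fragment of an eksctl ClusterConfig
    fargateProfiles:
      - name: stateless-workloads
        selectors:
          - namespace: web
            labels:
              compute: fargate

Pods that match the selector land on Fargate; everything else (including daemonsets and statefulsets) stays on the managed node groups.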
Ahh, I see. Hmm
This topic helps you to get started running pods on AWS Fargate with your Amazon EKS cluster.
I’ve already added that part, question is how to add a second group for self-managed
yesss! v2 of the alb ingress controller came out today https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.0.0
Our first GA release for AWS Load Balancer Controller (aka AWS ALB Ingress Controller v2). Documentation. Image: docker.io/amazon/aws-alb-ingress-controller:v2.0.0. Action Required: Please follow o…
2020-10-26
Is anybody aware of an admission controller that automatically adds default imagePullSecrets to a workload? Because of course we’re pushing out DockerHub credentials 4 days before API limits are imposed
A simple Kubernetes client-go application that creates and patches imagePullSecrets to service accounts in all Kubernetes namespaces to allow cluster-wide authenticated access to private container …
I knew that one must exist somewhere… I’ll take a look at it, thanks @zadkiel
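For reference, the end state such a controller produces is just an imagePullSecrets entry on each namespace’s service account; a minimal sketch (secret and namespace names are illustrative):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
      namespace: my-app
    imagePullSecrets:
      - name: dockerhub-creds

The same result can be achieved manually per namespace with kubectl patch serviceaccount default -n my-app -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'.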
2020-10-27
2020-10-28
2020-10-29
Write-up of an outage we experienced with a misconfigured webhook.