#kubernetes (2022-01)
Archive: https://archive.sweetops.com/kubernetes/
2022-01-05
2022-01-07
hi there - does anyone know if there’s an approved terraform provider for creating the helm operator?
here is the module we use https://github.com/cloudposse/terraform-aws-helm-release
GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam role
which uses this helm provider https://registry.terraform.io/providers/hashicorp/helm/latest/docs
sweet i’ll check it out thx @RB
make sure to enable the experiment https://registry.terraform.io/providers/hashicorp/helm/latest/docs#experiments
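for anyone finding this later: the experiment is just a provider-level flag. a minimal sketch, assuming kubeconfig-based auth (the config path is illustrative):
```
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: local kubeconfig auth
  }

  experiments {
    # stores the rendered manifest in state so `terraform plan` shows a full diff of chart changes
    manifest = true
  }
}
```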
np!
2022-01-10
hi there!
anyone have experience with EKS? I have a problem with some nodes: on two of them, every 3 minutes the node reports that it is unhealthy because of the PLEG. I don’t know if it helps with troubleshooting, but running “df -h” hangs on only those nodes. It looks like maybe some EFS volume has not been mounted properly or something like that
are you mounting EFS directly to the nodes or using the EFS controller with pods?
thanks for your reply. I use the EFS controller, but to be honest I don’t know what happened, because I restarted the node and no more errors appear. I am afraid that it will happen again in the future
2022-01-11
2022-01-13
Will a CKA without professional experience still be able to get a job?
It is a start but I’d be prepared to know kube beyond the testing materials for the interview process
knowing various ways to get workloads into a cluster (helm, kustomize, whatever) and knowing some common shared services one might deploy to a cluster (prometheus, grafana, et cetera) would be a good start
try out a few ingress controllers as well and you will be cooking with gas
good luck!
Just want to also note I don’t have a developer background
come from ops?
No Ops background. Just familiar with Linux and Kubernetes
2022-01-14
has anyone run into this error in EKS: “Cannot enforce AppArmor: AppArmor is not enabled on the host”?
we’re using the Amazon Linux 2 AMI. I’ve read that AppArmor is supported by default in Ubuntu but not in RHEL-family distributions. This is causing pods to fail with status Blocked
you can’t use AppArmor in AL2, it’s a CentOS/RHEL derivative
thx, i just realized that after reading further into AppArmor
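worth noting: kubelet only blocks pods that explicitly request an AppArmor profile via the beta annotation, so dropping that annotation (or scheduling those workloads onto Ubuntu-based nodes, where AppArmor is in the kernel) avoids the error. a minimal sketch of what triggers it, with illustrative names:
```
# hypothetical pod: the annotation below is what makes kubelet try to enforce AppArmor;
# on AL2 (no AppArmor in the kernel) any pod carrying it gets blocked with this error
resource "kubernetes_pod" "apparmor_example" {
  metadata {
    name = "apparmor-example"

    annotations = {
      "container.apparmor.security.beta.kubernetes.io/app" = "runtime/default"
    }
  }

  spec {
    container {
      name  = "app"
      image = "nginx:1.21" # illustrative image
    }
  }
}
```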
2022-01-15
2022-01-19
Hi, what kind of storage drivers do you use in EKS? The new efs-csi driver is not working for us due to chown-errors.
CLI or Go SDK kubeconfig setup for a remote cluster with a K8s Service Account
Anyone willing to jump on this thread and help unblock me?
• I have a kubernetes cluster running in Azure to connect to.
• I have no issues connecting for myself using the azure cli login workflow.
• Now I need to use a Kubernetes service account instead, which likely means I’d be using kubectl or the Go client API directly. I need this to set up the config so the remainder of my pipeline actions, such as Pulumi, can connect to the cluster correctly.
I’m not finding much clarity on how to set up the connection to the kubernetes cluster (via cli) using just the service account token secret I’ve already obtained (and whether the ca.crt is required or optional for that). Most workflows assume you’re already connected and have this in kubeconfig, or use the azure or aws cli to set up the connection. On setting up the connection directly with Kubernetes service credentials… I’m finding a dearth of info.
Anyone willing to jump on this thread and help unblock me?
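if it helps: the connection really only needs three inputs - the API server URL, the service account token, and the cluster CA (ca.crt is required unless you explicitly skip TLS verification). with the CLI that’s kubectl config set-cluster (--server plus --certificate-authority), kubectl config set-credentials --token=<token>, then kubectl config set-context / use-context. a minimal sketch of the same three inputs in Terraform’s kubernetes provider, with illustrative endpoint/variable names:
```
variable "sa_token" {
  type      = string
  sensitive = true # the service account token you extracted from the secret
}

variable "cluster_ca_cert" {
  type        = string
  description = "PEM-encoded ca.crt from the same service account secret"
}

provider "kubernetes" {
  host                   = "https://<your-api-server-endpoint>:443" # cluster endpoint (illustrative)
  token                  = var.sa_token
  cluster_ca_certificate = var.cluster_ca_cert # required unless you set insecure = true
}
```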
2022-01-21
Hi all - wondering if anyone can assist with the following error on one of our pods
Readiness probe failed: Get "<http://x.x.x.x:9999/health?ready>": dial tcp x.x.x.x:9999: connect: connection refused
Readiness probe failed: HTTP probe failed with statuscode: 503
Yep, the IP is the pod’s own, and when I exec into the pod I can curl the endpoint
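“connection refused” from the kubelet usually means nothing was listening on the pod IP:9999 at probe time; most often the app is still starting, or it only binds 127.0.0.1 (an in-pod curl against localhost still works in that case, while the kubelet probes the pod IP). the 503 just means the app answered but reported not-ready. a minimal sketch (illustrative names/image) of the probe knobs that usually help with slow startup:
```
resource "kubernetes_deployment" "app" {
  metadata {
    name = "app" # illustrative
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "app" }
    }

    template {
      metadata {
        labels = { app = "app" }
      }

      spec {
        container {
          name  = "app"
          image = "example.com/app:latest" # hypothetical image

          readiness_probe {
            http_get {
              path = "/health?ready"
              port = 9999
            }

            initial_delay_seconds = 15 # give the app time to bind the port
            period_seconds        = 10
            timeout_seconds       = 2
            failure_threshold     = 3
          }
        }
      }
    }
  }
}
```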
2022-01-24
https://helmwave.github.io/ <– docker-compose for helm charts. Maybe this one was tossed out there already. Not certain if anyone has used it yet though (seems like something that could be accomplished with artful terraform depends_on constructs too)
I’d be interested in an intersection between this and a generalized interface to multiple container runtimes for spinning up multi-service labs that involve kubernetes along with outside resources (so perhaps a vault service or dynamodb container external to a kind container cluster in the same network). It would be nice to have a declarative manifest for such situations that doesn’t end up devolving into bash script hackery
I mentioned it once in the helmfile channel:) https://sweetops.slack.com/archives/CE5NGCB9Q/p1640848445057600
I’ve just come across this tool which seems to be an alternative to helmfile. https://github.com/helmwave/helmwave
Ever end up using it?
No, haven’t tried it yet. And I don’t know if that will happen, since we put a lot of effort into embracing helmfile :) Just for fun and for some side projects, maybe someday
2022-01-26
anyone using EKS and replacing the AWS VPC CNI?
for greater pod density?
Listening. We hate the pod density.
VPC CNI did get some changes to support increased density FYI
the catch is that it’s only for pods that don’t need the ENI + SG attachments
but, you can run both ‘modes’ together
Thanks @Zach. How do you specify the ENI requirements in a pod in that mixed scenario?
been awhile since I looked at that aspect but here’s the blog post on it https://aws.amazon.com/blogs/containers/amazon-vpc-cni-increases-pods-per-node-limits/
As of August 2021, Amazon VPC Container Networking Interface (CNI) Plugin supports “prefix assignment mode”, enabling you to run more pods per node on AWS Nitro based EC2 instance types. To achieve higher pod density, the VPC CNI plugin leverages a new VPC capability that enables IP address prefixes to be associated with elastic network […]
the ‘ENI prefix’ part is the new feature, you have to allocate ENI prefixes in your vpc to use this
the EKS lead on github told me though that you can use this feature + the pod ENI at the same time
correct github link https://github.com/aws/containers-roadmap/issues/138#issuecomment-889208751
I asked about clarifying whether this new density stuff works with pod SGs.
Can pods getting branch interfaces/security groups co-exist with pods getting ENI prefix IP addresses in the same cluster and even on the same node? Yes
Is the prefix assignment launch related to the pod security groups feature at all, and does prefix assignment help increase pod density if you are only running pods with branch network interfaces/dedicated security groups? No
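for reference, “enabling” prefix mode is just an env var on the aws-node daemonset (kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true). a minimal Terraform sketch, assuming you manage the vpc-cni EKS addon and run an AWS provider version new enough to support configuration_values; the cluster name is illustrative:
```
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = "my-cluster" # illustrative
  addon_name   = "vpc-cni"

  # prefix assignment mode: each ENI secondary IP slot hands out a /28 (16 IPs) instead of a single IP
  configuration_values = jsonencode({
    env = {
      ENABLE_PREFIX_DELEGATION = "true"
    }
  })
}
```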
Thanks a ton! Will dive into it
It complicates the setup a bit, we played with it for a couple of days and didn’t quite get it working
Hmm
Admittedly I am not strong in networking
i don’t think i’m going to replace the CNI, but would utilize the increased density from prefix assignment mode and go smaller on instance size (doubling the size of the cluster)
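for a sense of what that buys at a smaller size, here’s the arithmetic using t3.medium as an example (3 ENIs, 6 IPv4 addresses per ENI per AWS’s published limits; EKS recommends capping max-pods at 110 on smaller instance types):
```
locals {
  enis        = 3 # t3.medium
  ips_per_eni = 6

  # classic mode: one pod per secondary IP
  max_pods_default = local.enis * (local.ips_per_eni - 1) + 2 # = 17

  # prefix mode: each secondary IP slot becomes a /28 (16 IPs), capped at the recommended 110
  max_pods_prefix = min(110, local.enis * (local.ips_per_eni - 1) * 16 + 2) # = 110
}
```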
AWS support recommends purchasing commercial support if you replace the CNI (they suggest a few)
2022-01-27
2022-01-28
Currently I am having issues deploying a new cluster with the latest TF EKS module cloudposse/eks-cluster/aws:0.45.0
I am getting the following “interesting” error after the nodes are up, when tf wants to set up the aws-auth ConfigMap.
╷
│ Error: Post "<https://jsonplaceholder.typicode.com/api/v1/namespaces/kube-system/configmaps>": x509: certificate signed by unknown authority
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│ on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│ 115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
Any idea or hint what’s wrong? Google or StackOverflow doesn’t help here…
looks like for some reason your system thinks that the cert is not valid on https://jsonplaceholder.typicode.com
you can try setting dummy_kubeapi_server to some other URL (or to null) to test if it fixes the issue
Will try thank you for the hint
@Andriy Knysh (Cloud Posse) sadly setting dummy_kubeapi_server = null did not help either.
I reverted to 0.42.1 to get a working setup, using the following settings for the kubernetes provider:
kube_exec_auth_enabled = true
kube_exec_auth_aws_profile_enabled = true
kube_exec_auth_aws_profile = "<my-custom-aws-profile>"
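in case it helps anyone hitting the same x509 error: those kube_exec_auth_* inputs make the module’s internal kubernetes provider authenticate via aws eks get-token with the given profile. a rough sketch of the equivalent standalone provider config, assuming the cloudposse module’s usual outputs (adjust names to your setup):
```
provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args = [
      "eks", "get-token",
      "--cluster-name", module.eks_cluster.eks_cluster_id,
      "--profile", "<my-custom-aws-profile>", # same profile as kube_exec_auth_aws_profile
    ]
  }
}
```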