#kubernetes (2022-04)
Archive: https://archive.sweetops.com/kubernetes/
2022-04-04
https://github.com/kris-nova/kaar <– This looks pretty nifty. Perhaps useful as a helm post-processing tool to bundle the output of complex/multi-image helm charts to a local registry as a singular image? Dunno but keeping it on my radar for sure.
Kubernetes Application Archive
ah, by Kris Nova - she writes a number of cool tools.
2022-04-11
Hi folks,
I’m looking for a K8s (EKS) solution for mounting an S3 bucket as a volume. I’ve found Datashim’s Dataset CRD and kube-s3, and I’m leaning towards Datashim based on community/documentation. I need something relatively close to a POSIX fs; IO performance of the volume is not important - the S3 objects are just runtime configs.
Is there another tool out there that I am missing?
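In case it helps, here’s roughly what the Datashim route looks like: you create a Dataset custom resource pointing at the bucket, and the operator provisions a PVC (same name as the Dataset) that pods can mount. A minimal sketch using the Python kubernetes client - the CRD group/version and spec field names are from memory of the Datashim docs, and the bucket/endpoint/credential values are placeholders, so double-check against the project README:
```python
# Hedged sketch: create a Datashim Dataset backed by an S3 bucket.
# Datashim should then provision a PVC with the same name ("runtime-configs")
# that pods can mount like any other volume.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

dataset = {
    "apiVersion": "com.ie.ibm.hpsys/v1alpha1",  # assumed Datashim API group/version
    "kind": "Dataset",
    "metadata": {"name": "runtime-configs"},
    "spec": {
        "local": {
            "type": "COS",                         # S3-compatible object storage
            "bucket": "my-runtime-config-bucket",  # hypothetical bucket
            "endpoint": "https://s3.us-east-1.amazonaws.com",
            "accessKeyID": "REDACTED",
            "secretAccessKey": "REDACTED",
            "readonly": "true",
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="com.ie.ibm.hpsys",
    version="v1alpha1",
    namespace="default",
    plural="datasets",
    body=dataset,
)
```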
We’ve used Goofys back in the day as a sidecar mounting s3
But it also begs the question - what is the original business case you’re solving?
uhh I was waiting/afraid for that question. A little background: we (https://www.streetshares.com/) are a SaaS platform that interacts with third-party integrations (e.g. TransUnion) that require a lot of secure connection practices. So like passwords + ssl certs + other stuff.
Handling all the certs can be a pain to manage. To get an MVP out ASAP we tossed them in S3 and just use boto to grab them. We’ve gotten to the point where we need to get the files onto the pod and cache them.
I think it’s easier from a dev perspective to have the certs local to the pod. Thanks for asking about the business case - I’d love to hear ideas that are better than what we came up with.
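For concreteness, the current approach described above boils down to something like this boto3 sketch - bucket, prefix, and cache path are hypothetical:
```python
# Minimal sketch of "grab the certs from S3 with boto and cache them on the pod".
import os
import boto3

BUCKET = "partner-integration-certs"  # hypothetical
PREFIX = "transunion/"                # hypothetical
CACHE_DIR = "/var/run/certs"          # local cache path on the pod


def sync_certs() -> None:
    """Download every object under PREFIX into CACHE_DIR, skipping files already cached."""
    s3 = boto3.client("s3")
    os.makedirs(CACHE_DIR, exist_ok=True)
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            dest = os.path.join(CACHE_DIR, os.path.basename(obj["Key"]))
            if not os.path.exists(dest):
                s3.download_file(BUCKET, obj["Key"], dest)


if __name__ == "__main__":
    sync_certs()
```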
That’s interesting! Thanks for adding the context. I see why you want to go this route. I think I would look at something that can sync/replicate those to kubernetes as native resources. That project you found (datashim) aligns with what I would look at doing today (having never heard of it before). Not crazy about kube-s3 from the project description.
Now this is assuming you want/need S3.
If, on the other hand, you used SSM/ASM, there are too many projects to list.
external-secrets
being the dominant controller
Integrate external secret management systems with Kubernetes
What I like about using external-secrets are the tangential benefits - you can now use it for everything else too.
Also, it writes Secrets, which on EKS support envelope encryption
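To make the external-secrets suggestion concrete, here’s a hedged sketch of an ExternalSecret that syncs a cert from AWS Secrets Manager into a native Kubernetes Secret, applied with the Python kubernetes client. The store name, secret names, and ASM key are made up; the spec layout follows the external-secrets.io/v1beta1 API:
```python
# Hedged sketch: sync an ASM entry into a native Kubernetes Secret via the
# external-secrets operator. Names and the remote key are hypothetical.
from kubernetes import client, config

config.load_kube_config()

external_secret = {
    "apiVersion": "external-secrets.io/v1beta1",
    "kind": "ExternalSecret",
    "metadata": {"name": "transunion-client-cert"},
    "spec": {
        "refreshInterval": "1h",
        "secretStoreRef": {"name": "aws-secrets-manager", "kind": "ClusterSecretStore"},
        "target": {"name": "transunion-client-cert"},  # resulting k8s Secret
        "data": [
            {
                "secretKey": "tls.crt",
                "remoteRef": {"key": "prod/transunion/client-cert"},  # hypothetical ASM key
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="external-secrets.io",
    version="v1beta1",
    namespace="default",
    plural="externalsecrets",
    body=external_secret,
)
```
The operator then keeps the target Secret refreshed, and the pod can mount it as a volume or consume it as env vars like any other Secret.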
datashim
doesn’t support Secrets
so that could be a non-starter, since it would use ConfigMaps
I do like external-secrets. Recently had to migrate ours from the godaddy days. I’ll double back on my side, because I can’t remember why we didn’t go with ASM before. I think it was just not having a tool or internal library to update the secrets.
Thank you for your feedback and suggestions! I did not expect such a detailed response and it’s much appreciated.
fwiw, we sometimes use chamber
just to make it easy to update SSM
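chamber is essentially a thin CLI over SSM Parameter Store (SecureString parameters namespaced by service), so for context the boto3 equivalent looks roughly like this - service and key names are hypothetical:
```python
# Roughly what `chamber write my-service transunion_password ...` does under
# the hood: a SecureString parameter in SSM Parameter Store, namespaced by service.
import boto3

ssm = boto3.client("ssm")

ssm.put_parameter(
    Name="/my-service/transunion_password",  # hypothetical service/key
    Value="hunter2",
    Type="SecureString",
    Overwrite=True,
)

# chamber list/read equivalent
resp = ssm.get_parameters_by_path(Path="/my-service", Recursive=True, WithDecryption=True)
for param in resp["Parameters"]:
    print(param["Name"], "->", param["Value"])
```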
2022-04-26
Digging GKE Autopilot turning off a cluster with no running pods:
but when is that ever the case?
even system critical services are pods
GKE Autopilot killed all the nodes when the kube-system namespace was all that was running. kubectl still works, so they must have some smart stand-in service?
Budget chart went to zero, so I was impressed.
That makes sense!
2022-04-27
So I ran into the max IP limit for instances with EKS today. Seems fair, but it’s causing pods to fail without being rescheduled on another node. Is there some sort of taint or config flag I can add to push scheduling to another node when it’s close to that limit?
I found a flag that would let stale IPs from old pods be released faster, but the IP use is legitimate, so I don’t feel that’s necessary.
I’ve used this to increase the number of ip addrs: https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/
Learn how to significantly increase the number of IP addresses that you can assign to pods on each Amazon EC2 node in your cluster.
Sorry, I have plenty of free IP addresses in the pool; the problem is the limit of IP addresses allowed per instance. Rather than change the limit, isn’t it better for k8s to recognise that the node is “full” when scheduling and place the container elsewhere?
ok - what’s the instance type of your nodes and how many nodes are in your cluster?
t3.large and 9-10 depending on the type of day
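For context, with the VPC CNI the per-instance pod limit comes from the instance type’s ENI/IP limits: max pods = ENIs * (IPs per ENI - 1) + 2. As I understand it, the EKS-optimized AMI passes a matching --max-pods to kubelet so the scheduler treats the node as full rather than letting IP allocation fail, so it’s worth checking that your nodes’ kubelet limit actually matches what the CNI can satisfy. Quick sanity check for t3.large (3 ENIs with 12 IPv4 addresses each, per the published EC2 limits):
```python
# Sanity check of the standard EKS/VPC-CNI pod-limit formula:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2


print(max_pods(enis=3, ips_per_eni=12))  # t3.large -> 35 pods per node
```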