#kubernetes (2022-04)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2022-04-04

Zachary Loeber avatar
Zachary Loeber

https://github.com/kris-nova/kaar <– This looks pretty nifty. Perhaps useful as a helm post-processing tool to bundle the output of complex/multi-image helm charts to a local registry as a singular image? Dunno but keeping it on my radar for sure.

kris-nova/kaar

Kubernetes Application Archive

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah, by Kris Nova - she writes a number of cool tools.

2022-04-11

Van Johnson avatar
Van Johnson

Hi folks,

I’m looking for a K8s (EKS) solution for mounting an S3 bucket as a volume. I’ve found the Dataset CRD (Datashim) and kube-s3, and I’m leaning towards Datashim based on its community/documentation. I need something reasonably close to a POSIX fs; IO performance of the volume is not important. The S3 objects are just runtime configs.

Is there another tool out there that I am missing?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve used Goofys back in the day as a sidecar mounting S3.
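A minimal sketch of that sidecar pattern, in case it helps (the image and bucket names are placeholders, and it assumes you build an image with goofys in it; since goofys is a FUSE filesystem, the sidecar runs privileged and shares the mount via mountPropagation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-config-example
spec:
  volumes:
    - name: s3
      emptyDir: {}
  containers:
    # Sidecar: mounts the bucket with goofys (FUSE) into the shared volume.
    # AWS credentials come from the node role or IRSA, as usual.
    - name: goofys
      image: example/goofys:latest                              # placeholder image
      command: ["goofys", "-f", "my-config-bucket", "/mnt/s3"]  # placeholder bucket
      securityContext:
        privileged: true                   # FUSE needs /dev/fuse
      volumeMounts:
        - name: s3
          mountPath: /mnt/s3
          mountPropagation: Bidirectional  # propagate the FUSE mount back to the pod
    # App container sees the bucket contents read-only.
    - name: app
      image: example/app:latest            # placeholder
      volumeMounts:
        - name: s3
          mountPath: /etc/certs
          readOnly: true
          mountPropagation: HostToContainer
```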

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But it also begs the question: what is the original business case you are solving?

this1
Van Johnson avatar
Van Johnson

uhh, I was waiting for (and afraid of) that question. A little background: we (https://www.streetshares.com/) are a SaaS platform that interacts with third-party integrations (e.g. TransUnion) that require a lot of secure connection practices. So like passwords + SSL certs + other stuff.

Handling all the certs can be a pain. To get an MVP out asap, we tossed them in S3 and just used boto to grab them. We got to the point where we needed to get the files onto the pod and cache them.

I think it’s easier from a dev perspective to have the certs local to the pod. Thanks for asking about the business case; I’d love to hear ideas that are better than what we came up with.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s interesting! Thanks for adding the context; I see why you want to go this route. I think I would look at something that can sync/replicate those to Kubernetes as native resources. The project you found (Datashim) aligns with what I would look at doing today (having never heard of it before). I’m not crazy about kube-s3, going by the project description.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Now this is assuming you want/need S3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If, on the other hand, you used SSM/ASM, there are too many projects to list.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

external-secrets being the dominant controller

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
external-secrets/kubernetes-external-secrets

Integrate external secret management systems with Kubernetes

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What I like about using external-secrets are the tangential benefits: you can now use it for everything else too.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, it writes Secrets, which on EKS support envelope encryption.

this1
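Roughly what that could look like with the newer external-secrets operator API (the store, secret, and key names below are made up, and it assumes a ClusterSecretStore pointing at AWS Secrets Manager already exists):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: transunion-client-cert               # made-up name
spec:
  refreshInterval: 1h                        # re-sync from ASM periodically
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager                # assumes this store is already configured
  target:
    name: transunion-client-cert             # the Kubernetes Secret that gets created/updated
  data:
    - secretKey: tls.crt
      remoteRef:
        key: integrations/transunion/client-cert   # made-up ASM secret name
    - secretKey: tls.key
      remoteRef:
        key: integrations/transunion/client-key    # made-up ASM secret name
```

The app then mounts that Secret as a regular volume, which gets the certs local to the pod without touching S3 at runtime.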
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Datashim doesn’t support Secrets, so that could be a non-starter, since it would use ConfigMaps.

1
Van Johnson avatar
Van Johnson

I do like external-secrets. We recently had to migrate ours from the godaddy days. I’ll double back on my side, because I can’t remember why we didn’t go with ASM before. I think it was just not having a tool or internal library to update the secrets.

Thank you for your feedback and suggestions! I did not expect such a detailed response and it’s much appreciated.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, we sometimes use chamber just to make it easy to update SSM
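For example (service and key names here are made up), something like `chamber write myapp transunion_cert -` writes a value read from stdin into SSM, and `chamber exec myapp -- ./app` exposes everything under that path to the app as environment variables.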

2022-04-26

Jim Park avatar
Jim Park

Digging GKE Autopilot turning off a cluster with no running pods.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but when is that ever the case?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

even system critical services are pods

Jim Park avatar
Jim Park

GKE Autopilot killed all the nodes when the kube-system namespace was all that was running. kubectl still works, so they must have some smart stand-in service?

1
Jim Park avatar
Jim Park

Budget chart went to zero, so I was impressed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That makes sense!

2022-04-27

jedineeper avatar
jedineeper

So I ran into the max IP limit for instances with EKS today. Seems fair, but it’s causing pods to fail without being rescheduled on another node. Is there some sort of taint or config flag I can add to push the scheduler to place pods on another node if one is close to that limit?

jedineeper avatar
jedineeper

I found a flag that would let stale IPs from old pods be released faster, but the IP use is legitimate, so I don’t feel that’s necessary.

uncanny-edition avatar
uncanny-edition
Increase the amount of available IP addresses for your Amazon EC2 nodes - Amazon EKS

Learn how to significantly increase the number of IP addresses that you can assign to pods on each Amazon EC2 node in your cluster.

jedineeper avatar
jedineeper

Sorry, I have plenty of free IP addresses in the pool; the problem is the limit on IP addresses allowed per instance. Rather than change the limit, wouldn’t it be better for K8s to recognise that the node is “full” when scheduling and place the container elsewhere?

uncanny-edition avatar
uncanny-edition

ok - what’s the instance type of your nodes and how many nodes are in your cluster?

jedineeper avatar
jedineeper

t3.large and 9-10 depending on the type of day
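For context, assuming the default VPC CNI without prefix delegation: a t3.large has 3 ENIs with 12 IPv4 addresses each, and EKS derives max pods per node as ENIs × (IPs per ENI − 1) + 2, i.e. 3 × 11 + 2 = 35. As long as the kubelet’s --max-pods matches that number, the scheduler should treat the node as full and place new pods elsewhere; the prefix delegation doc linked above is the way to raise the ceiling instead.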
