#kubernetes (2022-06)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2022-06-03

Gabriel avatar
Gabriel

Hi, I am interested in how you deal with applications running in Kubernetes whose memory usage is volatile. The application has different memory requirements depending on the request data, so sometimes it needs 10Gi to successfully execute a request and sometimes it needs 1Gi. Are there any more efficient solutions than setting the resource request to 10Gi? I wonder if the Vertical Pod Autoscaler would be a better solution here …
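For reference, the Vertical Pod Autoscaler mentioned here is configured per workload. A minimal sketch, assuming the VPA components (recommender, updater, admission controller) are installed in the cluster; the target Deployment name `my-app` and the memory bounds are placeholders:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa            # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical workload
  updatePolicy:
    updateMode: "Auto"        # VPA evicts and recreates pods with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          memory: 1Gi
        maxAllowed:
          memory: 10Gi
```

Note that VPA derives its recommendations from observed usage over time, so it tends to help with drifting baselines more than with individual requests that spike from 1Gi to 10Gi.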

Reinholds Zviedris avatar
Reinholds Zviedris

Why don’t you use both - requests and limits? If the minimum is 1GB, set that as the request and set the limit to 10GB. That way usage can grow when it needs to, but the limit keeps it from growing unbounded, and you’ll get an OOM event instead.

Reinholds Zviedris avatar
Reinholds Zviedris

Set appropriate alerts and tune the requests/limits occasionally.
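As a concrete illustration of the requests/limits approach described above, a minimal container spec could look like the following; the pod and image names are placeholders, and the values come from the numbers in this thread:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-volatile-app        # placeholder name
spec:
  containers:
    - name: app
      image: example/app:latest    # placeholder image
      resources:
        requests:
          memory: 1Gi              # what the scheduler reserves on the node
        limits:
          memory: 10Gi             # hard cap; exceeding it gets the container OOM-killed
```

The trade-off Gabriel raises next follows from this: the scheduler only accounts for the 1Gi request, so nothing guarantees the node actually has 10Gi free when the app bursts.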

Gabriel avatar
Gabriel

Yes. But if I understand correctly: the request is 1Gi, so the scheduler may schedule the pod on a node that only has 5Gi left. At some point the app needs 10Gi, but only a couple of Gi are left on the node. What happens then?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Are there any other, more efficient solutions except setting the resource request to 10Gi?
What follows is not exactly an answer to your question, but I think a viable alternative:

Consider using Karpenter and have pods request the upper limit (e.g. 10Gi) of what they need. Karpenter will spin up nodes right-sized to what you actually need, based on existing capacity. This is different from the traditional cluster autoscaler in that it can manage a fleet of heterogeneous instances outside of an autoscaling group.
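For context, a Karpenter Provisioner from roughly that era (the v1alpha5 API) looked something like the sketch below. Field names should be checked against the installed Karpenter release, and the selectors, limits, and discovery tag are only illustrative:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      memory: 1000Gi              # cap on total memory Karpenter may provision
  ttlSecondsAfterEmpty: 30        # tear empty nodes down quickly
  provider:                       # AWS-specific settings for this Karpenter version
    subnetSelector:
      karpenter.sh/discovery: my-cluster       # placeholder discovery tag
    securityGroupSelector:
      karpenter.sh/discovery: my-cluster
```

With pods requesting their upper bound, Karpenter picks instance types that fit the pending pods rather than scaling a fixed node group.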

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws/karpenter

Kubernetes Node Autoscaling: built for flexibility, performance, and simplicity.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I guess the financial feasibility of this sort of depends on how many concurrent pods will be making this request

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That can also be mitigated by using spot instances if the lifetime is short or interruptible
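If going the spot route with Karpenter, pods can be steered onto spot capacity via the node label Karpenter applies; a minimal sketch, with placeholder names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker                   # placeholder name
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot   # label Karpenter sets on spot-backed nodes
  containers:
    - name: worker
      image: example/worker:latest     # placeholder image
      resources:
        requests:
          memory: 10Gi                 # request the upper bound, as suggested above
```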

Gabriel avatar
Gabriel

Yes, thanks. Karpenter does “bin packing”, which is good. But we still need to request the upper limit of memory, even though most of the time the app only uses a fraction of it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will bring up on #office-hours today

2022-06-08

Andy avatar
Andy

Any teams out there running PHP apps in Kubernetes? We’re currently running them as fat containers with php-fpm and nginx bundled together. A while back we attempted to use RoadRunner (2.4.1), but it caused developers pain and they gave up trying.

For a bit more context:

• One such php-fpm & nginx service runs with ~20 pods at peak, serving 2k requests per minute

• Another peaks at around ~15 pods

Just curious as to what is considered best practice here:

• Do fat containers in k8s matter?

• Is splitting nginx out into its own deployment and service worth doing? (I guess nginx and php-fpm could then scale independently and get better resource usage)

Gabriel avatar
Gabriel

Splitting them would be more efficient, since you don’t really need as many nginx instances as backend instances. If you split it into two deployments, you can scale them independently.
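A rough sketch of that split, assuming nginx talks FastCGI to php-fpm over the cluster network; all names, replica counts, and ports are illustrative, and nginx would get its own Deployment/Service scaled separately:

```yaml
# php-fpm gets its own Deployment and Service; the nginx config then points
# fastcgi_pass at "php-fpm:9000" instead of a localhost socket.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-fpm
spec:
  replicas: 20                          # scaled independently of nginx
  selector:
    matchLabels:
      app: php-fpm
  template:
    metadata:
      labels:
        app: php-fpm
    spec:
      containers:
        - name: php-fpm
          image: example/php-app:latest   # placeholder app image
          ports:
            - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: php-fpm
spec:
  selector:
    app: php-fpm
  ports:
    - port: 9000
      targetPort: 9000
```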

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will bring up on #office-hours today

2022-06-12

Andrey Taranik avatar
Andrey Taranik

Hi guys! Hi @Erik Osterman (Cloud Posse) (maybe you remember me)! Short question, but it seems it wasn’t discussed here.

Does anyone have real experience with secure container runtimes in production? All that stuff you’ve probably heard of - Kata Containers, firecracker-containerd, gVisor, and Cloud Hypervisor. There’s no problem setting up a secure runtime for simple tests, but for real workloads it can be tricky or even impossible. It would be great to discuss real working cases here.
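For anyone evaluating these, the Kubernetes-side wiring is usually a RuntimeClass plus a per-pod opt-in. A minimal sketch, assuming a gVisor (runsc) handler has already been configured in containerd on the nodes; the handler and pod names are placeholders that must match your node setup:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                     # must match the handler name in the node's CRI config
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app              # placeholder name
spec:
  runtimeClassName: gvisor         # opt this pod into the sandboxed runtime
  containers:
    - name: app
      image: example/app:latest    # placeholder image
```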

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey @Andrey Taranik - not something that’s come up for us

rontron avatar
rontron

I haven’t played with this too much, but if I needed to further secure/harden our container environments, I would personally dig into distroless images

https://github.com/GoogleContainerTools/distroless

GoogleContainerTools/distroless

Language focused docker images, minus the operating system.

2022-06-14
