#kubernetes (2022-06)
Archive: https://archive.sweetops.com/kubernetes/
2022-06-03
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Hi, I’m interested in how you all deal with memory-volatile applications running in Kubernetes. The application has different memory requirements depending on the request data: sometimes it needs 10Gi to execute a request successfully, and sometimes it needs 1Gi. Are there any other, more efficient solutions besides setting the resource request to 10Gi? I wonder if the Vertical Pod Autoscaler would be a better solution here …
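For context, the VPA mentioned here is configured roughly like this (object and target names are hypothetical); note that it derives requests from observed usage over time and recreates pods to apply them, so it wouldn’t resize a running pod for a single large request:

```yaml
# Hypothetical VPA sketch: lets the autoscaler set requests from usage history.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa             # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app               # hypothetical workload
  updatePolicy:
    updateMode: "Auto"      # evicts and recreates pods with updated requests
```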
![Reinholds Zviedris avatar](https://secure.gravatar.com/avatar/348ae8e0fd10fc2e78acbe448dd598b2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Why don’t you use both requests and limits? If the minimum is 1Gi, set that as the request and set the limit to 10Gi. That way things can grow when they need to, but the limit prevents unbounded growth and you’ll get an OOM event instead.
![Reinholds Zviedris avatar](https://secure.gravatar.com/avatar/348ae8e0fd10fc2e78acbe448dd598b2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Set the appropriate alerts and tune the requests/limits occasionally.
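A minimal sketch of that setup in the container spec, using the 1Gi/10Gi figures from the question:

```yaml
# Schedule on the typical footprint, cap at the worst case; the container
# is OOM-killed if it exceeds the limit.
resources:
  requests:
    memory: "1Gi"
  limits:
    memory: "10Gi"
```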
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Yes. If I understand correctly: the request is 1Gi, so the scheduler schedules the pod on a node that has 5Gi left. At some point the app needs 10Gi, but only a couple of Gi are left on the node. What happens?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
> Are there any other, more efficient solutions besides setting the resource request to 10Gi?
What follows is not exactly an answer to your question, but I think a viable alternative:
Consider using Karpenter and have pods request the upper limit (e.g. 10Gi) of what they need. Karpenter will spin up nodes right-sized to what you actually need based on existing capacity. This is different from the traditional cluster autoscaler in that it can manage a fleet of heterogeneous instances outside of an auto-scaling group.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Kubernetes Node Autoscaling: built for flexibility, performance, and simplicity.
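A rough sketch of a Karpenter Provisioner, assuming the v1alpha5 API current at the time (values are illustrative, not a recommendation):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Allow spot capacity for short-lived or interruptible work
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: "1000"           # cap on total provisioned capacity
  ttlSecondsAfterEmpty: 30  # scale empty nodes back down quickly
```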
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
I guess the financial feasibility of this sort of depends on how many concurrent pods will be making this request
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
That can also be mitigated by using spot instances if the lifetime is short or interruptible.
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Yes, thanks. Karpenter does “bin packing,” which is good. But we still need to request the upper limit of memory even though, most of the time, the app only uses a fraction of it.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Will bring up on #office-hours today
2022-06-08
![Andy avatar](https://avatars.slack-edge.com/2020-05-21/1161682414896_20498c74fddfeb29e652_72.jpg)
Any teams out there running PHP apps in Kubernetes? We’re currently running them as fat containers with php-fpm and nginx bundled together. A while back we attempted to use RoadRunner (2.4.1), but it caused the developers pain and they gave up trying.
For a bit more context:
• One such php-fpm & nginx service runs with ~20 pods at peak, serving 2k requests per minute
• Another peaks at around ~15 pods
Just curious what is considered best practice here:
• Do fat containers in k8s matter?
• Is splitting nginx out into its own deployment and service worth doing? (I guess nginx and php-fpm could then scale independently and get better resource usage)
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
Splitting them would be more efficient, since you don’t really need as many nginx instances as backend instances. If you split it into two deployments you can scale them independently.
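A rough sketch of that split (names, replica count, and image are hypothetical): php-fpm gets its own Deployment and Service, and nginx’s `fastcgi_pass` points at the Service DNS name instead of localhost.

```yaml
# Hypothetical php-fpm backend, scaled independently of nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-php-fpm
spec:
  replicas: 15
  selector:
    matchLabels: { app: app-php-fpm }
  template:
    metadata:
      labels: { app: app-php-fpm }
    spec:
      containers:
        - name: php-fpm
          image: example/app-php-fpm:latest   # hypothetical image
          ports:
            - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: app-php-fpm
spec:
  selector: { app: app-php-fpm }
  ports:
    - port: 9000
# In the nginx deployment's config: fastcgi_pass app-php-fpm:9000;
```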
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Will bring up on #office-hours today
2022-06-12
![Andrey Taranik avatar](https://avatars.slack-edge.com/2022-06-12/3669397553761_977b72e1ce855abe47df_72.png)
Hi guys! Hi @Erik Osterman (Cloud Posse) (maybe you remember me)! Short question, but it seems it hasn’t been discussed here.
Does anyone have real experience with secure container runtimes in production? All the stuff you’ve probably heard of: Kata Containers, firecracker-containerd, gVisor, and Cloud Hypervisor. It’s no problem to set up a secure runtime for simple tests, but for real workloads it can be tricky or even impossible. It would be great to discuss real working cases here.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Hey @Andrey Taranik - not something that’s come up for us
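For anyone evaluating this: the Kubernetes side is just a RuntimeClass plus an opt-in per pod; the hard part the question is about (installing and operating the runtime on the nodes, device and performance caveats) lives outside the manifest. A minimal sketch assuming gVisor’s `runsc` handler is already configured in containerd on the nodes:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc            # must match the handler name configured on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app     # hypothetical pod
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx
```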
![rontron avatar](https://secure.gravatar.com/avatar/9849d86452d4ecbeb1523a6c6ff72296.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
I haven’t played with this too much, but if I needed to further secure/harden our container environments, I would personally dig into distroless images.
Language focused docker images, minus the operating system.