#kubernetes (2021-08)
Archive: https://archive.sweetops.com/kubernetes/
2021-08-03
Hi folks! I’ve got a Kubernetes cluster running on an old machine that I’m using as a home server. Kubernetes is undoubtedly overkill for my purposes, but I’m in this for the learning. My problem is the following:
I currently have a docker-compose file with a series of containers I use. One of them is a VPN container that I put in front of other containers (from a network perspective). When I want a container to sit behind this containerized VPN, I just add network_mode: service:vpn to that service in the docker-compose file. At the moment I’m trying to migrate these plain Docker containers to my Kubernetes cluster, but I have no clue how to do something similar for this VPN setup.
Any ideas?
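One way to get a similar effect in Kubernetes: containers in the same pod already share a network namespace, so the closest analog to network_mode: service:vpn is running the VPN client as a sidecar container in the same pod as the app. A minimal sketch, with placeholder image names and no real VPN config:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-behind-vpn
spec:
  containers:
    # VPN sidecar: typically needs NET_ADMIN to manage tunnel interfaces/routes.
    - name: vpn
      image: example/openvpn-client:latest   # placeholder image
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
    # The app container shares the pod's network namespace, so its traffic
    # follows whatever routes the vpn container sets up.
    - name: app
      image: example/my-app:latest           # placeholder image
```

Unlike docker-compose, a container can’t attach to another pod’s network namespace, so each workload that has to ride the VPN needs its own copy of the sidecar (or you route through the VPN at the node or gateway level instead).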
2021-08-12
Couldn’t find any discussion on this in the archives. Anyone here using it who has a quick set of pointers for a complete k8s n00b like myself? I’m about to play around with provisioning some applications on k8s and setting up DAPR & k8s locally with Lens too. Any other essentials or quick 101 articles would be welcome if you found something invaluable. I know that’s a broad thread topic, but as I’ve not used k8s before, I’m sure your experience might point me to a few resources that aren’t obvious to me.
2021-08-24
A great article that helped me today: https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/
It covers how pods and containers differ, how sidecar containers help in multi-container pods (with examples), and pod-to-pod communication.
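For the pod-to-pod side of it, the usual pattern is to put a Service in front of a pod and let other pods reach it by the Service’s DNS name. A minimal sketch, with made-up names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # other pods can reach it at http://backend:8080
spec:
  selector:
    app: backend         # matches pods labeled app=backend
  ports:
    - port: 8080         # port exposed by the Service
      targetPort: 8080   # port the backend container listens on
```

Within a single pod, containers skip all of that and just talk over localhost, which is what makes the sidecar pattern from the article work.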
2021-08-27
Hello, what’s your strategy for sizing Kubernetes node pools?
• do you use one node pool for system workloads and multiple node pools for everything else?
• how many pods do you set per node?
◦ for the default node pool
◦ for the other node pools
◦ the maximum looks to be 110; is this a recommendation or a hard limit? https://kubernetes.io/docs/setup/best-practices/cluster-large/
In AKS there is an option to taint the default node pool for system workloads only; I guess it’s mandatory to have it enabled in production: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#only_critical_addons_enabled
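For reference, that AKS option taints the system node pool with CriticalAddonsOnly=true:NoSchedule, so ordinary workloads get pushed onto the other pools; anything that should still land on the system pool needs a matching toleration. A rough sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon            # placeholder name
spec:
  tolerations:
    # Matches the taint applied to the system pool when
    # only_critical_addons_enabled = true.
    - key: "CriticalAddonsOnly"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: addon
      image: example/addon:latest # placeholder image
```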
The size of the node pool should be determined by the autoscaler.
We use Spot Ocean to automatically scale the node pools, and it will even launch heterogeneous nodes based on the actual resource requirements of the pods scheduled.
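Those scaling decisions are driven by the resource requests you set on your pods, so they are worth setting deliberately. A minimal sketch with made-up names and numbers:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                           # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:latest   # placeholder image
          resources:
            requests:                 # what the scheduler and autoscaler size nodes against
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```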
Thanks @Erik Osterman (Cloud Posse) I’ll look into it
Speaking for GCP, I can say that we rely on the cluster autoscaler, indeed. In some cases we use different node pools based on app requirements. We also hit the 110-pod limit in the past; the solution for us was to use smaller nodes.
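On the 110 figure: per the linked best-practices page it is a tested recommendation rather than a hard cap, and it also happens to be the kubelet’s default maxPods. On self-managed nodes you can change it via the kubelet configuration file (managed offerings like GKE and AKS expose their own per-cluster or per-node-pool settings instead); a sketch, with an example value:

```yaml
# Kubelet config file, e.g. /var/lib/kubelet/config.yaml (path varies by distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150   # example value; the default is 110
```

Keep in mind the node’s pod CIDR also has to be large enough to hand out an IP per pod.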
As an alternative there is GKE Autopilot, which handles node management for you, so you don’t even need to think about node pool size or node type: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview
Thank you for the extended answer on this topic in the DevOps office hours at 23:49: https://youtu.be/XooJLvzfdnY