#kubernetes (2024-10)
Archive: https://archive.sweetops.com/kubernetes/
2024-10-10
Does anyone have any good resources on the considerations between having an ArgoCD instance per cluster vs a single ArgoCD orchestration instance for multiple clusters? I found this vid. https://www.youtube.com/watch?v=bj9qLpomlrs
I heard once that someone tried to delete an Argo CD instance that had been “accidentally” deployed into the default namespace, which in turn messed up the entire cluster.
Not sure if it fits into the discussion, but it may be worth thinking along those lines. And what about separating the lifecycle of prod and non-prod environments?
Too funny. The more people hype this magical kit, the more it looks exactly the same as everything else. It’s not like those external influences on your internal operational mandates are changing: availability, reliability, security, compliance, vendor evaluations, etc.
Pretty much, yeah. Despite this, I think I will go down this road because I need those Argo features: canary releases, instrumenting the release lifecycle with e2e tests that run even before merging to master and again after release, plus other tests, rollbacks, custom workflows, etc.
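(For context, canary releases in the Argo ecosystem come from Argo Rollouts rather than Argo CD itself. Below is a minimal sketch of a canary strategy; the app name, image, weights, and pause durations are illustrative, not taken from this thread.)
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-app            # hypothetical name, for illustration only
spec:
  replicas: 5
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.27    # placeholder image
          ports:
            - containerPort: 80
  strategy:
    canary:
      steps:
        - setWeight: 20        # shift ~20% of replicas/traffic to the new version
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        # the rollout promotes to 100% after the final step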
It’s fine. Enough retrospectives get folks into shape over time. Leaning on learned lessons and industry standards isn’t important. I love an innovator, but that activity belongs in the application layer, not anywhere near ops reliability concerns.
The thought of having an Argo CD instance per LOB / business-driven separation of concerns isn’t bad.
Have you seen this article? https://codefresh.io/blog/a-comprehensive-overview-of-argo-cd-architectures-2024/
Planning to deploy Argo CD and support a lot of Kubernetes clusters? In this article, we’ll cover the different deployment strategies and architectures used along with their pros and cons. A Comprehensive Overview of Argo CD / GitOps Architectures – 2024 Hub and Spoke Standalone Split-Instance Control Plane Guidelines for Scaling with Argo CD First […]
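(For reference, in the hub-and-spoke setup the article covers, a single Argo CD instance targets remote clusters via declarative cluster secrets like the sketch below; the server URL, token, and CA data are placeholders, and the same registration can also be done with argocd cluster add.)
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster                               # illustrative name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: prod
  server: https://prod-cluster.example.com:6443    # placeholder API endpoint
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded-ca-cert>"
      }
    }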
2024-10-16
2024-10-17
RESOLVED Hey guys, I hope everyone is having a nice day. Has anyone here used the cloudnative-pg operator to run your Postgres cluster before? I have a running cluster, but for the life of me I can’t figure out how to properly expose it for outside use, and I’ve been at it for a few days now. I am able to connect to it using kubectl port forwarding. I am running on AWS EKS and here is my config. It’s bare minimum and it even manages to create the load balancer with the external IP, but I just can’t connect to it; even using telnet I’m not able to reach it:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  managed:
    services:
      disabledDefaultServices: ["ro", "r"]
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cluster-example-rw-lb
              annotations: # this is the fix!
                service.beta.kubernetes.io/aws-load-balancer-type: external
                service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
                service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
            spec:
              type: LoadBalancer
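(A quick way to sanity-check connectivity through the new LoadBalancer service, assuming CNPG’s default bootstrap, which creates an app database/user and a cluster-example-app secret; the hostname placeholder is whatever EXTERNAL-IP shows on the service.)
# Grab the generated password for the default "app" user
kubectl get secret cluster-example-app -o jsonpath='{.data.password}' | base64 -d; echo
# Connect through the NLB DNS name shown as EXTERNAL-IP on the service
psql "host=<LoadBalancer_DNS> port=5432 user=app dbname=app sslmode=require"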
What is the output of kubectl get svc in the namespace?
@Hao Wang hey man! Long time no see! Haha!
Here’s what I’m getting (redacted some of it):
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP        PORT(S)          AGE
cluster-example-rw      ClusterIP      Internal_IP   <none>             5432/TCP         55m
cluster-example-rw-lb   LoadBalancer   Internal_IP   LoadBalancer_DNS   5432:30594/TCP   55m
kubernetes              ClusterIP      Internal_IP   <none>             443/TCP          185d
Yeah, it’s been a long time. :slightly_smiling_face: Seems the security group of LoadBalancer_DNS doesn’t allow your access.
Hey @Hao Wang! How did you know based on the get svc output? I would also like to know (my bad, still a noob at this).
@Hao Wang thanks for the help again, I have managed to expose it properly this time! But I still don’t know how you figured out it was the security group based on the kubectl get svc output.
got the answer in private:
LB is showing the external IP so it’s safe to assume it’s working fine and the security group was the issue
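(For anyone hitting the same thing: since the service’s EXTERNAL-IP was populated, the NLB itself was created fine, so the remaining suspect is network access. A rough sketch of checking and opening the NLB’s security group with the AWS CLI; the group ID and client CIDR are placeholders.)
# List load balancers with their DNS names and attached security groups,
# then match the DNS name against the service's EXTERNAL-IP
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].[LoadBalancerName,DNSName,SecurityGroups]' --output table
# Allow inbound PostgreSQL (5432) from your client IP on that security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 --cidr 203.0.113.10/32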
2024-10-18
2024-10-19
2024-10-28
Anyone using CAST AI to scale node pools and use spot instances? Looking for something open source, but there’s nothing quite like it.
How about Karpenter?
I think it lacks the intelligence to check if spot instances are available again and move the workloads back there?
yeah, this is the missing part, AWS may have a workaround for that
GKE is my only playground
This may be what you’re looking for, https://aws.amazon.com/blogs/compute/applying-spot-to-spot-consolidation-best-practices-with-karpenter/
This post is written by Robert Northard – AWS Container Specialist Solutions Architect, and Carlos Manzanedo Rueda – AWS WW SA Leader for Efficient Compute Karpenter is an open source node lifecycle management project built for Kubernetes. In this post, you will learn how to use the new Spot-to-Spot consolidation functionality released in Karpenter v0.34.0, […]
Now that Karpenter is past 1.0 and this feature has been supported since 0.34, it should be stable
or almost
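(Roughly what the linked post describes, sketched against the Karpenter v1 API: a spot-only NodePool with consolidation enabled. The pool name, instance categories, and timings are illustrative, and spot-to-spot consolidation also needs the SpotToSpotConsolidation feature gate enabled on the controller plus enough instance-type flexibility in the requirements.)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: spot-flexible              # illustrative name
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]  # broad flexibility helps spot-to-spot consolidation
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default              # assumes an EC2NodeClass named "default" exists
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 5m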
Karpenter is developed by AWS, and now Azure is starting to support it
Ohh I see GKE is not supported
GKE may have another autoscaler
2024-10-29
2024-10-31
Hi guys. I am launching KubeWhisper - AI CLI kubectl assistant tomorrow. The waitlist is open until the end of day today. Sign up if you would like to try it for free. I’d appreciate any feedback, comments, concerns, etc…
Waitlist: https://brankopetric.com/kubewhisper
Joined!
Thanks. I’ll share installation guide shortly. :)