#kubernetes (2024-11)
Archive: https://archive.sweetops.com/kubernetes/
2024-11-13
Hello, has anyone here managed to deploy Apache Kafka using Strimzi operator in AWS EKS? I’ve managed to deploy a cluster but I need to expose the consumer’s port to the outside world but I can’t find an example that I could follow
I’ve used it before. I don’t recall using it for that particular use case though. You should be able to expose the service externally. Do you not see an option with their provided CRDs?
Does this not work? https://doc.crds.dev/github.com/strimzi/strimzi-kafka-operator/kafka.strimzi.io/Kafka/[email protected]#spec-kafka-listeners-type
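For what it's worth, external access is configured under the Kafka CR's `listeners`. A minimal sketch of a load-balancer listener with SCRAM auth (cluster name and port here are illustrative, and the rest of the Kafka spec — replicas, storage, etc. — is omitted):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...replicas, storage, zookeeper/KRaft config omitted...
    listeners:
      - name: external
        port: 9094
        type: loadbalancer   # on EKS this provisions an AWS load balancer
        tls: true
        authentication:
          type: scram-sha-512
```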
Automatic documentation for your CustomResourceDefinitions.
Hi @venkata.mutyala, could I see your configs on how you exposed it and how you enabled the secured connection (like having scram-sha-512 password based Auth because sasl_plaintext is not supported) please? I have been trying to look for guides but I’m having a really tough time looking for one :( I feel overwhelmed by their documentation
I didn’t expose it secured like you are trying to. We used it internally within the cluster only.
Ohh so there’s no password of any kind and simply connect to it directly within the cluster? If the Devs need access they would use port forward?
Our use case was primarily to replicate data between databases via Kafka + Kafka Connect + Debezium connectors. If you are exposing it publicly, you will definitely want to limit access.
Even in our use case we probably should have had auth but we didn’t.
here is something that i was able to find off github for your situation: https://github.com/ibm-mas/ansible-devops/blob/1792e98b77b32185c486da66a31c4c89fb1[…]bfcf/ibm/mas_devops/roles/kafka/templates/redhat/masuser.yml.j2
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: maskafka-credentials
  namespace: "{{ kafka_namespace }}"
data:
  username: "{{ kafka_user_name | b64encode }}"
  password: "{{ kafka_user_password | b64encode }}"
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: "{{ kafka_user_name }}"
  labels:
    strimzi.io/cluster: "{{ kafka_cluster_name }}"
  namespace: "{{ kafka_namespace }}"
spec:
  authentication:
    type: scram-sha-512
    password:
      valueFrom:
        secretKeyRef:
          name: maskafka-credentials
          key: password
Let me know how it goes.
looks like they committed that file in the past 3 months so decent shot it could be a working example
Thanks man! Strimzi also has this User resource (forgot the exact name haha), I think it’s connected to enabling the security
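In case it helps anyone later: once the KafkaUser exists, a client connects over the TLS listener with the standard Kafka SCRAM client properties, something like this (bootstrap address, user name, and password are placeholders):

```properties
bootstrap.servers=<external-bootstrap-address>:9094
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="my-user" \
  password="changeit";
```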
2024-11-14
2024-11-26
I notice the k3s installer is primarily just a bash script that does everything. Does anyone here use k3s in prod? If so, how did you deploy it? Their official bash script? Or did you roll your own automation that created all the necessary configs/files (e.g. systemd units)?
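For context, the official script mostly downloads the binary and drops a systemd unit, so templating that yourself is feasible. A trimmed sketch of roughly what the installer writes to /etc/systemd/system/k3s.service (exact contents vary by version and flags):

```ini
[Unit]
Description=Lightweight Kubernetes
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/k3s server
KillMode=process
Restart=always

[Install]
WantedBy=multi-user.target
```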
Dang wasn’t even thinking about nix. That’s an interesting idea
Strongly recommend it!
2024-11-27
Hi, quick question: any ideas on how people manage updating configs for APIs and similar — databases, caches, URLs, etc.? Getting tired of writing ConfigMaps. Has anyone automated that?
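One common way to stop hand-writing ConfigMaps is Kustomize's configMapGenerator, which builds them from literals or files and appends a content hash to the name so rollouts pick up changes. A minimal sketch (names and values are placeholders):

```yaml
# kustomization.yaml
configMapGenerator:
  - name: api-config
    literals:
      - DATABASE_URL=postgres://db.internal:5432/app
      - CACHE_HOST=redis.internal
    files:
      - config/app.properties
```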
Hey - we last implemented a test library for our k8s/OCP clusters in like 2019; it’s a bunch of Python scripts/Ansible modules which perform assertions using the k8s API and sometimes the cloud provider APIs. We generally run the tests on a schedule a few times a day on our prod clusters, and run them before and after upgrades (less useful than the monitoring we have, but still handy for more niche cases that monitoring can’t cover). Seems an old-fashioned way to do it but it works.
Time’s come to update it, maybe reimplement with some other tech.
What are some other approaches people have seen or are using? Any recommendations welcome!
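The assertion-style checks described above can be sketched like this — the check itself is a pure function over node data, so it can be unit-tested without a cluster (the live-usage snippet in the comment assumes the `kubernetes` client package and a reachable cluster):

```python
def unready_nodes(nodes):
    """Return names of nodes whose Ready condition is not 'True'.

    `nodes` is a list of dicts shaped like Kubernetes NodeList items
    (each with .status.conditions entries holding 'type'/'status').
    """
    bad = []
    for node in nodes:
        conditions = node.get("status", {}).get("conditions", [])
        ready = next((c for c in conditions if c.get("type") == "Ready"), None)
        if ready is None or ready.get("status") != "True":
            bad.append(node["metadata"]["name"])
    return bad


if __name__ == "__main__":
    # Live usage would look roughly like this:
    #   from kubernetes import client, config
    #   config.load_kube_config()
    #   items = client.CoreV1Api().list_node().to_dict()["items"]
    #   assert unready_nodes(items) == [], "some nodes are not Ready"
    sample = [
        {"metadata": {"name": "node-a"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "node-b"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]
    print(unready_nodes(sample))  # → ['node-b']
```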