#kubernetes (2021-01)
Archive: https://archive.sweetops.com/kubernetes/
2021-01-01
2021-01-02
2021-01-04
2021-01-05
2021-01-06
TIL … https://medium.com/better-programming/amazon-eks-is-eating-my-ips-e18ea057e045, was wondering where all our IP addresses were going
Understand how AWS EKS manages IP addresses and what you can do about it
Easy workaround: just use IPv6-only k8s clusters and AWS VPCs.
Oh, wait.
I got in touch with support - they also recommended tuning MINIMUM_IP_TARGET and WARM_IP_TARGET on the aws-node daemonset as another option. The downside of this option is the risk that the EC2 API calls may get throttled if they are not set properly or there is a lot of pod churn.
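For reference, a minimal sketch (not from the thread) of how those CNI settings can be applied to the aws-node daemonset; the values are placeholders, not recommendations, and as noted above, setting them poorly risks throttled EC2 API calls:
```
# set the warm-pool tuning knobs on the AWS VPC CNI daemonset (values are illustrative)
kubectl -n kube-system set env daemonset/aws-node MINIMUM_IP_TARGET=10 WARM_IP_TARGET=2

# confirm they took effect
kubectl -n kube-system describe daemonset aws-node | grep -E 'MINIMUM_IP_TARGET|WARM_IP_TARGET'
```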
Kind of related.. I thought I’d posted this in here but I don’t see it? https://ec2throughput.info/ https://github.com/jfreeland/ec2-network-monitor <- I use Datadog and these metrics from ethtool aren’t being exposed by the dd agent yet
Interesting - is it possible to get number of IPs used from these metrics?
I don’t see that in ethtool
I might just not be looking hard enough though
No, I don’t think you can get the number of IPs, unfortunately
it’s only “kind of” related
but relevant to ec2/eks networking in general
and something that bit me in december and amazon only announced that ethtool is surfacing these metrics on december 10
2021-01-07
2021-01-08
2021-01-10
2021-01-11
I just took the CKAD certification exam! Ask me anything! (The material is under NDA, so I can’t be specific, but general is okay)
Way to go. Did you work through the Linux Foundation training course? I’m wondering what gaps I might have, as I’ve just started studying for it (bought the course and exam on discount). I’ve been running managed k8s for the company for a couple of years now, and kubespray etc. at home too.
yeah, what was your study program, if any?
Recently I took the Certified Kubernetes Application Developer (CKAD). I used a number of things to study:
- using my O’Reilly subscription I went through Ben Muschko’s prep course a couple times:
- course: https://learning.oreilly.com/learning-paths/learning-path-certified/9781492061021/
- sample code https://github.com/bmuschko/ckad-prep
Microservices architecture is one of the hottest areas of application development today, particularly for cloud-based enterprise-scale applications. The benefits of building applications using small, single-purpose services are well documented and…
Exercises demonstrated as part of the video course “Certified Kubernetes Application Developer (CKAD) Prep Course” published by O’Reilly Media. - bmuschko/ckad-prep
- also the Linux Foundation course LFD259 https://trainingportal.linuxfoundation.org/learn/course/kubernetes-for-developers-lfd259/
- people on Reddit gave me valuable real-world info about the test. You only interact with the proctor via text window. It’s pretty intense. It’s a game of knowing which questions to spend your time on and when to punt and come back later. Be super organized!
- lastly I worked through dgkanatsios exercises. These were very helpful! https://github.com/dgkanatsios/CKAD-exercises
The Linux Foundation online learning classes
A set of exercises to prepare for Certified Kubernetes Application Developer exam by Cloud Native Computing Foundation - dgkanatsios/CKAD-exercises
You have to be FAST. I wasn’t fast enough, so I didn’t pass. Which is cool - I want my passing grade to mean something when I re-take it.
@kskewes if you study AND you’ve been using K8s for a while you might be okay
I’m now building another Study Path, split roughly 50/50 between AWS and Kubernetes, with a focus on real-world developer/devops experience. Yesterday was AWS Lightsail. Did you know you can now use it with containers? For me Lightsail isn’t useful, but I was very happy to learn what it’s useful for in comparison with other tools.
Thanks heaps John!
2021-01-12
2021-01-14
Hello, can someone share experience with ArgoCD? Or a similar product?
Hello, yes we did. https://medium.com/qonto-way/how-we-scaled-our-staging-deployments-with-argocd-a0deef486997
How do we deploy a full environment composed of ~100 containers in around 3 minutes?
Thanks, will read. How do people promote? Separate MR with semver to use? Seems… toilsome?
thanks @Issif
@kskewes we deploy the last version on the master branch; the ref is the commit id
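For anyone unfamiliar with the pattern being described, a hedged sketch of an Argo CD Application that tracks the head of a branch, so “promotion” is just merging to that branch; the repo URL, paths and names are placeholders:
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deployments.git   # placeholder repo
    targetRevision: master      # Argo CD resolves this to the current commit id
    path: staging/app
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated: {}               # auto-sync on new commits to master
```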
2021-01-15
2021-01-18
In an environment with high pod churn, Prometheus metrics might get bloated with lots of short-lived, pod-labeled time series… does anyone else do something to tackle this or just live with it?
I feel a per-pod metric, outside of its deduplication purpose, is of very little interest really except possibly in a very narrow “live” monitoring sense…
Meanwhile it’s not as impactful for statefulsets, since you can give them a not-so-dynamic number label instead…
Prometheus is, yeah? …
Was wondering if anyone did something creative outside of just keeping retention low in the first layer Prometheus (due to the potential amplification of number of time series caused by pod names being unique).
Have been on Aurora too historically, where it was never an issue because every instance (“pod”) was numbered rather than uniquely named, so the number of time series never really had a unique contributor like it does when including a pod name (which potentially can have really high churn).
Node IPs would be a factor if those are included in labels but that’s also a bit of a limited problem because nodes are usually from a limited IP pool and thus would also see re-use rather than being uniquely fabricated.
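One approach (a sketch, not something mentioned in the thread): keep the short-retention first-layer Prometheus as-is and pre-aggregate the metrics you care about long term with recording rules that drop the unique pod name, so only stable labels survive into dashboards or a long-term store. The metric name below is hypothetical:
```
groups:
  - name: pod-churn-aggregation
    rules:
      # hypothetical high-cardinality metric, summed across pods so the
      # long-lived series carry no unique pod/instance labels
      - record: namespace_job:http_requests_total:sum
        expr: sum without (pod, instance) (http_requests_total)
```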
2021-01-19
2021-01-20
Hi All - I am trying to set up a dual-stack IPv6/IPv4 cluster and am using the kubeadm config file below. When I try to run kubeadm init with this config file I get the error below. Could anyone please help me sort out this issue? What am I doing wrong?
A word of advice: If you use Shortcuts -> Create a text snippet it will let the snippet shrink so the readers aren’t overwhelmed by a wall of text.
The Shortcuts button is the one that looks like a lightning bolt when you are writing a message.
Updated it. Thanks for the suggestion, I was not aware of this option
Was this config written by you or provided as an example somewhere?
Normally when setting out on doing something I’ve never done before (in this case, the dual ipv4/ipv6 stuff), I’ll start from an example on a docs website, medium article, etc that I know works (or is supposed to at least) and work my way up from there.
I am not able to get a proper working config. I built this one myself by referring to multiple links. That’s why I wanted to confirm whether this config is correct.
sorry, no idea
hmm no probs thanks for looking into it.
A quick search resulted in this article that has an example kubeadm config file (though the config file looks really small, but maybe that’s all you need to get a working cluster started that you can then add to)
After spending three sleepless nights trying to get my Kubernetes cluster to handle IPv4 and IPv6 connections, and since there’re…
I stumbled upon this one as well. I managed to deploy, but the K8s Service did not get an IPv6 address..
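For comparison, a hedged sketch of the dual-stack pieces a kubeadm config needed in the ~1.20 era being discussed; the CIDRs and advertise address are placeholders, and IPv6DualStack was still a feature gate at the time:
```
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10          # placeholder control-plane address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true                     # still gated in this era
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```
Note also that even on a dual-stack cluster, a Service only gets an IPv6 address if its spec asks for one (ipFamilies / ipFamilyPolicy), which may explain the Service behaviour mentioned above.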
2021-01-22
Has anyone used this flag, new-pod-scale-up-delay, with the cluster-autoscaler?
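For context, a hedged sketch of where that flag sits in the cluster-autoscaler deployment; the image tag and delay value are illustrative only:
```
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.3   # illustrative tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --expander=least-waste
      # pods younger than this are ignored when estimating scale-up,
      # which smooths out bursts of short-lived pending pods
      - --new-pod-scale-up-delay=60s
```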
Anyone using pritunl? Any feedback on this tool / the paid options?
I’m getting pushback from a client’s auditing team that Tailscale is not PCI compliant (still working through that with them). But I’m looking for any ammo from real experience on why not to use pritunl . I feel like I’ve heard folks discuss it here before and weigh in on pros / cons, but can’t seem to find it.
Pritunl is great. I only used the OVPN side of it though. Didn’t touch Pritunl Zero.
Using it and deploying it. Pritunl solves one key issue when handling CAs: how to send the secret in a secure way and avoid sending it by email - a one-time click link, optionally with a PIN, and also MFA. I would like to test the Enterprise solution, since it’s far better when using SSO. The bad side: I would like better insight into logs and monitoring. I noticed most of the files are in temp, but that needs more work.
Tailscale relies on WireGuard. I didn’t like that when you are disconnected it doesn’t show it! Pritunl uses OpenVPN, which may not be as fast as WireGuard, but you can see when you get disconnected. Besides, it supports WireGuard too.
Also, with Pritunl you manage the whole scope yourself vs Tailscale. This can turn into a risk if you want a failover solution: you will need to manage a MongoDB cluster. I would rather have PostgreSQL or MySQL as the backend.
2021-01-25
Hi, I’m trying out EKS Fargate to get an impression of how it behaves. I’m experiencing:
• Pending to ContainerCreating takes around the 60-70s mark. A bit slower than I anticipated, but seems OK based on what I read online.
• ContainerCreating to Running takes over 4 minutes. Very slow. The pod consists of 3 containers totalling ~800MB (compressed, based on ECR data). Not ‘tiny’, but far from outrageous I’d say.
Nothing strange in the namespace or pod events, although it looks like it’s always the same container whose pull takes excessively long. It’s a node.js app, so the number of files is a possible cause I can think of. Then again, on regular EC2 workers this doesn’t seem to be a problem.
Does this very slow image download sound familiar to anyone? Has anyone got tips to diagnose or improve it?
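Not from the thread, but one way to see where the time goes is the pod events, which include per-image pull durations; the namespace and pod names are placeholders:
```
kubectl get events -n my-namespace --sort-by=.lastTimestamp
kubectl describe pod my-pod -n my-namespace    # look for "Successfully pulled image ... in <duration>"
```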
2021-01-26
Hey folks — Has anyone had any luck with contacting AWS support / AWS engineering asking for an EKS Platform Version bump? They seem to do it on a rolling basis and I have a production account that I cannot use Fargate Logging in since it’s on 1.18.9 eks.2 instead of eks.3
Amazon EKS platform versions represent the capabilities of the cluster control plane, such as which Kubernetes API server flags are enabled, as well as the current Kubernetes patch version. Each Kubernetes minor version has one or more associated Amazon EKS platform versions. The platform versions for different Kubernetes minor versions are independent.
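Not an answer to the bump request, but the current platform version can at least be checked per cluster (the cluster name is a placeholder):
```
aws eks describe-cluster --name my-cluster --query 'cluster.platformVersion' --output text
```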
2021-01-28
Concept question. Use case: the devs don’t want to update the same env var 3 times (and they don’t have defaults (yet) in their code); I use kustomize to manage per-env changes. Let’s say I add the same env variable twice.
In the next example, if someone sets:
common_env_vars:
  FOO_VAR: 1
per_env_vars:
  FOO_VAR: 1234
######
env:
{{- range $name, $value := .Values.common_env_vars }}
{{- if not (empty $value) }}
- name: {{ $name | quote }}
  value: {{ $value | quote }}
{{- end }}
{{- end }}
{{- range $name, $value := .Values.per_env_vars }}
{{- if not (empty $value) }}
- name: {{ $name | quote }}
  value: {{ $value | quote }}
{{- end }}
{{- end }}
will the second always be the winner? Or is there a chance that on each deployment change a different one will be set?
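On ordering: range over a map in Go templates iterates keys in sorted order, so the per_env_vars block is always rendered after the common one, and in the env list the later duplicate generally takes effect - but relying on duplicates is fragile. A hedged alternative sketch that avoids emitting duplicates at all, using Sprig’s merge (which gives precedence to its first argument, here per_env_vars; note that merge also mutates that first map):
```
env:
  {{- range $name, $value := merge .Values.per_env_vars .Values.common_env_vars }}
  {{- if not (empty $value) }}
  - name: {{ $name | quote }}
    value: {{ $value | quote }}
  {{- end }}
  {{- end }}
```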
Hello, I have a question regarding the cluster-autoscaler. Is it possible to tune it not to scale up too quickly? Currently, with the default settings, it spawns 4 nodes at the same time for just a couple of new gitlab-runner jobs…
Did you set resource requests for runners accordingly?
Yes I have
On this side it is not a problem
I’m just curious why it spawns more servers than required
And no HPA running, just resource.requests
I am wondering if anyone has solved the issue of having ephemeral storage via EBS on AWS EKS. I would love to use https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes - however EKS is still at 1.18. The use case here is that pods will handle large file uploads (up to 10GB), and I don’t want to pollute the host storage volumes with them. I was actually hoping to leverage PVCs that get thrown away, but I haven’t found a good solution so far - or maybe I was looking in the wrong direction here? I’m also considering using secondary volumes on the cluster nodes - any experiences with using secondary volumes in managed nodegroups?
This document describes ephemeral volumes in Kubernetes. Familiarity with volumes is suggested, in particular PersistentVolumeClaim and PersistentVolume. Some application need additional storage but don’t care whether that data is stored persistently across restarts. For example, caching services are often limited by memory size and can move infrequently used data into storage that is slower than memory with little impact on overall performance. Other applications expect some read-only input data to be present in files, like configuration data or secret keys.
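For reference, this is roughly what a generic ephemeral volume from those docs looks like once a cluster is on 1.19+; the names, storage class and size are placeholders (sized around the ~10GB uploads mentioned). On 1.18 the closest equivalents are a conventional PVC per workload or a StatefulSet volumeClaimTemplate:
```
apiVersion: v1
kind: Pod
metadata:
  name: upload-worker              # placeholder
spec:
  containers:
    - name: app
      image: example/app:latest    # placeholder
      volumeMounts:
        - name: scratch
          mountPath: /tmp/uploads
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: gp2      # assumes an EBS-backed storage class
            resources:
              requests:
                storage: 20Gi
```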
Lens IDE for Kubernetes. The only system you’ll ever need to take control of your Kubernetes clusters. It’s open source and free. Download it today!
Introduced lens to my team so they don’t have to learn k8s. Has been working well!
Doesn’t seem to work on microk8s in Ubuntu? Any how-to?
hmm, worked with minikube, can you test minikube on ubuntu?
I don’t have minikube, but I have Docker Desktop with a running test cluster on it. It’s not working either, it just says something about a proxy i/o timeout. Thanks anyway.
I’m a huge fan of Lens. My largest client thanks me for introducing them to it all the time.
check this awesome add-on https://www.youtube.com/watch?v=X-bhVwmp2l4&feature=youtu.be
@charlesz tested today on kind, also working fine
Hmm, then it must be me haha. It auto-detects my config file located in the /Home/sab/.kube dir.
confirmed working with microk8s
What I did on macOS (rough commands below):
• install multipass
• install microk8s
• add kubeconfig to ~/.kube/config
• choose microk8s from lens
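A rough sketch of those steps (assumes Homebrew; exact commands may differ between multipass/microk8s versions):
```
brew install --cask multipass
brew install ubuntu/microk8s/microk8s
microk8s install                              # provisions a multipass VM
microk8s config > ~/.kube/microk8s.config     # write a separate kubeconfig
export KUBECONFIG=~/.kube/config:~/.kube/microk8s.config
# then pick the microk8s context/cluster from Lens
```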
I don’t know what else to do to make this work. I am using Ubuntu, so probably something is blocking port 6443, which is why I was not able to communicate with the API?
2021-01-29
Hey folks — I’m investigating implementation of service meshes / advanced deployment strategies (Blue / green + Canary) for a client. I’m going to start reading heavily into this next week, but before doing so I figured I’d reach out to a couple communities for opinions before going heads down.
To give a bit of context before I dive into questions — Client is on AWS / EKS utilizing Fargate for all internal services with Flux v1 / Helm Operator as their CD tool. I’ll also be evaluating Argo vs FluxV2 in the coming weeks as well if that matters in this discussion.
On to questions —
- Any definitive, must read resources on this topic that I should know about?
- I see the options today being either AWS App Mesh OR Istio as it seems the service mesh tools are the best route towards implementing advanced deployment strategies in a SOA architecture. Is there any other tooling I should take into account for this decision?
- AFAICT, App Mesh at a high level means lower complexity + less capabilities + more AWS lock in (not sure if I care about this as this point) vs Istio at a high level means OSS + higher complexity + more capabilities. Any other high level thoughts here that I should be aware of?
- Any gotchas / catches I should watch out for in regards to implementing a service mesh?
I’d avoid Istio unless you can definitively say “this is why we need Istio”. Complexity is high and I’d not want to put that burden on a client. I like Flux’s patterns better than Argo - but I’ve seen that Argo tends to have a bigger knowledge pool and might be a bit easier for a team to reason with.
These are opinions only and YMMV
We have a similar use case, but using the AWS ecosystem only: for blue/green deployment we use CodeBuild/CodeDeploy with CodePipeline. I can help you with that, even for cross-account deployments.
@Mohammed Yahya - interested in knowing a little more how you do blue/green via code* tools with EKS - any good resources on that?
@Patrick Jahns I did it for ECS Fargate; if you want to learn more, I can send you the path I followed
I think that should cover your use case
Would just like to mention linkerd as an option. Haven’t tried it myself but I’ve heard only good things about it.
Thanks for the mention of Linkerd — I’ve heard of that project, but don’t know much about it. Will read up.
I’m not a big fan of the Codestar tooling and client isn’t using that so far so I don’t think that’ll be an option.
Good to have another voice say “Hey Istio is complex and likely not worth it unless absolutely necessary”
I’ve seen that a lot and definitely will try to avoid unless I for some reason think this client’s use-case fits, which I highly doubt.
Agree with avoiding the codestar route if you’re not already invested in it - it’s unfriendly to get started with in any real production grade usage from what I’ve found.
on the other hand, Istio is much more mature than AppMesh and is more likely to implement features you might need down the road.
I’d recommend to go through AppMesh Issues/Roadmap on Github, and in general do enough research to understand if it already implements what you need.
2021-01-30
Awesome tool - a must use https://github.com/ahmetb/kubectx
Faster way to switch between clusters and namespaces in kubectl - ahmetb/kubectx
I also recommend installing fzf to make the tool better
An old tool I’ve made years ago: https://github.com/claranet/kcs
Select which kubeconfig.yaml to use in an easy way. KCS means kubeconfig switcher. - claranet/kcs
I’m using https://github.com/sbstp/kubie - similar tool - however I prefer the approach as it puts the context in a sub-shell.
Personally this allows me to connect to more than a single cluster at the same time from the shell
A more powerful alternative to kubectx and kubens. Contribute to sbstp/kubie development by creating an account on GitHub.
Kubie looks interesting. Introduction paragraph ticks all the boxes I need (and currently have some custom bash for):
• Independent shells
• Multiple config files (we pull per cluster from parameter store)
It works quite well for me - I can work on different projects and switch between clusters without needing to worry about the context of that shell, as it’s always scoped to a single cluster
Didn’t like the global nature of kubectx - at least back when I used it
Totally agree. Even with prompts indicating current context… you’ll only know after your command that context was changed in another shell.
Mixing config definition and state in the kube config hasn’t been the best decision imo. But in hindsight everything is easy, and it will be hard to change now with all the tooling out there.
Is there a tool to read multiple config files from ~/.kube/?
Depends on definition of ‘read’ I suppose. Kubie claims to handle separate configs.
The Bash tool I have for the same purpose assumes config files are organized like ~/.kube/<aws-account-alias>/<cluster-name>.conf and simply sets the KUBECONFIG env var accordingly, enabling only one at a time.
I assume you can provide multiple config files separated by : which might accomplish what you want.
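A hedged one-liner along those lines, assuming configs are laid out like the per-account example above:
```
# point kubectl at every per-cluster config at once; contexts from all files become visible
export KUBECONFIG=$(find ~/.kube -name '*.conf' -type f | paste -sd: -)
kubectl config get-contexts
```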
(kubectx ships with #geodesic https://github.com/cloudposse/geodesic)
Geodesic is a DevOps Linux Distro. We use it as a cloud automation shell. It's the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloud…