#kubernetes (2020-03)
Archive: https://archive.sweetops.com/kubernetes/
2020-03-02
2020-03-03
At a Rancher rodeo (mini-conference) today. @ me if you have any questions you’d like answered.
2020-03-04
Hi all, let me know if you have any tips on providing k8s credentials (namespace specific) to Jenkins, for deploying a couple of applications
I admin a bunch of EKS clusters and I’m not sure whether I should provide certificates, add “every” AWS user needed to the “aws-auth” configmap, or what else…
I say “provide credentials to Jenkins” but it could be some developers too in the future
the docs are pretty vast, so a quick insight into what works for you in a similar situation might be all I need to get started
Hey Andrea, how I solved something similar is to use roles to grant EKS access to users. Basically you create a bunch of groups/roles in your IdP and map them to AWS roles (via SAML etc.; we had SSO set up). Populate your aws-auth ConfigMap with references to the AWS roles. Add your users to the appropriate IdP group and watch the magic work ..
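As a rough sketch of the approach above, the aws-auth ConfigMap entry might look like this (the account ID, role name, and group name are placeholders, not from the thread):

```yaml
# aws-auth ConfigMap in kube-system (sketch; ARN and group are placeholders)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-developers  # the IAM role your IdP maps users into
      username: developer:{{SessionName}}
      groups:
        - eks-developers   # bind this group to a namespaced Role via a RoleBinding
```

To keep access namespace-specific, pair the group with a Role/RoleBinding in the target namespace rather than a cluster-wide binding.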
alright, I’ll investigate that - thanks for your input!
maybe already mentioned but in a pinch k2tf is a nifty way to convert existing kubernetes deployments/resources into kubernetes terraform provider ready tf manifests: https://github.com/sl1pm4t/k2tf
Kubernetes YAML to Terraform HCL converter.
2020-03-05
Here are 15 interesting takeaways from the CNCF annual survey.
hello guys. I’m trying to add the CSI driver for EKS as described here https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html but I’m getting an error like this
Warning ProvisioningFailed 7m57s persistentvolume-controller storageclass.storage.k8s.io "ebs-sc" not found
Normal ExternalProvisioning 2m2s (x25 over 7m55s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Normal Provisioning 72s (x9 over 6m47s) ebs.csi.aws.com_ebs-csi-controller-f89d5544-wd646_e578127d-8c7b-4ac6-8aac-065a4165b629 External provisioner is provisioning volume for claim "default/ebs-claim"
Warning ProvisioningFailed 62s (x9 over 6m37s) ebs.csi.aws.com_ebs-csi-controller-f89d5544-wd646_e578127d-8c7b-4ac6-8aac-065a4165b629 failed to provision volume with StorageClass "ebs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Does anyone know what that is?
It looks like you’re trying to use the ebs-sc storageclass before it’s been defined.
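For reference, a minimal ebs-sc StorageClass for the EBS CSI driver might look like this (a sketch based on the AWS docs, not taken from the thread):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                  # the name referenced by the PVC
provisioner: ebs.csi.aws.com    # the external provisioner from the events above
volumeBindingMode: WaitForFirstConsumer
```

Note that the DeadlineExceeded error later in the thread is often traced to the CSI controller pods lacking IAM permissions to create EBS volumes, rather than a missing StorageClass.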
I don’t think so, because it is defined here
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
That URL returns a 404
here
it is an example app to test that my driver works
But if you used the kubectl command you posted above, it won’t work, as that is not a valid URL… so kubectl apply would try to apply the 404 response from GitHub, which clearly isn’t going to work
Looking at the github repo, I’m guessing what you want is:
kubectl apply -k https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/deploy/kubernetes/overlays/stable/kustomization.yaml
command above is working
Huh, no idea then.
I’m continuing to dig
2020-03-09
arkade (Kubernetes apps, the easy way): https://github.com/alexellis/arkade
Looks interesting
yeh saw that last week - i’ve been using k3sup for my home pi cluster project and it’s pretty nice
Hah, I just looked a bit closer. It literally uses a go package for every chart. https://github.com/alexellis/arkade/tree/master/cmd/apps
2020-03-11
Hi Guys,
Is anybody here having issues creating EKS clusters using Terraform?
We are seeing this error:
module.eks_cluster.aws_eks_cluster.eks_cluster: Still creating... [11m20s elapsed]
module.eks_cluster.aws_eks_cluster.eks_cluster: Still creating... [11m30s elapsed]
Error: unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)
on ../../../../modules/eks/eks_control_plane/main.tf line 405, in resource "aws_eks_cluster" "eks_cluster":
405: resource "aws_eks_cluster" "eks_cluster" {
AWS just released EKS v1.15 last night and we think it may be related.
The only thing worth noting is to enable secrets encryption (right below the network settings), and remember that, for now, you can only set this at cluster creation time (it’s not supported via cluster config updates)
can you redeploy a sidecar container in the pod w/o redeploying the entire pod?
i.e. updating a waf sidecar agent version in our ingress daemonset
alternatively, a zero downtime rolling deploy of the daemonset would work too :P
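To the question above: Kubernetes can’t swap a single container inside a live pod, so the usual route is to bump the sidecar image in the DaemonSet’s pod template and let a RollingUpdate strategy recreate pods node by node. A minimal sketch (all names and images are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress                       # placeholder name
spec:
  selector:
    matchLabels:
      app: ingress
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1               # replace pods one node at a time
  template:
    metadata:
      labels:
        app: ingress
    spec:
      containers:
        - name: waf-sidecar           # hypothetical sidecar container
          image: example/waf-agent:v2 # bumping this tag triggers the rollout
```

The pod is the unit of replacement, so a node-by-node rolling update is as close to zero downtime as a DaemonSet gets.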
2020-03-12
TL;DR: Azure and Digital Ocean don’t charge for the compute resources used for the control plane, making AKS and DO the cheapest for running many, smaller clusters. For running fewer, larger clusters GKE is the most affordable option. Also, running on spot/preemptible/low-priority nodes or long-term committed nodes makes a massive impact across all of the platforms.
Hello Peeps, is there any way to run a shell in a failing container on k8s ?
Change the entrypoint to just run:
/bin/sh -c "sleep inf"
and disable probes
just saw this, thanks
will try it out next time
so far all that I was able to find was to run a shell on a running one
Not really afaik. One way of debugging the container is to run an infinite loop as the entrypoint instead of the intended startup script. then you can run the startup script in the shell and see what is happening
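The entrypoint-override approach described above, as a pod-spec sketch (the container name and image are placeholders):

```yaml
# Keep a crashing container alive for debugging by overriding its entrypoint
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
    - name: app
      image: example/app:latest                 # the failing image
      command: ["/bin/sh", "-c", "sleep inf"]   # replaces the failing entrypoint
      # livenessProbe/readinessProbe intentionally omitted so the pod stays Running
```

Then kubectl exec -it debug-pod -- sh and run the real startup script by hand to see where it breaks.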
2020-03-14
So I ran across this nifty repo of all the kubernetes schemas https://github.com/instrumenta/kubernetes-json-schema I don’t know what I’m using it for yet but its a nice resource to be aware of regardless
2020-03-24
How would I configure a StatefulSet (a Mongo replica set) with 3 replicas using statically created PVs?
My best guess is to create 3 PVs with a label usage: mongo
and then use ReadWriteOnce and
selector:
  matchLabels:
    usage: mongo
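That guess is roughly how static binding works. A sketch of one of the three PVs plus the matching claim template (capacity and volume source are placeholders):

```yaml
# One of three pre-created PVs (repeat with unique names and volume IDs)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-0
  labels:
    usage: mongo
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:               # placeholder volume source
    volumeID: vol-0123456789abcdef0
    fsType: ext4
---
# Inside the StatefulSet spec, each replica's claim selects a labelled PV:
# volumeClaimTemplates:
#   - metadata:
#       name: data
#     spec:
#       accessModes: ["ReadWriteOnce"]
#       storageClassName: ""          # empty string disables dynamic provisioning
#       selector:
#         matchLabels:
#           usage: mongo
#       resources:
#         requests:
#           storage: 10Gi
```

Setting storageClassName to the empty string keeps the claims from falling through to a dynamic provisioner.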
2020-03-25
Has anyone tried this? https://bf.eralabs.io/learnkubernetesbybuilding10projects.html Is it worth getting?
Hmm… I’d be interested in seeing if anyone else has read through that
2020-03-27
Adding @discourse_forum bot
@discourse_forum has joined the channel
2020-03-29
Lens IDE for Kubernetes. The only system you’ll ever need to take control of your Kubernetes clusters. It’s open source and free. Download it today!
@Erik Osterman (Cloud Posse) - they managed to open-source it
Nifty. This looks a bit like infra.app, yeah?
thanks @Chris Fowles!
Looks nice. Might end up switching back and forth between this and k9s a lot since k9s makes it easy to manage port-forwards
2020-03-30
looks like loghouse (https://github.com/flant/loghouse) has updated their Kubernetes logging solution recently. It’s worth looking into as an ELK alternative (it uses ClickHouse for the database). I tested it out at one point and it worked well enough, but I wasn’t able to get it into the project as they were asking for Elastic, so I gave ‘em EFK instead (boo).
curious what people’s thoughts are on Ambassador Edge Stack? https://www.getambassador.io/docs/ I used the Ambassador API gateway before and loved it (often said it was my favorite ingress controller). Toyed around with Ambassador Edge Stack this weekend; didn’t get terribly far, but I wasn’t super pleased: lots of bells and whistles, and seemingly yet another CLI tool you need to install (edgectl). I appreciated the simplicity of their original API gateway.
Hrmmm haven’t tried it. Is GumGum using it?
@Erik Osterman (Cloud Posse) maybe? I know when @Corey Gale poc’d it, edge stack hadn’t come out yet.
We are considering it for putting Google auth in front of our services but haven’t put it in prod yet
Btw @btai see that the oidc proxy project moved to its own org?
Wonder if that makes the future more or less certain
@Erik Osterman (Cloud Posse) i ended up writing my own
How come?
@Erik Osterman (Cloud Posse) it was couple hundred lines of code and does exactly what i need
2020-03-31
Can someone explain why envvars are leaking between pods in the same namespace? When I run kubectl exec -n default -it $somepod -- bash -c set
I see envvars for ALL the pods in that namespace
I’m very concerned but I’m having trouble finding information on this behavior
I’d look at the deployment before the pod
deployments will have the replicasets which will have the pods (generically)
you certain that they aren’t simply being mapped into the deployments via a configmap?
that would be a reasonable explanation
I don’t think so
I’m looking at each envvar and seeing how they are defined right now
Nope, def not config maps
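One possible explanation, not confirmed in the thread: Kubernetes injects SVCNAME_SERVICE_HOST / SVCNAME_SERVICE_PORT environment variables for every Service in the namespace into every container (Docker-links style), which can look like envvars leaking between pods. If that’s what’s being seen, it can be disabled per pod:

```yaml
# Sketch: disable injection of per-Service environment variables
# (pod-spec field available since Kubernetes 1.13; names are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  enableServiceLinks: false   # containers no longer receive *_SERVICE_HOST/*_SERVICE_PORT vars
  containers:
    - name: app
      image: example/app:latest
```

The variables describe Services, not other pods, so nothing secret from neighboring pods is actually exposed this way.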
hi, I’m looking for a way to enable a “maintenance mode” for an application running in EKS behind an AWS ALB. In the past I used Ansible to whitelist some IP addresses and display a maintenance page for IPs not in the list. Can this be done in K8s? From what I’ve searched I didn’t find anything like this; maybe someone can give me a hint on what to test
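Assuming the ALB is managed by the AWS ALB Ingress Controller (not stated in the message), one thing to test is the inbound-cidrs annotation, which restricts which source CIDRs the ALB accepts. A sketch with placeholder names and CIDRs:

```yaml
apiVersion: networking.k8s.io/v1beta1   # or extensions/v1beta1 on older clusters
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/inbound-cidrs: 203.0.113.0/24, 198.51.100.7/32  # allowlist
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: app        # placeholder service
              servicePort: 80
```

Caveat: this drops traffic outside the allowlist at the ALB’s security group, rather than showing a maintenance page; actually serving a page to everyone else would need a second Ingress/ALB pointing at a maintenance backend.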