#kubernetes (2018-08)
Archive: https://archive.sweetops.com/kubernetes/
2018-08-01
@Phil has joined the channel
@my-janala has joined the channel
2018-08-03
eksctl - a CLI for Amazon EKS
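A minimal sketch of what eksctl usage looks like (the cluster name, region, and node count below are placeholders):
```
# create a small EKS cluster; all values are placeholders
eksctl create cluster --name=demo --region=us-west-2 --nodes=2

# tear it down when finished
eksctl delete cluster --name=demo --region=us-west-2
```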
2018-08-05
@jylee has joined the channel
/kind bug What happened: Hi, I'm an engineer at Let's Encrypt. I think you may also have heard from my colleague @cpu. We're finding that a lot of our top clients (21 out of 25, by log …
2018-08-08
@pericdaniel has joined the channel
@Michael Holt has joined the channel
2018-08-15
@Dylan has joined the channel
2018-08-16
Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management
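For reference, a hedged sketch of the typical kops workflow (the cluster name, state bucket, and zone are placeholders):
```
# kops keeps cluster state in an S3 bucket you own
export KOPS_STATE_STORE=s3://example-kops-state   # placeholder bucket

# create the cluster spec, then apply it
kops create cluster --name=k8s.example.com --zones=us-east-1a --node-count=2
kops update cluster --name=k8s.example.com --yes

# upgrades later roll through nodes one at a time
kops rolling-update cluster --name=k8s.example.com --yes
```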
2018-08-18
@Daren @michal.matyjek are you using this? I didn’t know about this.
I do not believe we do, first time I see this
Documentation for Helm - The Kubernetes Package Manager.
2018-08-20
@Erik Osterman (Cloud Posse) No. I'd wait for an official upgrade path, but it will be welcome
2018-08-21
@tarrall has joined the channel
2018-08-24
Kind of an edge case scenario, but it led me to finding some (older) issues for nginx-ingress that other folks might run into as well: there is a known race condition in the 0.11 release of nginx-ingress that causes the mechanism that retrieves secrets to fail, if and only if you are creating multiple ingresses w/ TLS enabled.
this is with kube-lego?
@rohit.verma
correct
btw, rohit contributed cert manager
but I haven’t tried it out yet
i haven’t gotten around to moving to cert manager yet either
From the git issues I was seeing, the main guy who works on nginx-ingress (git user: aledbf) fixed it pretty quickly in later releases; apparently it should be stable in >= 0.13
I think this issue is what led @dave.yu and @jonathan.olson to ultimately switch to ACM
less fancy and not dynamic
@dave.yu has joined the channel
yeah, i think that for staging (especially unlimited staging envs) cert-manager or kube-lego is fine, but for prod, ACM (for now) is the way to go
yea
have you set that up?
in the process of
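For context on the ACM approach above: terminating TLS at the AWS load balancer usually means annotating the Service with the certificate ARN instead of keeping cert secrets in-cluster. A minimal hedged sketch (the ARN, names, and ports are placeholders):
```
# expose a service through an AWS ELB that terminates TLS with an ACM cert
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web   # placeholder
  annotations:
    # the ARN below is a placeholder; point it at your ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8080
EOF
```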
Also, learned a bit about this this morning, very cool. https://cilium.io/ Saw it featured on the TGI Kube thing that Joe Beda/Heptio does
Linux-Native, API-Aware Networking and Security for Containers. Open source project.
FWIW: Updated to use nginx-ingress 0.15.0 and all my services are still available and the race condition is fixed
what: Upgrade nginx-ingress to use docker image 0.15.0
why: Fix race condition when using TLS and kube-lego
references: https://sweetops.slack.com/archives/CBW699XE0/p1535138518000100
thanks!
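For anyone making the same upgrade: assuming the stable/nginx-ingress chart (the release name below is an assumption too), bumping the controller image is a one-line helm upgrade:
```
# release name and chart are assumptions; adjust to your install
helm upgrade nginx-ingress stable/nginx-ingress \
  --set controller.image.tag=0.15.0
```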
@Erik Osterman (Cloud Posse) We have even stopped using cert-manager and the nginx ingress controller; we are currently using ACM and kube-aws-ingress-controller
thanks for the update
This is a better setup if we are not required to rewrite targets
I would also recommend switching to CoreDNS
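On the CoreDNS suggestion: if the cluster is managed with kops (as discussed earlier in the channel), newer kops releases let you switch the DNS provider in the cluster spec. A hedged sketch (the cluster name is a placeholder):
```
# open the cluster spec in an editor...
kops edit cluster --name=k8s.example.com
#   ...and set:
#   spec:
#     kubeDNS:
#       provider: CoreDNS

# then roll out the change
kops update cluster --name=k8s.example.com --yes
```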
how was kube-aws-ingress-controller
to set up? is this the same as the original coreos alb ingress?
(does kube-aws-ingress-controller
use ALBs?)
yes, kube-aws-ingress-controller uses ALBs
nice
did you need to specify the subnets?
but it's very different from the coreos alb ingress
or security groups
no, it doesn’t work that way
it takes only the region
(the coreos one didn't autodiscover a lot - at least when we tried it, and we ultimately felt it wasn't worth it)
but then autodiscovers all the components
sweet!
do you have a helmfile
you can share?
coreos had only a single alb ingress controller; zalando has another forwarder component called skipper
i don't have a helmfile but can share my manifest.
it automatically discovers the ACM cert too, which we created as part of the root modules
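To illustrate the autodiscovery being described: with zalando's kube-ingress-aws-controller the Ingress carries no TLS secret at all; the controller matches the hostname against ACM certificates on its own. A minimal sketch (hostname and names are placeholders; the 2018-era extensions/v1beta1 API is assumed):
```
# note: no tls: block; the controller finds a matching ACM cert by hostname
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web   # placeholder
spec:
  rules:
    - host: app.example.com   # placeholder; must match an ACM cert's domain
      http:
        paths:
          - backend:
              serviceName: web
              servicePort: 80
EOF
```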
another cool thing would be to enable aws-iam-authenticator by default
so you’re not using the zalando helm chart?
nopes
how come?
my policy is, if the setup is not too complicated, kubectl apply is preferred
this is a very simple setup
and if the chart didn't work in 3 tries, use kubectly (lack of time)
(haha, you had me googling kubectly)
yea, a bit of time goes to maintaining charts
interested to see what other engines will be introduced in helm 3
i am waiting for helm3 to launch, especially because of the namespace limitation
i couldn’t use helm to manage our internal service
hey, did you get a chance to try the .dockerenv in the xx.cloudpose.co modules?
We’re doing something similar for Caltech. Building a kubernetes-in-a-box distro with geodesic as the base image
And a pretty setup menu.
We’re writing the env to /localhost/.geodesic/env
and then sourcing it on load.
It’s similar in design, but not using a .dockerenv
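Roughly, the pattern described above might look like this (the variable name is hypothetical):
```
# persist settings on the host side of the bind mount...
mkdir -p /localhost/.geodesic
echo 'export CLUSTER_NAME=example.foo.bar' >> /localhost/.geodesic/env   # hypothetical var

# ...and source them on shell startup, e.g. from the shell rc file
if [ -f /localhost/.geodesic/env ]; then
  source /localhost/.geodesic/env
fi
```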
nice
also, we’ve got a poc running #geodesic containers in CI/CD
(codefresh)
lots of PRs in the past week to polish up the geodesic base image for that
if I got it right, you are saying you set up CI/CD for the complete dev.cp.co environment, correct?
that is very cool, i was trying the same thing at some point
Yes, we got it as far as running init-terraform
and terraform plan
nothing precluding apply
but doing that with Codefresh, isn't it insecure? geodesic requires nearly admin access to set up the env
also, going to start adding some testing
CD of infrastructure requires admin access pretty much
either that be CodeBuild, Jenkins, or any other system.
our strategy is multipronged
use multiple pipelines for different kinds of CI and CD
and use multiple codefresh accounts, one per stage
I agree, but I would rely on instance profiles rather than exposing admin credentials
Yea, just codebuild is more tedious IMO to work with
codefresh enterprise supports running agents on prem
also, the way we’re currently pursuing this is still running aws-vault
inside of codefresh
to generate short-lived sessions
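The idea, sketched: aws-vault execs the command with short-lived STS credentials in its environment instead of long-lived keys (the profile name is a placeholder):
```
# run terraform with temporary STS credentials rather than static keys
aws-vault exec example-profile -- terraform plan

# the subprocess only ever sees the short-lived session:
aws-vault exec example-profile -- env | grep AWS_SESSION_TOKEN
```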
that is much better, i would probably continue with codebuild since we are also not using aws-vault
yea… you’re diverging but that’s cool
once you are finished, if you decide to open it up, I will translate it to a codebuild pipeline for you
that would be cool
anyways, since this uses very limited build minutes, it would be easily covered under the build plan
by the way, did you come across this project https://github.com/GoogleContainerTools/kaniko
kaniko - Build Container Images In Kubernetes
Haven’t seen it
Similar tools include:
img
orca-build
umoci
buildah
FTL
Bazel rules_docker
is this something you’re researching to implement?
yes, we are using ci agents in kubernetes. Since most of them use dind and rely on the host docker,
it's insecure, and we've also experienced that it's not good to have more than 1 agent per machine
docker build is actually a single-threaded command and uses some intrinsic locking
if we are able to use these projects with our agents, we can run many agents within our cluster
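A hedged sketch of what an in-cluster kaniko build could look like, i.e. each build is an unprivileged pod rather than dind (the context, destination, and credentials wiring are placeholders):
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build   # placeholder
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=dir:///workspace             # placeholder; mount your build context here
        - --destination=example.com/app:latest   # placeholder registry/tag
      # volumes with the build context and a docker config (for pushing) are omitted
EOF
```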
yea, good point
just came across img
the other day
not something though we’re optimizing for
honestly, there’s too much to solve
i’d rather not also have to solve building
just checking
have you used probot/hubot?
nope, never heard of it actually
we are mostly on gitlab
ok
i like the way the kubernetes/charts PRs work: authorized users can issue commands via comments.
want to do that via slack and github
this is also very cool, github is actually the coolest of all. I wish my org wasn't such a miser, making us use the free gitlab
haha, these days it feels like i just hear how awesome gitlab is
Geodesic “Kiosk” Mode
Using dialog, we've created a simple menu system to spin up kubernetes clusters with the full ML stack they need.
One-click create/destroy cluster
Uses helmfile to install all charts
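For reference, a minimal helmfile.yaml sketch of that pattern (the release and values are placeholders, reusing the nginx-ingress example from earlier in the channel):
```
# hypothetical helmfile.yaml: declare releases once, then `helmfile sync`
cat > helmfile.yaml <<'EOF'
releases:
  - name: nginx-ingress        # placeholder release
    namespace: kube-system
    chart: stable/nginx-ingress
    set:
      - name: controller.image.tag
        value: "0.15.0"
EOF
helmfile sync
```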
2018-08-28
what are some interview questions that come up for k8s?
I’m not quite sure - haven’t interviewed
but I can share the line of questioning I use
I like to start broad - first ask about the kubernetes architecture. what are all the daemons used to create a kubernetes cluster. the objective here is to see if the candidate is just a user or operator.
if they don’t know, then they are a user. if they answer correctly (e.g. api server, controller, kubelet, proxy, etc), then I dig down into each one of those to see what level of depth they have.
if they are just a user, that’s cool too - with kops, you barely need to know the underlying services any more
then I ask them to rattle off as many resource types as they can. it shows what they’ve used.
then when to use what kind of resource type and when.
I like to ask what kinds of problems they’ve encountered in the past and how they solved them.
I like to ask what kinds of apps they've deployed and how they deployed them. if they deployed stateful apps, i'm always curious if the risks are well understood.
i like to know what integrations they've used with kubernetes. for example, if they haven't used helm, that would be a red flag.
thank you!
this helps!
Interesting….
<https://github.com/kubernetes/charts>
now redirects to <https://github.com/helm/charts>