#kops (2019-06)
Discussions related to kops for kubernetes
Archive: https://archive.sweetops.com/kops/
2019-06-12
is anyone using the `dns.alpha.kubernetes.io/internal` annotation on nodes with success?
Not sure about that annotation
What does it do?
Adds a DNS record with the internal (VPC) IP of the nodes
No way to filter nodes though
Not the solution I'm after, have a few other ideas
What is the solution you are after?
A maintained A record listing the VPC IPs of all instances in a given instance group
Cassandra dedicated nodes
Exploring a similar solution using external-dns and host port
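As a rough illustration of the idea in this thread, here is a minimal sketch (Python Kubernetes client) of applying that annotation only to the nodes of one dedicated instance group so their internal IPs end up under a single record. The label key, record name, and the exact behaviour of the annotation on Node objects are assumptions taken from the discussion above, not verified documentation:

```python
# Sketch: annotate the nodes of one kops instance group so that the
# dns.alpha.kubernetes.io/internal annotation (per the discussion above)
# publishes their internal/VPC IPs under a single record.
# Assumptions: the node label key and the record name below are hypothetical;
# verify against your kops version before relying on this.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Select only the dedicated Cassandra instance group (label key assumed).
nodes = v1.list_node(label_selector="kops.k8s.io/instancegroup=cassandra")

for node in nodes.items:
    v1.patch_node(
        node.metadata.name,
        {"metadata": {"annotations": {
            "dns.alpha.kubernetes.io/internal": "cassandra.internal.example.com",
        }}},
    )
```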
2019-06-19
Tim and I are going to start pushing PRs up to you
I think Tim might already have one in for fixing your race condition with multi-VPC peering
wow, that would be cool!
We are looking at upgrading from kops 1.11 to 1.12. The upgrade instructions mention that it is a disruptive upgrade without going into detail about how disruptive it is. Is there anyone who has gone through it and can share their experience? cc @Jeremy G (Cloud Posse)
@btai @rohit.verma @Jan
unfortunately still on 1.11.9 for our kops clusters
also I'm not sure if I will run into this issue even when I do upgrade to 1.12, as my clusters are ephemeral (I would spin up a new 1.12 cluster and deploy/cut over to it)
Technically there is no usable upgrade path from etcd2 to etcd3 that supports HA scenarios, but kops has enabled it using etcd-manager. Nonetheless, this remains a higher-risk upgrade than most other kubernetes upgrades - you are strongly recommended to plan accordingly: back up critical data, schedule the upgrade during a maintenance window, think about how you could recover onto a new cluster, try it on non-production clusters first.
it almost sounds to me like spinning up a new cluster is probably the safest way forward, but I'm trying to figure out whether the way you guys are terraforming the cluster/env might make it hard to do that type of blue/green cutover?
blue/green is difficult for us right now. We are already on etcd 3, but no TLS or etcd-manager. We also use calico
Can you use route53 to route traffic to both clusters?
that would give you a fallback plan
or use external CDN (e.g. cloudflare) with multiple origins
We still use VPC peering to bridge kops and our backend vpc
oh, but you can peer the VPC to both k8s clusters
so you create a new kops vpc
Yes, I said “difficult” not impossible
haha, true
though this could be a good capability to support
even for future upgrades
yes
at the rate k8s moves, this won’t be the last breaking change
that's exactly what we do
kops cluster in its own VPC, peered to our database VPC; when the new cluster comes up it will also VPC-peer into the DB
How are you provisioning the kops vpc peering connection?
I would suggest using Terraform. cloudposse has an example that's pretty good
some examples of VPC peering https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops-legacy-account-vpc-peering
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
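For illustration only, the underlying AWS calls that a peering module like the one above wraps look roughly like this (boto3 sketch; the VPC IDs, route table ID, and CIDR are hypothetical placeholders, and Terraform remains the recommended approach):

```python
# Sketch of the raw AWS calls behind peering a new kops VPC with an existing
# database VPC. All IDs and CIDRs are placeholders; same-account peering assumed.
import boto3

ec2 = boto3.client("ec2")

# Request the peering connection from the kops VPC to the database VPC.
resp = ec2.create_vpc_peering_connection(
    VpcId="vpc-0newkops000000000",      # new kops cluster VPC (hypothetical)
    PeerVpcId="vpc-0database00000000",  # existing database VPC (hypothetical)
)
pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request on the other side.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add a route toward the database VPC; a reverse route is also needed there.
ec2.create_route(
    RouteTableId="rtb-0newkops000000000",  # kops VPC route table (hypothetical)
    DestinationCidrBlock="10.20.0.0/16",   # database VPC CIDR (hypothetical)
    VpcPeeringConnectionId=pcx_id,
)
```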
allows for a quick route53 cutover; you can also do a weighted cutover via route53, and you have a pretty fast rollback strategy (point route53 back at the old cluster)
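A minimal sketch of what that weighted cutover could look like with boto3 (the hosted zone ID, record name, and load balancer hostnames are hypothetical):

```python
# Sketch: weighted Route53 records pointing one name at both the old (blue)
# and new (green) cluster load balancers. Shifting the weights moves traffic
# gradually; raising the old cluster's weight back up is the rollback path.
import boto3

route53 = boto3.client("route53")

def set_weight(identifier: str, lb_hostname: str, weight: int) -> None:
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,  # distinguishes the weighted records
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": lb_hostname}],
            },
        }]},
    )

# Shift 10% of traffic to the new cluster; flip the weights back to roll back.
set_weight("kops-old", "old-cluster-elb.example.com", 90)
set_weight("kops-new", "new-cluster-elb.example.com", 10)
```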
Discussion of some of the options for upgrading etcd (none great): https://gravitational.com/blog/kubernetes-and-offline-etcd-upgrades/
Proud new Kubernetes cluster owners are often lulled into a false sense of operational confidence by its consensus database’s glorious simplicity. In this Q&A, we dig into the challenges of in-place upgrades of etcd beneath autonomous Kubernetes clusters running within air-gapped environments.
Step-by-step instructions for upgrading a kops cluster by replacing it. Probably best for the 1.11 to 1.12 upgrade. (I've never tried it; I have not had to upgrade a cluster from 1.11 to 1.12 yet.) https://www.bluematador.com/blog/upgrading-your-aws-kubernetes-cluster-by-replacing-it
How to use kops to quickly spin up a production-ready Kubernetes cluster to replace your old cluster in AWS.
2019-06-20
Are people using kops for GCP or Azure? Or are people using Kubespray for something more multi-platform?
last I checked (late 2018) kops didn't support Azure, but AKS has been plenty good.
I wouldn’t use kops even for GCP
however Azure Postgres (and Azure as a whole) has not had great uptime imo
I feel like the best option is to use the best tool for the platform. Using any kind of generalized tool will likely not give you all the extra jazz provided by the platform.
AKS has been chugging along though, but if you have other dependencies within Azure…
e.g. on Google, I'd prefer to operate GKE over GCP+Kubernetes
same, AKS is great because it feels fully managed to me (as opposed to EKS, which is highly configurable, and even the generic case takes more effort to spin up)