#kops (2019-06)

Discussions related to kops for Kubernetes. Archive: https://archive.sweetops.com/kops/

2019-06-20

pericdaniel avatar
pericdaniel

Are people using kops for GCP or Azure? Or are people using Kubespray for more multi-platform setups?

btai avatar

last I checked (late 2018), kops didn't support Azure, but AKS has been plenty good.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I wouldn’t use kops even for GCP

btai avatar

however, Azure Postgres (and Azure as a whole) has not had great uptime imo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I feel like the best option is to use the best tool for the platform. using any kind of generalized tool will likely not give you all the extra jazz provided by the platform.

btai avatar

AKS has been chugging along though, especially if you have other dependencies within Azure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. on google, I’d prefer to operate GKE over GCP+Kubernetes

btai avatar

same, AKS is great because it feels fully managed to me (as opposed to EKS, which is highly configurable and where even the generic case takes more effort to spin up)

2019-06-19

Jan avatar

Tim and I are going to start pushing PRs up to you

Jan avatar

I think Tim might already have one in for fixing your race condition with multi-VPC peering

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow, that would be cool!

Daren avatar
Daren

We are looking at upgrading from Kops 1.11 to 1.12. The upgrade instructions mention that it is a disruptive upgrade without going into details of how much. Is there anyone who has gone through it and can share their experience? cc @Jeremy (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@btai @rohit.verma @Jan

btai avatar
btai
07:38:50 PM

@btai has joined the channel

rohit.verma avatar
rohit.verma
07:38:50 PM

@rohit.verma has joined the channel

btai avatar

unfortunately still on 1.11.9 for our kops clusters

btai avatar

also I'm not sure if I will run into this issue even when I do upgrade to 1.12, as my clusters are ephemeral (I would spin up a new 1.12 cluster and deploy/cut over to it)

btai avatar

> Technically there is no usable upgrade path from etcd2 to etcd3 that supports HA scenarios, but kops has enabled it using etcd-manager. Nonetheless, this remains a higher-risk upgrade than most other kubernetes upgrades - you are strongly recommended to plan accordingly: back up critical data, schedule the upgrade during a maintenance window, think about how you could recover onto a new cluster, try it on non-production clusters first.

btai avatar

it almost sounds to me like spinning up a new cluster is probably the safest way forward, but I'm imagining the way you guys are terraforming the cluster/env might make it hard to do that type of blue/green cutover?

Daren avatar
Daren

blue/green is difficult for us right now. We are already on etcd 3, but no TLS or etcd-manager. We also use calico

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you use Route 53 to route traffic to both clusters?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that would give you a fallback plan

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or use external CDN (e.g. cloudflare) with multiple origins

Daren avatar
Daren

We still use VPC peering to bridge kops and our backend vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, but you can peer the VPC to both k8s clusters

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so you create a new kops vpc

Daren avatar
Daren

Yes, I said “difficult” not impossible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

haha, true

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

though this could be a good capability to support

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

even for future upgrades

Daren avatar
Daren

yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

at the rate k8s moves, this won’t be the last breaking change

btai avatar

that's exactly what we do

btai avatar

kops cluster in its own VPC, peered to our database VPC; when a new cluster comes up, it will also VPC-peer into the db.
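To make the peering setup above concrete, here is a minimal sketch (not from the thread) of the route entries each side needs once a peering connection exists between the kops VPC and the database VPC. The actual peering and accept calls would be made with boto3's `ec2.create_vpc_peering_connection` / `accept_vpc_peering_connection`; all IDs and CIDRs below are hypothetical.

```python
def peering_routes(kops_rtb, kops_cidr, db_rtb, db_cidr, pcx_id):
    """Return the create_route parameter dicts for both sides of a VPC
    peering connection: each route table gets a route to the *other*
    VPC's CIDR via the peering connection (pcx_id)."""
    return [
        {"RouteTableId": kops_rtb, "DestinationCidrBlock": db_cidr,
         "VpcPeeringConnectionId": pcx_id},
        {"RouteTableId": db_rtb, "DestinationCidrBlock": kops_cidr,
         "VpcPeeringConnectionId": pcx_id},
    ]

# Hypothetical IDs: the new kops cluster VPC and the shared database VPC.
routes = peering_routes("rtb-kops", "10.20.0.0/16", "rtb-db", "10.0.0.0/16",
                        "pcx-123")
```

Each dict could then be passed to `ec2.create_route(**route)`; spinning up a second (green) cluster just means repeating this with the new cluster's route table and CIDR against the same database VPC.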

Daren avatar
Daren

How are you provisioning the kops vpc peering connection?

btai avatar

i would suggest using terraform. cloudposse has an example that's pretty good

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

btai avatar

allows for a quick Route 53 cutover. you can also do a weighted cutover via Route 53, and you have a pretty fast rollback strategy (point Route 53 back to the old cluster)
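A weighted cutover with fast rollback can be sketched as follows (not from the thread): build the UPSERT change batch that splits traffic between the old ("blue") and new ("green") cluster load balancers, which would then be passed to boto3's `route53.change_resource_record_sets`. The hostnames and hosted-zone details are hypothetical.

```python
def weighted_cutover(name, blue_target, green_target, green_weight, ttl=60):
    """Build a Route 53 change batch of two weighted CNAME records.
    green_weight is 0..255; the remainder stays on the blue cluster.
    Rolling back is just calling this again with green_weight=0."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name, "Type": "CNAME", "SetIdentifier": set_id,
                "Weight": weight, "TTL": ttl,
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        record("blue", blue_target, 255 - green_weight),
        record("green", green_target, green_weight),
    ]}

# Shift ~25% of traffic to the new cluster's load balancer.
batch = weighted_cutover("api.example.com", "blue-elb.example.com",
                         "green-elb.example.com", green_weight=64)
```

Keeping the TTL low (60s here) is what makes the rollback "pretty fast": pointing the weights back at the old cluster takes effect within roughly one TTL.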

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Discussion of some of the options for upgrading etcd (none great): https://gravitational.com/blog/kubernetes-and-offline-etcd-upgrades/

The Horrors of Upgrading Etcd Beneath Kubernetes

Proud new Kubernetes cluster owners are often lulled into a false sense of operational confidence by its consensus database’s glorious simplicity. In this Q&A, we dig into the challenges of in-place upgrades of etcd beneath autonomous Kubernetes clusters running within air-gapped environments.

Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

Step-by-step instructions for upgrading kops cluster by replacing it. Probably best for 1.11 to 1.12 upgrade. (I’ve never tried it. I have not had to upgrade a cluster from 1.11 to 1.12 yet.) https://www.bluematador.com/blog/upgrading-your-aws-kubernetes-cluster-by-replacing-it

Upgrading Your AWS Kubernetes Cluster By Replacing It

How to use kops to quickly spin up a production-ready Kubernetes cluster to replace your old cluster in AWS.

2019-06-12

Jan avatar

is anyone using the `dns.alpha.kubernetes.io/internal` annotation on nodes with success?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure about that annotation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What does it do?

Jan avatar

Adds a DNS record with the internal (VPC) IP of the nodes

Jan avatar

No way to filter nodes though

Jan avatar

Not the solution I'm after, have a few other ideas

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What is the solution you are after?

Jan avatar

A maintained A record listing the VPC IPs of all instances in a given instance group

Jan avatar

Cassandra dedicated nodes

Jan avatar

Exploring a similar solution using external-dns and host port
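The first step of that idea can be sketched as follows (my sketch, not Jan's implementation): collect the internal (VPC) IPs of just the nodes in one kops instance group, e.g. to publish them as a single A record. This assumes the standard `kops.k8s.io/instancegroup` node label; with the kubernetes Python client you would feed it the node objects from `list_node()`, but plain dicts are used here for illustration.

```python
def internal_ips(nodes, instance_group):
    """Return the sorted InternalIP addresses of nodes whose
    kops.k8s.io/instancegroup label matches instance_group."""
    ips = []
    for node in nodes:
        labels = node["metadata"].get("labels", {})
        if labels.get("kops.k8s.io/instancegroup") != instance_group:
            continue
        for addr in node["status"]["addresses"]:
            if addr["type"] == "InternalIP":
                ips.append(addr["address"])
    return sorted(ips)

# Hypothetical node data: one dedicated Cassandra node, one generic node.
nodes = [
    {"metadata": {"labels": {"kops.k8s.io/instancegroup": "cassandra"}},
     "status": {"addresses": [{"type": "InternalIP", "address": "10.0.1.5"}]}},
    {"metadata": {"labels": {"kops.k8s.io/instancegroup": "nodes"}},
     "status": {"addresses": [{"type": "InternalIP", "address": "10.0.2.9"}]}},
]
```

The resulting IP list is exactly what the `dns.alpha.kubernetes.io/internal` annotation can't give you filtered per instance group, and it could be fed into a Route 53 multi-value A record or handed off to external-dns.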
