Discussions related to kops for kubernetes Archive: https://archive.sweetops.com/kops/
Just tested upgrading. Works well.
Thank you for the explanation. I assumed as much; it's not a problem now.
Guys, if I made manual changes to the terraform.tf file that I exported from kops, will the kops upgrade procedure still work properly? Or will it discard my changes and apply the configuration stored in kops? E.g. I use one NAT gateway for 3 AZs (I removed the other two NAT gateways from terraform.tf manually)
Will the kops upgrade procedure bring those two NAT gateways back? Right?
I don’t have enough context
A terraform.tf file? …that could be anything
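For context: if the terraform.tf came from kops' terraform target, kops regenerates it from the cluster spec in the state store, so manual edits (like the removed NAT gateways) get overwritten the next time you export. A sketch, with placeholder cluster and bucket names:

```shell
# Re-export terraform from the kops cluster spec (names are placeholders).
# This rewrites terraform.tf from scratch, discarding manual edits -
# the spec still describes one NAT gateway per AZ, so all three come back.
kops update cluster \
  --name example.k8s.local \
  --state s3://example-kops-state \
  --target=terraform \
  --out=.
```

If you want the single-NAT-gateway layout to survive upgrades, it needs to live in the kops cluster spec (or in terraform you manage yourself), not in hand edits to the generated file.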
Hi everyone! I have a new question. I'm trying to set up a k8s cluster using kops. I know the official documentation says I should provide the user with these IAM permissions:
AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess
But these are all FullAccess permissions, which is very insecure. Do you have a minimal rule set that avoids FullAccess?
I'd like to know how you all handle this setup, because sometimes we need to set up a k8s cluster in a customer's account, and the customer's admins are wary of FullAccess policies
You’re not giving this level of access to kops
you are giving this level of access to the person or process responsible for provisioning kops
the user will need CRUD for EC2, ELBs, EBS, EIP, S3, VPC, and at that point, they are basically admins
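To make that concrete: the kops getting-started docs attach those managed policies to a dedicated group/user used only for provisioning, rather than to a human admin account. Roughly:

```shell
# Dedicated IAM group and user for running kops itself
# (per the kops AWS getting-started docs).
aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
```

The scope of the blast radius is then the kops user's credentials, which can be rotated or revoked independently of anyone's day-to-day access.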
I'm also looking at kops configured with public DNS, but I wonder if that's a good idea given that it creates public DNS records for etcd… do you know anything about this?
Hello All, I was reading Kubernetes Security Best Practices and it mentions using a private topology in a private VPC. Does anyone here run a public website of sorts on top of a kops private topology? How is that working out?
@Fernanda Martins exactly, so best practice is to run all the masters and nodes on a private topology, but then use an
Ingress to expose a service
so a service will sit on a private “cluster ip”, and the (public) ingress will send traffic to that service.
technically, an ingress can be public or private. in your case, you’d want a public ingress.
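The shape described above can be sketched like this (the names `web` and `example.com` are hypothetical; the point is a ClusterIP service that stays private, with the ingress as the only public entry):

```shell
# Private service + public ingress (API versions as of the k8s 1.11/1.12 era).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # private cluster IP only; pods stay unreachable from outside
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
EOF
```

The ingress controller's load balancer sits in the public subnets; the nodes and the service's cluster IP never do.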
That's essentially what kops does, because I see some private subnets and public ones tied to a load balancer…
But I wonder if the public ones are configured in the best way…
There’s never a single best way; I guess it depends on the organization. For our use-case, we stick with the
We are about to explore upgrading kops and k8s to 1.12.x from 1.11.x
I have done the upgrades many times with Kops, not within the scope of geodesic
Anyone else done so yet?
I mean looking at this https://github.com/kubernetes/kops/blob/master/docs/releases/1.12-NOTES.md
it does sound a lot safer to spin up a new cluster
We are looking at upgrading from Kops 1.11 to 1.12. The upgrade instructions mention that it is a disruptive upgrade without going into details of how much. Is there anyone who has gone through it and can share their experience? cc @Jeremy (Cloud Posse)
Long thread below that
thanks, I went over that
honestly at this point im thinking I will build the capability to do full cluster backup and restore
try the update path as described in kops
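For reference, the in-place update path documented by kops is roughly this sequence (cluster name and state bucket are placeholders):

```shell
# In-place upgrade path per the kops docs (placeholder names).
export KOPS_STATE_STORE=s3://example-kops-state

kops upgrade cluster example.k8s.local --yes         # bump the kubernetes version in the spec
kops update cluster  example.k8s.local --yes         # apply the new cloud configuration
kops rolling-update cluster example.k8s.local --yes  # replace masters first, then nodes
```

The rolling update is where the 1.11 → 1.12 disruption lands, since the etcd migration means the control plane goes down rather than rolling one member at a time.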
probably better to rebuild if you can
if it goes tits up then re-roll the version we have
since the migration of data from one cluster to another requires downtime as is
feels like a better route
will need some planning and testing as we have several prod workloads across 3 regions
will be fun though