What GitOps tools do people use here? I have been looking into ArgoCD. Interested to hear about other people's experiences with any other relevant tools.
fluxcd/flux: The GitOps Kubernetes operator.
Kubernetes native workflows, deployments, CI, events
Harness looks promising. My team is working on testing with it. Will report back later.
Also CodeFresh looks nice for setting up simple stuff since you can trigger off of pushes to docker registries (along with a bunch of other stuff)
Thanks Andrew. Let me know how you get on
We’re currently using Flux in a new EKS platform we’re building out - just starting to enter developer testing now and things are looking good
Hi, has anyone used k8s and Route53 on GovCloud?
LevelUp has some open source material that they have published. https://dccscr.dsop.io/levelup-automation/aws-infrastructure
Cloud IaaS Automation
Hi, does anybody know whether images in the OpenShift internal registry can be pulled from inside the cluster and pushed to an external repository (like Nexus)?
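In case it helps, a rough sketch of one way to do this with skopeo, which copies between registries without a local Docker daemon. The registry hostname, project, image names, Nexus URL, and credentials below are all placeholders (on OpenShift 3.x the internal registry service is typically docker-registry.default.svc:5000; verify for your cluster):

```shell
# Get a token for the internal registry from your OpenShift session
TOKEN=$(oc whoami -t)

# Copy an image straight from the internal registry to Nexus
# (hostnames, project, and repository names are hypothetical)
skopeo copy \
  --src-creds "openshift:${TOKEN}" \
  --dest-creds "nexus-user:nexus-pass" \
  docker://docker-registry.default.svc:5000/my-project/my-app:latest \
  docker://nexus.example.com:8443/repository/docker-hosted/my-app:latest
```

Needs to run somewhere that can resolve the internal registry service (e.g. a pod or node in the cluster).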
Somebody else go through this list and tell me if any of them sound fishy. I want to see if your list matches my list.
This document highlights and consolidates best practices for building, deploying and scaling apps on Kubernetes in production.
This is kind’a rad coming from the DoD.
November 12, 2019: Nicolas M. Chaillan posted on LinkedIn
Trying to parse Kubernetes logs with the Helm chart from here: https://github.com/helm/charts/tree/master/stable/fluent-bit
Stuck on configuring outputs. What I need is to have a few outputs sending logs to different indices in AWS ES based on
If anyone has worked with this Helm chart or hit a similar issue, help is welcome.
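For what it's worth, a sketch of one way this is often done with that chart: bypass the single `backend` block and use the chart's `rawConfig` value to declare several `[OUTPUT]` sections, each matching a different tag and writing to its own index. The ES endpoint, tags, and index names below are made up, and you should double-check `rawConfig` against the chart version you're on:

```yaml
# values.yaml for stable/fluent-bit (sketch; verify rawConfig is the
# escape hatch in your chart version)
rawConfig: |-
  @INCLUDE fluent-bit-service.conf
  @INCLUDE fluent-bit-input.conf
  @INCLUDE fluent-bit-filter.conf

  [OUTPUT]
      Name  es
      Match kube.app-one.*
      Host  my-domain.us-east-1.es.amazonaws.com
      Port  443
      tls   On
      Index app-one-logs

  [OUTPUT]
      Name  es
      Match kube.app-two.*
      Host  my-domain.us-east-1.es.amazonaws.com
      Port  443
      tls   On
      Index app-two-logs
```

The `Match` patterns decide which records hit which index; how you tag records upstream (inputs/filters) is what makes this routing work.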
helm/charts: Curated applications for Kubernetes.
Build, Store, and Distribute your Applications and Containers - quay/quay
I’ve been unable to find info on how to switch a kops 1.13 cluster from single to multi-master; all the documentation I’ve found covers the steps from before the switch to etcd-manager. I’m having issues even connecting to
Can anyone point me in the right direction?
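Not certain about the etcd-manager-era docs either, but the kops HA migration guide follows roughly this shape (cluster name and AZs below are placeholders; please verify against the current docs before running this on anything real):

```shell
export NAME=my-cluster.example.com   # hypothetical cluster name

# 1. Add the new etcd members (both "main" and "events" clusters)
#    under etcdClusters in the cluster spec
kops edit cluster $NAME

# 2. Create an instance group for each additional master
kops create instancegroup master-us-east-1b --subnet us-east-1b --role Master
kops create instancegroup master-us-east-1c --subnet us-east-1c --role Master

# 3. Apply the change and roll the cluster
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
```

Take an etcd backup first; losing quorum mid-migration on a single-master cluster is the main risk here.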
A Kubernetes DaemonSet to gracefully handle EC2 Spot Instance interruptions. - aws/aws-node-termination-handler
I think that for a while it is better to stick with https://github.com/kube-aws/kube-spot-termination-notice-handler. It lacks the ASG detach and notification features. ASG detach improves recovery time a lot, making the interruption almost seamless.
A Kubernetes DaemonSet to gracefully delete pods 2 minutes before an EC2 Spot Instance gets terminated - kube-aws/kube-spot-termination-notice-handler
If a termination notice is received for an instance that’s running on the cluster, the termination handler begins a multi-step cordon and drain process for the node.
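That cordon-and-drain step is essentially what you would do by hand with kubectl; a minimal sketch (node name is hypothetical):

```shell
NODE=ip-10-0-12-34.ec2.internal   # hypothetical spot node

# Stop new pods from being scheduled onto the node
kubectl cordon "$NODE"

# Evict existing pods, keeping within the ~2 minute spot warning
kubectl drain "$NODE" --ignore-daemonsets --delete-local-data --grace-period=90
```

The handler just automates this when the EC2 metadata endpoint announces a termination notice.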
Does anyone know how to use kops on GovCloud? I can’t get the DNS right. Any help is appreciated
I thought it wasn’t yet supported
While there are DNS servers in the VPC, there is no Route53 service (API). This breaks many devops tools that make the assumption Route53 is / will be available (kops w/ kubernetes, for example). To be fair, some tools (like kops) provide an alternative to Route53 for bootstrapping the cluster, though our testing found the features to be buggy and not yet production quality. We worked around the need for Route53 by deploying our own self-healing and automated DNS solutions. A future post will dive into the details of our Route53 replacements.
If you have strict compliance criteria that require you to use AWS GovCloud, there are some obstacles you will encounter that we will help you address.
Someone told me of a workaround using FreeIPA; with your experience, do you think it is a good option?
It works, one of my colleagues is doing it. They said it was janky though. My company is going to be doing a TON of work in the very near future with K8s in GovCloud so I’m looking for more information on the subject as well
Just saw this today, thought it was neat: https://github.com/linki/chaoskube it’s been around a while, guess i’m late to the party lol
check it out here: https://helm-notifier.com/repos/jfrog/artifactory/7.18.3...8.0.0
UI still rough, but value immense
you can compare the changes between any 2 releases
(see url syntax)
@Jeremy Grodberg @Igor Rodionov
Thanks for sharing @Erik Osterman (Cloud Posse),
The idea was to validate this today with a little working prototype; if you have feature ideas, let me know. Currently the main benefit over hub.helm.sh is that you are able to compare two chart versions. Other features that I have planned are:
- Notification on releases
- Notification if a new helm chart is added with a keyword you are looking for.
The only thing I’ve found that’s lacking is the UI controls around logs
been using it for the past few days with a dev eks cluster and have found it very nice so far
Toying w/ it, but it doesn’t seem like it supports IAM auth. Going to tinker w/ it on minikube.
I’ve got it working now with IAM Auth - what issues are you having?
I didn’t really try very hard. I just selected my cluster map and it failed so i gave up. lol. I was just tinkering anyway.
Ok…it works fine, @Chris Fowles. It helps if you use the right config AND your AWS profile actually has access.
Looks like it’s now dead.
Just when I started playing with it.
you killed it!
Well that sucks. Just saw this. lol. I use this every day.
Ya, it’s a bummer. But they are working to open source it.
Have you played with it at all @Erik Osterman (Cloud Posse)?
nope, first i heard of it was today
Hello all, I’ve found this feature: https://github.com/zalando-incubator/stackset-controller Does anybody use it? What is your feedback?
Opinionated StackSet resource for managing application life cycle and traffic switching in Kubernetes - zalando-incubator/stackset-controller
Is it production ready?
An issue has been opened to track the fix for the CFS scheduler bug in CoreOS. People using CoreOS to host Kubernetes may want to track this: https://github.com/coreos/bugs/issues/2623