#kubernetes (2019-07)
Archive: https://archive.sweetops.com/kubernetes/
2019-07-02
Sanic is an omni-tool which lets you build, deploy, and manage Kubernetes clusters.
reminds me of #geodesic
Looks like a go cli tool that can be added to geodesic
2019-07-03
anyone running a container with 200GB+ memory ?
bad idea ?
just curious, why do you need that size of container?
somebody mentioned a tool for setting ENV variables for docker with support for different environments ?
with some template support
for the life of me I can’t recall the name….
dockerize?
it was something like systemenv or something with env
envconsul?
nop
probably that? https://github.com/mumoshu/variant
Wrap up your bash scripts into a modern CLI today. Graduate to a full-blown golang app tomorrow. - mumoshu/variant
I think so…
2019-07-04
2019-07-06
2019-07-08
2019-07-09
Here is an example which should be documented, it uses only pre-existing IAM and VPC resources: apiVersion: eksctl.io/v1alpha4 kind: ClusterConfig metadata: name: test-cluster-c-1 region: eu-north-…
Anyone know if it’s possible to pass settings in the config to run commands on startup (like with kops)?
…in eksctl
(@mumoshu)
e.g. in kops, we can do:
hooks:
  # Mitigate CVE-2019-5736
  - before:
      - docker.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/bin/chattr +i /usr/bin/docker-runc
There's no easy way to add systemd units like that in eksctl. Technically there are two options though:
- use preBootstrapCommands to write systemd units (https://github.com/weaveworks/eksctl/blob/cf5e078273d8d0d8fa802ae704f038d9c56ad8d7/pkg/apis/eksctl.io/v1alpha5/types.go#L457) - see the sketch below, or
- deploy a privileged daemonset that mounts host volumes and writes unit files (https://github.com/mumoshu/kube-node-init)
Kubernetes daemonset for node initial configuration. Currently for modifying files and systemd services on eksctl nodes without changing userdata - mumoshu/kube-node-init
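A minimal sketch of the preBootstrapCommands option, assuming eksctl's v1alpha5 ClusterConfig schema; the cluster name, region, and node group values are placeholders, and the command applies the same CVE-2019-5736 mitigation as the kops hook above, just run directly instead of via a systemd unit:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster    # placeholder
  region: eu-west-1        # placeholder
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
    # commands run on each node before it bootstraps and joins the cluster
    preBootstrapCommands:
      - "chattr +i /usr/bin/docker-runc"   # mitigate CVE-2019-5736, as in the kops example above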
Hi, has anyone tried the bitnami kubeprod.io stack? (https://github.com/bitnami/kube-prod-runtime)
A standard infrastructure environment for Kubernetes - bitnami/kube-prod-runtime
2019-07-10
hi, does anyone here know about using boolean values with k8s?
i'm trying to read the value from an environment variable, and i realized that when setting the env var inside a pod, it's always formatted as a string
for example MY_VAR=abc will be MY_VAR='abc' inside a pod
if a program is expecting a boolean type, it will throw an error
is there a way to solve this?
thanks
environment variables are always strings
it needs to be handled by your program or whatever library you are using
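To illustrate (a minimal sketch; pod, image, and variable names are made up): in a pod spec the env value field is a string, so a boolean has to be written as a quoted string and converted inside the application (e.g. strconv.ParseBool in Go):

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: myorg/myapp:latest    # placeholder image
      env:
        - name: MY_FLAG
          value: "true"            # must be a quoted string; the app parses it into a boolean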
thanks @MiLk
2019-07-11
We are working on the next version of the Kubernetes networking plugin for AWS. We've gotten a lot of feedback around the need for adding Kubenet and support for other CNI plugins in EKS. This …
Higher container density is coming
2019-07-14
Virtual Kubernetes. Contribute to ibuildthecloud/k3v development by creating an account on GitHub.
2019-07-17
Hi, I have an idea about how I'm going to implement config templating for our containers and I would like some feedback:
- Secrets and non-secrets will be stored in AWS Parameter Store + KMS
- Chamber will be used to update/create secrets
- Path IAM roles will be created for every environment: /dev/secrets, /prod/secrets, etc.
- The ECS Task per environment will have access to its /env path that has secrets and non-secrets (no shared configs)
- Devs will use chamber to set ENV variables to run their local containers
- confd might be used to create the config templates on Docker build
mostly Jenkins will be doing the initial push and CodeDeploy will do the rest for Blue/Green
any comments?
2019-07-18
Anyone familiar enough with the Kubernetes API to maybe know a way to get the resourceVersion of the parent for a given pod? Trying to ignore events in the watch if the change was initiated by its parent (I am not the controller). This is related to a ReplicaSet
2019-07-19
Draft is a tool for developers to create cloud-native applications on Kubernetes.
saw this at kubecon
my gripe with this is the feedback loop for rebuilding an image is slow
what were your thoughts?
so during development, esp. if you save a lot, assuming you're doing some web dev, you'd typically want to refresh your browser right after you save
and see the changes right away
even during their demo at kubecon there was a bit of waiting
for the demo image (prob really small) to rebuild
I think developers mostly want live reloading
e.g. what you get with telepresence
yep
most prob use docker compose to spin up dependencies
and run their app locally
i think itd be awesome if you could have that type of live reloading but your app is hosted on a k8s cluster in the cloud
each developer would have their own namespace
yep - that’s what we’re working on
but haven’t yet tackled telepresence
are you able to do live reloading comparable to running it locally
cause that's a gamebreaker i think for most devs
Telepresence: a local development environment for a remote Kubernetes cluster
so with telepresence you run “the” service locally
but all your backing services run in k8s (e.g. in a developer namespace)
ah
telepresence is like a reverse proxy. it sits in k8s and any requests it gets it sends back to the service on your laptop
so it’s like teleporting your local service into the cluster
plus your local service can access everything running in k8s (e.g. database or other backing services)
since it runs on your local laptop, you get all the benefits
easier debugging, attaching debuggers, live reloads, etc
ah yeah
but i guess other than the fact that dependent services are hosted in the cloud
is there any other benefit over running those in docker compose
you’re testing your services in an environment that’s closer to staging/prod
if you have 30-40 microservices as part of your stack, good luck doing that on your laptop
if you need large datasets for development, can’t do that easily locally
maybe it’s nicer to have all the data stay in AWS from a security perspective
truee
multiple developers can be working on pieces of the project at the same time
and using a shared environment
so you host the db remotely as well
assume youre working on an API
its easier for others to QA changes in a public environment (e.g. if your laptop is offline, no one can review)
yea, usually run db as a container for these env
nice
yeah the large dataset hosted remotely one is actually really useful
allowing multiple devs on same project
Has anyone here been able to get Istio installed to EKS? I’m trying to get it installed with my worker nodes all residing in private subnets and I’m running into weird issues.
I think @Vidhi Virmani did
Most of the examples I come across online assume you’re in public subnets with ELBs and security groups open to the world
@Vidhi Virmani if you’ve installed Istio on EKS in private subnets, ping me!
2019-07-22
Is there a way to have a helm chart create external resources? (RDS/ElastiCache)
AWS Service Operator allows you to create AWS resources using kubectl. - awslabs/aws-service-operator
AWS Service Broker. Contribute to awslabs/aws-servicebroker development by creating an account on GitHub.
Then use the Helm raw chart to provision the CRDs for RDS
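Roughly, that pattern could look like the sketch below, assuming the incubator "raw" chart's interface of a resources list rendered verbatim; the CRD group/version, kind, and spec fields are placeholders - use whatever CRDs the operator you install actually defines:

# values passed to the raw chart (sketch)
resources:
  - apiVersion: example.aws/v1alpha1   # placeholder - depends on the installed operator
    kind: DBInstance                   # placeholder kind
    metadata:
      name: my-postgres
    spec:
      engine: postgres                 # placeholder spec fields
      instanceClass: db.t3.medium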
Thank you!
What are people using as an oauth2 provider to log in to their apps like the k8s dashboard?
keycloak
- gatekeeper proxies
keycloak too, but not for k8s - for pretty much anything else
2019-07-23
I’m interested to hear how people are using CD in Kubernetes. Is anyone doing Canary deployments on their EKS cluster? How?
i can’t answer your question, but is that commander keen as your profile pic?
I’d imagine a lot of people are using a service mesh like istio to manage the networking side of the canary deployment, but how they wrap all that in CD I’ve got no experience with
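For example, the traffic-shifting half of an Istio canary is typically a weighted VirtualService like this sketch (service name and subsets are placeholders; the stable/canary subsets would be defined in a DestinationRule, and the CD tooling is what adjusts the weights over time):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: stable    # defined in a DestinationRule (not shown)
          weight: 95
        - destination:
            host: myapp
            subset: canary
          weight: 5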
Rio by Rancher. Check it out.
2019-07-25
last year I spent a few months troubleshooting and improving apps running in Elastic Beanstalk, with help from guys here (thank you). This year I'm deleting those Elastic Beanstalk stacks one by one after moving the apps to K8s. #lifeofadevopsengineer
haha too true. Thanks @i5okie for the update.
2019-07-29
I am looking at starting to use Kubernetes for the first time for a small system that I would eventually grow. Would you suggest using Kubernetes directly or using AWS EKS?
I would give eksctl a shot
Probably the most turn-key way to get up and running with EKS for a small project.
(we still use kops)
i was playing around with kops over the weekend
worked really well, just unsure at this point how to manage everything
kops is easy to get up and running.
the challenge with kubernetes is updates between major releases can be tricky
e.g. 1.11 -> 1.12 upgraded to etcd3 and there was no automated way to easily upgrade
while on EKS, those kinds of upgrade challenges are handled by the platform
i believe the upgrade from 1.14 to 1.15 is also that way
Thanks that is really good to know. I’ll give EKS a go. Much appreciated
we also have TF modules for EKS
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
complete example https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete
Awesome
Thanks so much!
Just found this as well, seems like a good beginner resource https://eksworkshop.com/introduction/
Amazon EKS Workshop
yes, I saw that - looks amazing
2019-07-30
yes
https://github.com/cloudposse/terraform-aws-eks-cluster is it possible here?
module "eks_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master>"
namespace = "eg"
stage = "testing"
name = "cluster"
tags = "${var.tags}"
vpc_id = "<YOUR VPC ID>"
subnet_ids = ["<YOUR PUBLIC SUBNET ID'S>"]
# `workers_security_group_count` is needed to prevent `count can't be computed` errors
workers_security_group_ids = ["${module.eks_workers.security_group_id}"]
workers_security_group_count = 1
}
2019-07-31
I'm looking for a minimally-hacky way to restart running pods in order to pick up a change in config data
here's the use case: pods boot up and source some environment vars from SSM
now I change a config value in SSM and would like pods to pick up that value
I’m looking for a minimally-hacky way
I’m okay with some amount of hack tbh
1 idea is to delete pods 1 by 1. They get restarted and when they run they fetch the data fresh from SSM and voilà, config data is up to date
We use this https://github.com/stakater/Reloader
A Kubernetes controller to watch changes in ConfigMap and Secrets and then restart pods for Deployment, StatefulSet, DaemonSet and DeploymentConfig – [✩Star] if you're using it! - stakater/Relo…
cool thanks @Erik Osterman (Cloud Posse) I’ve heard of it and will check it out.
it was very easy to setup
our helmfile for it is here: https://github.com/cloudposse/helmfiles/blob/master/releases/reloader.yaml
Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
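For context, opting a workload into Reloader is usually just an annotation on the Deployment; a minimal sketch follows (deployment, image, and ConfigMap names are placeholders). Note that per its README Reloader watches ConfigMaps and Secrets, so the SSM values would need to land in one of those first:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    reloader.stakater.com/auto: "true"   # roll the deployment when a referenced ConfigMap/Secret changes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myorg/myapp:latest      # placeholder
          envFrom:
            - configMapRef:
                name: myapp-config       # placeholder; the ConfigMap Reloader watches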
I wonder if there’s a way to hook it into Prometheus somehow, so it would only restart a pod if the app overall is healthy. A cursory look at this suggests I’d have to roll my own
assuming health is only determined by prometheus…
however, services should have a healthcheck endpoint
per the Reloader README.md, it says: "then perform a rolling upgrade"
the only way to do a rolling upgrade is to wait for new pods to become healthy before moving on
thus if the new secrets cause problems, that should cause the rollout to hang
then the prometheus alerts for a pod crash loop should fire on the unhealthy pod
to me, it looks like what they do is update an environment variable which causes k8s to do the rolling update
thus all rolling update semantics are handled by k8s.
Yeah, this gets more complicated with distributed applications such as Kafka, hence the need for external monitoring