#kubernetes (2020-08)
Archive: https://archive.sweetops.com/kubernetes/
2020-08-04
any suggestions for a service that auto-syncs secrets stored in AWS Parameter Store -> k8s secrets?
external secrets operator
An operator to fetch secrets from cloud services and inject them in Kubernetes - ContainerSolutions/externalsecret-operator
Thank you, Erik!
Do you have any experience or opinion about https://github.com/godaddy/kubernetes-external-secrets ?
Integrate external secret management systems with Kubernetes - godaddy/kubernetes-external-secrets
Oh right, the reason I was looking at the one by ContainerSolutions was that it supported 1Password.
I would choose the godaddy one instead if strictly using SSM
(1Password support is nice from a developer UX perspective and doesn’t require providing any access to AWS, period.)
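For reference, with the godaddy operator an SSM-backed secret ends up looking roughly like this (just a sketch; the parameter path and names are made up, so check the project README for the exact apiVersion and fields):

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-app-secrets
spec:
  backendType: systemManager
  data:
    # Each SSM Parameter Store path maps to a key in the generated Secret
    - key: /prod/my-app/db-password
      name: DB_PASSWORD

The operator then keeps a regular Kubernetes Secret (my-app-secrets) in sync with the parameter.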
Can someone tell me how I set the TLS security policy on an ELB to use the “ELBSecurityPolicy-TLS-1-2-2017-01” predefined policy? I found what looks like the annotation necessary to do that, but after applying it, I am not seeing a change on the ELB……
service.beta.kubernetes.io/aws-load-balancer-security-policy: ELBSecurityPolicy-TLS-1-2-2017-01
2020-08-05
figured it out
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
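For context, here's roughly where that annotation sits on a Service of type LoadBalancer (a sketch; the name, selector, ports and ACM certificate ARN are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Terminate TLS on the ELB with an ACM cert and pin the predefined security policy
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:<account-id>:certificate/<cert-id>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: https
      port: 443
      targetPort: 8080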
Hey guys, is anyone here able to tell me why this call doesn't delete the deployment resource in kubernetes 1.17?
curl -H "Content-type: application/json" -H "Authorization: Bearer XXXXXXXXXX" -k -X DELETE "https://<host>:<port>/apis/apps/v1/namespaces/<namespace>/deployments/<deployment_name>"
it returns a 200 and some fancy response like this:
{
  "kind": "Deployment",
  "apiVersion": "apps/v1",
  "metadata": {
    "name": "zipkin",
    "namespace": "monitoring",
    "selfLink": "/apis/apps/v1/namespaces/monitoring/deployments/zipkin",
    "uid": "33341330-5dca-4f45-8645-7e74ac8c9464",
    "resourceVersion": "26994061",
    "generation": 2,
    "creationTimestamp": "2020-08-05T19:16:28Z",
    "deletionTimestamp": "2020-08-05T19:16:47Z",
    "deletionGracePeriodSeconds": 0,
    "labels": {
      "name": "zipkin"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "1"
    },
    "finalizers": [
      "foregroundDeletion"
    ]
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "name": "zipkin"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "name": "zipkin"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "zipkin",
            "image": "openzipkin/zipkin",
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 10,
        "dnsPolicy": "ClusterFirst",
        "nodeSelector": {
          "nodegroup": "eks-monitoring"
        },
        "automountServiceAccountToken": false,
        "shareProcessNamespace": false,
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": 1,
        "maxSurge": 2
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 2,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2020-08-05T19:16:28Z",
        "lastTransitionTime": "2020-08-05T19:16:28Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2020-08-05T19:16:29Z",
        "lastTransitionTime": "2020-08-05T19:16:28Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"zipkin-5d6d665688\" has successfully progressed."
      }
    ]
  }
}
What is this about the replicaset having successfully progressed? I want it gone xD
The deployment, the replicaset under it, and the pods under it, cascading from the deployment DELETE request
is this some kind of bug ?
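For reference, the deletionTimestamp and the foregroundDeletion finalizer in that response mean the DELETE was accepted and is cascading in the foreground: the Deployment object only disappears once its ReplicaSet and Pods are gone. The propagation policy can also be set explicitly on the call; a sketch with placeholder host/token:

curl -k -X DELETE \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  "https://<host>:<port>/apis/apps/v1/namespaces/<namespace>/deployments/<deployment_name>"

Using "Background" instead returns immediately and lets the garbage collector clean up the ReplicaSet and Pods afterwards.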
2020-08-10
Hey folks — I’m looking for a pointer in the right direction regarding database migrations on kubernetes.
I’ve inherited an ECS application that is in progress on migration to K8s. One of the patterns currently in place is using Lambdas to run Database migrations through an NPM project. It’s a bit complicated and we need to overhaul the process anyway, so I’m looking to pull this into the k8s migration and do it the “k8s way”. I’m wondering how others approach running initial database creation, migration, and seeding scripts? K8s jobs? Init containers? Or some other k8s pattern I’m not aware of?
Any thoughts / suggestions / things I should read?
Look into helm hooks
That’s how we typically do it.
Sweet. Will do.
Helm hooks are just jobs that have a special annotation and are executed at certain life cycle events
Sounds perfect. That’s the answer I was looking for. Thanks man!
For our staging environment, we use a pre-install hook in helm for creating a database and importing a dump before deploying, for example
this is what ours looks like
apiVersion: batch/v1
kind: Job
metadata:
  name: "api-migration-{{ now | unixEpoch }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install, post-upgrade
...
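A commonly paired set of annotations (not shown in the snippet above, just a suggestion) controls ordering and cleanup of the hook Job so completed migration Jobs don't pile up:

annotations:
  "helm.sh/hook": post-install, post-upgrade
  # optional: order this hook relative to other hooks for the same event
  "helm.sh/hook-weight": "0"
  # delete the previous hook Job before creating a new one on the next release
  "helm.sh/hook-delete-policy": before-hook-creation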
I’d also look into doing backwards and forwards compatible database migrations and https://aws.amazon.com/blogs/database/building-a-cross-account-continuous-delivery-pipeline-for-database-migrations/
But that's for… more intense requirements
To increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. However, continuous delivery for database migrations is often a manual process. Adopting continuous integration and continuous delivery (CI/CD) for database migrations has the following benefits: An automated multi-account setup simplifies database migrations. The […]
@Vlad Ionescu (he/him) Yeah, looked into solutions like flyway. My client is a node / typescript shop and they’re using some up / down migration tool. But one thing at a time
We’ve been trying to answer the same question. Our client is a Java shop and they are using Liquibase extensively; migration scripts are located within the app and managed by the app during its startup. It worked pretty well in their pre-Kube env. The problem emerged with the migration to K8s. Now we’ve got locks here and there, and unlocking all this stuff manually is a pain as there are quite a few microservices with their own databases and a lot of envs containing all this stuff. It happens because we have several replicas, because we have cluster autoscaler, because the scheduler decides to reschedule the pod, because sometimes it takes too long for migration scripts to complete, etc. And it’s obvious that the current approach should be reconsidered and we should find a K8s-aware way.
Consider maybe a continuous delivery framework like ArgoCD
There are some projects trying to do something with liquibase locks
https://github.com/szintezisnet/Kubernetes-Liquibase-Lock-Release
and
https://github.com/oridool/liquibase-locking
Haven’t tried any of these. To me they don’t look mature, but who knows
Contribute to szintezisnet/Kubernetes-Liquibase-Lock-Release development by creating an account on GitHub.
Automatic management of Liquibase locking mechanism - to avoid infinite lock and manual fix - oridool/liquibase-locking
And there are these folks trying to define your db structure as K8s objects:
SchemaHero is a Kubernetes Operator for Declarative Schema Management for various databases.
Let’s not forget (and I’m joking…)
2020-08-11
At re:Invent 2019, we announced the ability to deploy Kubernetes pods on AWS Fargate via Amazon Elastic Kubernetes Service (Amazon EKS). Since then we’ve seen customers rapidly adopt the Kubernetes API to deploy pods onto Fargate, the AWS serverless infrastructure for running containers. This allows them to get rid of a lot of the undifferentiated […]
Basic Kubernetes networking question: Do CNIs (or any networking layer in K8s) use a bridge port at the NIC level to do their magic, or do they just use multiple IPs per interface? Wondering about networking layer performance compared to bridge mode in other settings
I’m wondering about this comment in ECS
Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family and container definitions are required in a task definition, while task role, network mode, volumes, task placement constraints, and launch type are optional.
The host and awsvpc network modes offer the highest networking performance for containers because they use the Amazon EC2 network stack instead of the virtualized network stack provided by the bridge mode. With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
and how that could relate to K8s
I have no idea on your actual question — but just wondering @jose.amengual, you / your org moving away from ECS and towards k8s?
we are thinking
but we were having a discussion on latency
and I think bridge mode is slower than having a NIC+IP on a host, but even more so in docker
Aha interesting. Sounds like you have a large investment in ECS. Let me know where you land — I’m interested if you end up taking the plunge.
but since in K8s bridge is widely used I was wondering how latency plays in
we are going to move or I quit
I hate ECS
Hahaha damn, strong opinion! Gotcha man.
there are sooooo many little things that add up
it is WAY easier to run docker-compose on instances behind an ALB
using cloud-init
ECS was supposed to help you to run containers, but the learning curve is huge
it's like learning chef or ansible
Hahah I’d say the learning curve for k8s is larger though… it just gives you a better, open source platform.
yes, maybe the curve is about the same but then the deployments and such are easier
I think gke does this too to get better performance, so in your gke cluster the pods get an ip address from the real subnet, so you don't have an overlay network inside k8s
gke uses calico
interesting
Haha, hate ECS. I’ve been following the ECS rebalancing issue (https://github.com/aws/containers-roadmap/issues/105) for a few years but recently unsubscribed. What a relief it was
@euank Doesn't seem like ECS rebalances tasks to allocate resources more effectively. For example, if I have task A and task B running on different cluster hosts, and try to deploy task C, I…
Are you load testing the k8 bridge anyhow? It will be interesting to use something like Apache AB from more than 6 IP addresses to cover the different Zones.
no I have not done any testing yet, just wondering if this was a thing
As per our company policy, Linux servers get the latest patch updates (down to the kernel level) every 2 months, which forces all the machines to reboot. At that point the Kubernetes cluster gets broken: all nodes end up in a NotReady state. Has anyone faced a similar problem…? If so, how are you handling it…?
are all of the nodes getting restarted at the same time? can’t you stagger them out so pods can shuffle around and normalize between reboots?
all of the nodes restart at the same time … i was thinking it's probably breaking the cluster network
is someone mandating that all of the nodes have to start at the same time?
if you don’t want stuff to break don’t restart everything at the same time
our infrastructure is managed by a different team and they do weekend patching for almost 500-1000 machines at the same time. so it’s difficult to request them to reboot servers in sequence
idk. i did something like this when i needed to do this exercise by hand.
https://gist.github.com/jfreeland/e937248a298e7d8e506b92b6bbc8ebad
there were some oddball cases where i couldn’t let the cluster-autoscaler do it for me so
“we have to restart 500-1000 machines and we don’t know how to do it properly” is a pretty nutty reason to risk taking a service down.
that particular script was focused on an aws_autoscaling_group but it’d be trivial to modify that to ssh to the node and reboot it as well. lots of options. is your kubernetes control plane on some of the nodes that are being recycled too?
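For reference, the usual per-node flow that these scripts automate is roughly (a sketch, not taken from that gist):

# move workloads off the node, respecting PodDisruptionBudgets
kubectl cordon <node>
kubectl drain <node> --ignore-daemonsets --delete-local-data
# reboot the node out of band (SSH, SSM, or the cloud API) and wait for it to come back Ready
kubectl uncordon <node>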
They have to find a way to execute a “Rolling restart”. It should be possible in every cloud provider. For example, in an AWS Auto Scaling group you have the instance refresh utility
And it’s callable via the AWS API (could be done right after the patch is applied, in your example)
It’s one way to tinker your way through it, but you guys gonna find your own hehe
haha yup I got it Thanks a lot for your inputs @JMC & @joey
2020-08-12
Is there a public document I could refer to on how to extend the k8s HPA? I want to write a custom HPA that uses custom metrics to scale down. e.g. I have a custom metric I can get from prometheus that is the connection count to a pod; I want my pods to scale down only when the connection count is 0 and no new connections are being sent to them. Is there such a thing already?
https://github.com/stefanprodan/k8s-prom-hpa
https://github.com/DataDog/watermarkpodautoscaler
if i’m not mistaken https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-limit-scale-down-rate was added in v1.17 or v1.18.. i had to write a custom autoscaler in the past because kubernetes hpa didn’t have a cool down factor to not scale things down too fast
Kubernetes Horizontal Pod Autoscaler with Prometheus custom metrics - stefanprodan/k8s-prom-hpa
Custom controller that extends the Horizontal Pod Autoscaler - DataDog/watermarkpodautoscaler
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example, DaemonSets. The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller.
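For context, the behavior field looks roughly like this on autoscaling/v2beta2 (a sketch: the open_connections metric is made up and needs a custom metrics adapter such as prometheus-adapter, and the HPA on its own won't stop new connections from being routed to a pod):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: open_connections
        target:
          type: AverageValue
          averageValue: "1"
  behavior:
    scaleDown:
      # wait 5 minutes of consistently low metrics before removing pods
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60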
This is great joey, thank you!
2020-08-13
We have another poem for you –
“The time has come,” the maintainers said, “To talk of software fates: Of upgrades – and shipping Helm v3 – Of bugfixes – and k8s –”
Read @bridgetkromhout’s latest blog for details on the Helm v2 deprecation timeline: https://helm.sh/blog/helm-v2-deprecation-timeline/
If I'm running an app that requires one instance under a deployment, how do I ensure 1 pod is always available? The case is that manual deletion of the pod causes some downtime. PDB doesn't seem to do it.
Hey folks, in helm if I want to share template helper functions with my many application Charts (functions like fullName that are pretty consistent across applications) — what is the best way to do that? Create a parent chart and treat all application charts as subcharts? Or is there a better way?
See this chart for inspiration and patterns:
Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.
Awesome — Will check that out.
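For anyone reading later, the usual pattern (a sketch with made-up names; Helm 3 formalizes this as a library chart, and before that a plain "common" chart pulled in as a dependency worked the same way) is to ship only named templates from the shared chart and include them from each application chart:

# common/templates/_helpers.tpl
{{- define "common.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

# my-app/templates/deployment.yaml (with common listed under dependencies in Chart.yaml)
metadata:
  name: {{ include "common.fullname" . }}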
Does CP use the common charts?
I ask as I’m digging into it now and I’m wondering how widely they’re used by the community. Even though they’re going to make my charts more DRY… they are also going introduce some fun rabbit holes in where things are coming from since those common templates are doing a good bit.
We have until recently. See https://github.com/cloudposse/charts/pull/256
what In [monochart]: Fix incorrect reference to serviceAccount in jobs.yaml Use empty maps rather than nil for default empty map values Remove dependency on obsolescent Kubernetes incubator Helm r…
Interesting… so you’re pulling back on using it and bringing it in because they’re deprecating that repo.
Is monochart your --starter chart or is it an older version of a library chart?
It doesn’t seem to be marked a library chart and doesn’t seem to be using the library chart patterns, but then again I don’t know how folks were accomplishing that prior to helm3.
the monochart is our universal helm chart we use in nearly every customer engagement.
Using the monochart, we can deploy 99% of customer applications we’ve come across. It means we don’t need to develop an original chart for every application and thus reduces the techdebt and chart sprawl.
Gotcha. Seems I need to do some reading to grok that a bit more.
We look at the values.yaml as a custom schema that lets you define a custom, declarative specification for how an app should work.
Then we make the chart implement that specification.
then we make developers choose a specification.
Gotcha. And I’m guessing you then use Helmfile to just provide values.yaml files to monochart for each application image you're deploying?
That sounds like a pattern I want! Tomorrow I’m jumping into the old ref architecture so I’ll see what I find while digging into that.
Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the c…
2020-08-16
Just installed OpenShift Dedicated on AWS, any advice on setting up Jenkins on it? I use GitLab to store BE/FE app source code and am trying to deploy to OpenShift pods.
2020-08-19
Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds
Wow, I knew about the recent pricing changes and how orphaned images were going to be deleted.
Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds
I didn’t know about this:
Why is this operator required?
The primary reason for creating this operator, is to make it easier for users of Kubernetes to consume images from the Docker Hub after recent pricing and rate-limiting changes were brought in, an authenticated account is now required to pull images.
Unauthenticated users: 100 layers / 6 hours
Authenticated users: 200 layers / 6 hours
Paying, authenticated users: unlimited downloads
See also: https://www.docker.com/pricing
DockerHub rate limits seems like a huge liability!
Right? I was also surprised by that!
more reasons to build up a pull thru registry
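For what it's worth, a pull-through cache with the plain Docker registry (distribution) is mostly just config like this (a sketch; the credentials are optional but needed to get the authenticated limits):

# config.yml for a registry:2 container acting as a Docker Hub mirror
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: ":5000"
proxy:
  remoteurl: https://registry-1.docker.io
  username: <dockerhub-username>
  password: <dockerhub-access-token>

Then point the nodes' container runtime at it as a registry mirror.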
uff. You know what I just now see as an Issue? CI/CD !
Sorry to drag up this oldish thread but the great limit switching time is approaching… Just wondering if anyone has come up with a reasonably elegant way to tackle getting ready for this?
Sure, you can docker login or manage your docker config on nodes with user-data, or you can create a Secret in every namespace and then rewrite all of your charts, deployment YAMLs etc to include the relevant imagePullSecrets
oh, and re-deploy or kubectl patch them all… and update Jenkins Kubernetes plugin configs… Probably have to hunt down any special implementations in Jenkinsfiles … I’m sure you get the point.
Any experiences to share?
@tim.j.birkett did you see @Pierre Humberdroz’s recommendation? https://github.com/alexellis/registry-creds
Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds
Indeed I did see that @Erik Osterman (Cloud Posse) - it looks quite interesting and fairly similar to: https://github.com/zakkg3/ClusterSecret
Kubernetes ClusterSecret operator. Contribute to zakkg3/ClusterSecret development by creating an account on GitHub.
For anyone else who stumbles across this during a search of Slack or the Cloudposse Slack archives, @zadkiel kindly enlightened me to this project: https://github.com/titansoft-pte-ltd/imagepullsecret-patcher - It manages the image pull secrets and patches the default service account in each namespace, keeping everything up to date on changes.
A simple Kubernetes client-go application that creates and patches imagePullSecrets to service accounts in all Kubernetes namespaces to allow cluster-wide authenticated access to private container …
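For reference, what these tools automate per namespace boils down to a registry Secret plus an imagePullSecrets entry on the default ServiceAccount (a sketch; the dockerconfigjson payload is elided):

apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-creds
  namespace: my-namespace
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-namespace
imagePullSecrets:
  - name: dockerhub-creds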
@jdtobe has joined the channel
2020-08-20
AWS Controllers for Kubernetes (ACK) is a new tool that lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly-available Kubernetes applications that utilize AWS services. Today, ACK is available as a developer preview on GitHub. In this post we will give you a brief introduction to the […]
Has anyone ever used Gitlab Auto DevOps with custom Docker images for a worker and a service in the same repository (not using Herokuish etc.)? If so, how was the configuration experience?
2020-08-21
In this post we’ll explore K8s community decision making process by looking underneath the hood of the ‘kerfluffe’ of Google LLC being called out by Samsung SDS engineers for skipping ‘graduation criteria’ while merging the new ‘kustomize’ subcommand into upstream ‘kubectl’.
2020-08-25
So, Docker Inc has finally updated the FAQ for their previously announced service limits. And I’m not going to lie, it’s pretty brutal. You should consider any (unpaid) use of Docker Hub to be an operational risk going forward.
AWS Controllers for Kubernetes - https://github.com/aws/aws-controllers-k8s
AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes - aws/aws-controllers-k8s
Not really sure how i feel about that. I’d rather manage things via terraform, or am I missing something?
AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes - aws/aws-controllers-k8s
ya I'm still looking into it. it does look promising. and it doesn't do all of AWS like terraform does, so it's not going to be a replacement
2020-08-26
2020-08-27
Is anyone using Datadog Log collection with EKS + Fargate? Did you have to jump through the whole Firelens + Fluentbit hurdles similar to DD + ECS log collection?
I don’t even know if Fluentbit + Firelens on EKS is a thing after looking for a few… but I would be interested if anyone got DataDog logging working properly with EKS Fargate services. This seems like a real pain.
Can’t DD just pull from Cloudwatch ?
Haha but still need to ship logs to cloudwatch and then also pay them as well.
I believe I found the solution. I need to mount an emptyDir volume to both the agent and the application and then configure DD to look at that location. It’s just tedious and I’m surprised it’s not documented well.
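Roughly like this (a sketch of the shared-volume idea only; the images and path are made up, and the agent still needs its log config pointed at that path):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app   # app writes its log files here
    - name: datadog-agent
      image: datadog/agent:7
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app   # agent tails the same files
  volumes:
    - name: app-logs
      emptyDir: {}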
I POC’d the Firelens + Fluentbit with ECS Fargate, I got it working pretty quickly – didn’t move forward with it though because we’d have to recreate a handful of Monitors/Log Views that are currently based on the CloudWatch Log Group Name
I’m probably going to end up creating a Fluentbit configuration that also pushes to CloudWatch. Which is a little silly, but then again I don’t know that I want to rely solely on DD.
the only thing I’m going to say about DD log collection is that it is SUPER expensive
far more than other SaaS products out there
we forbid the use of it in our company for that reason, just my 2 cents
Hello guys, I’m trying to install Lens on CentOS 7.6 (64-bit) using snap (https://snapcraft.io/install/kontena-lens/centos) and the installation is successful, but when I run kontena-lens it gives the below:
/snap/kontena-lens/110/kontena-lens: error while loading shared libraries: libgtk-3.so.0: cannot open shared object file: No such file or directory
sudo yum provides libgtk-3.so.0
Last metadata expiration check: 031 ago on Fri 28 Aug 2020 0651 AM UTC.
gtk3-3.22.30-3.el8.i686 : GTK+ graphical user interface library
Repo : @System
Matched from:
Provide : libgtk-3.so.0
gtk3-3.22.30-3.el8.i686 : GTK+ graphical user interface library
Repo : rhel-8-appstream-rhui-rpms
Matched from:
Provide : libgtk-3.so.0
sudo yum install gtk3-3.22.30-3.el8.i686 -y
after installing all dependency packages I'm still getting the same error. Any suggestion on this?
Get the latest version of Lens on CentOS - Lens - The Kubernetes IDE
2020-08-28
What do folks prefer for a visual K8s Tool? Lens, k9s, Octant, or other?
@roth.andy, this was one of the best tool recommendations ever! Thank you!!!