#kubernetes (2020-08)


Archive: https://archive.sweetops.com/kubernetes/

2020-08-04

tolstikov avatar
tolstikov

any suggestions for a service that auto-syncs secrets stored in AWS Parameter Store -> k8s secrets?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

external secrets operator

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
ContainerSolutions/externalsecret-operator

An operator to fetch secrets from cloud services and inject them in Kubernetes - ContainerSolutions/externalsecret-operator

tolstikov avatar
tolstikov

Thank you, Erik!

Do you have any experience or opinion about https://github.com/godaddy/kubernetes-external-secrets ?

godaddy/kubernetes-external-secrets

Integrate external secret management systems with Kubernetes - godaddy/kubernetes-external-secrets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh right, the reason I was looking at the one by ContainerSolutions was it supported 1Password.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would choose the godaddy one instead if strictly using SSM
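
For reference, a minimal ExternalSecret for the godaddy operator backed by Parameter Store looks roughly like this (the parameter path and key names are illustrative):

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  backendType: systemManager
  data:
    # SSM Parameter Store path to read (illustrative)
    - key: /myapp/production/db-password
      # key name inside the generated Kubernetes Secret
      name: DB_PASSWORD

The operator then materializes an ordinary Secret named app-secrets that pods can reference as usual.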

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(1Password support is nice from a developer UX perspective and doesn’t require providing any access to AWS, period.)

David Hubbell avatar
David Hubbell

Can someone tell me how I set the TLS security policy on an ELB to use the “ELBSecurityPolicy-TLS-1-2-2017-01” predefined policy? I found what looks like the annotation necessary to do that, but after applying it, I am not seeing a change on the ELB… service.beta.kubernetes.io/aws-load-balancer-security-policy: ELBSecurityPolicy-TLS-1-2-2017-01

2020-08-05

David Hubbell avatar
David Hubbell

figured it out: service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
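
For anyone searching later, that annotation rides on the Service object; a minimal sketch (name, ports, and selector are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
spec:
  type: LoadBalancer
  ports:
    - port: 443        # listener port on the ELB
      targetPort: 8080 # container port behind it
  selector:
    app: my-app

An HTTPS/SSL listener typically also carries the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation pointing at a certificate ARN.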

JMC avatar

Hey guys, is anyone here able to tell me why this call doesn’t delete the deployment resource in Kubernetes 1.17?

curl -H "Content-type: application/json" -H "Authorization: Bearer XXXXXXXXXX" -k -X DELETE "https://<host>:<port>/apis/apps/v1/namespaces/<namespace>/deployments/<deployment_name>"

JMC avatar

it returns a 200 and some fancy response like this:

JMC avatar

{ "kind": "Deployment", "apiVersion": "apps/v1", "metadata": { "name": "zipkin", "namespace": "monitoring", "selfLink": "/apis/apps/v1/namespaces/monitoring/deployments/zipkin", "uid": "33341330-5dca-4f45-8645-7e74ac8c9464", "resourceVersion": "26994061", "generation": 2, "creationTimestamp": "2020-08-05T19:16:28Z", "deletionTimestamp": "2020-08-05T19:16:47Z", "deletionGracePeriodSeconds": 0, "labels": { "name": "zipkin" }, "annotations": { "[deployment.kubernetes.io/revision](http://deployment.kubernetes.io/revision)": "1" }, "finalizers": [ "foregroundDeletion" ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "name": "zipkin" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "name": "zipkin" } }, "spec": { "containers": [ { "name": "zipkin", "image": "openzipkin/zipkin", "resources": {

        `},`
        `"terminationMessagePath": "/dev/termination-log",`
        `"terminationMessagePolicy": "File",`
        `"imagePullPolicy": "Always"`
      `}`
    `],`
    `"restartPolicy": "Always",`
    `"terminationGracePeriodSeconds": 10,`
    `"dnsPolicy": "ClusterFirst",`
    `"nodeSelector": {`
      `"nodegroup": "eks-monitoring"`
    `},`
    `"automountServiceAccountToken": false,`
    `"shareProcessNamespace": false,`
    `"securityContext": {`

    `},`
    `"schedulerName": "default-scheduler"`
  `}`
`},`
`"strategy": {`
  `"type": "RollingUpdate",`
  `"rollingUpdate": {`
    `"maxUnavailable": 1,`
    `"maxSurge": 2`
  `}`
`},`
`"revisionHistoryLimit": 10,`
`"progressDeadlineSeconds": 600`   `},`   `"status": {`
`"observedGeneration": 2,`
`"replicas": 1,`
`"updatedReplicas": 1,`
`"readyReplicas": 1,`
`"availableReplicas": 1,`
`"conditions": [`
  `{`
    `"type": "Available",`
    `"status": "True",`
    `"lastUpdateTime": "2020-08-05T19:16:28Z",`
    `"lastTransitionTime": "2020-08-05T19:16:28Z",`
    `"reason": "MinimumReplicasAvailable",`
    `"message": "Deployment has minimum availability."`
  `},`
  `{`
    `"type": "Progressing",`
    `"status": "True",`
    `"lastUpdateTime": "2020-08-05T19:16:29Z",`
    `"lastTransitionTime": "2020-08-05T19:16:28Z",`
    `"reason": "NewReplicaSetAvailable",`
    `"message": "ReplicaSet \"zipkin-5d6d665688\" has successfully progressed."`
  `}`
`]`   `}` `}`
JMC avatar

What is this about “ReplicaSet has successfully progressed”? I want it gone xD

JMC avatar

The deployment, the replicaset under it, and the pods under it, cascading from the deployment DELETE request

JMC avatar

is this some kind of bug ?
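
Worth noting: the 200 with the object body is just the API server acknowledging the delete, and the deletionTimestamp plus the foregroundDeletion finalizer in the response above show a cascading delete already in flight. To request cascading behavior explicitly, the DELETE can carry a DeleteOptions body; a sketch reusing the placeholders from the original command:

curl -k -X DELETE \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer XXXXXXXXXX" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  "https://<host>:<port>/apis/apps/v1/namespaces/<namespace>/deployments/<deployment_name>"

With Foreground propagation the deployment object only disappears after its ReplicaSets and pods are gone, so a short delay before it vanishes is expected.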

2020-08-10

Matt Gowie avatar
Matt Gowie

Hey folks — I’m looking for a pointer in the right direction regarding database migrations on kubernetes.

I’ve inherited an ECS application that is mid-migration to K8s. One of the patterns currently in place is using Lambdas to run database migrations through an NPM project. It’s a bit complicated and we need to overhaul the process anyway, so I’m looking to pull this into the k8s migration and do it the “k8s way”. I’m wondering how others approach running initial database creation, migration, and seeding scripts? K8s Jobs? Init containers? Or some other k8s pattern I’m not aware of?

Any thoughts / suggestions / things I should read?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Look into helm hooks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s how we typically do it.

Matt Gowie avatar
Matt Gowie

Sweet. Will do.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Helm hooks are just jobs that have a special annotation and are executed at certain life cycle events

Matt Gowie avatar
Matt Gowie

Sounds perfect. That’s the answer I was looking for. Thanks man!

Issif avatar

For our staging environment, for example, we use a pre-install hook in Helm to create a database and import a dump before deploying

btai avatar

this is what ours looks like

apiVersion: batch/v1
kind: Job
metadata:
  name: "api-migration-{{ now | unixEpoch }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install, post-upgrade
...
Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I’d also look into doing backwards and forwards compatible database migrations and https://aws.amazon.com/blogs/database/building-a-cross-account-continuous-delivery-pipeline-for-database-migrations/

But that’s for… more intense requirements

Building a cross-account continuous delivery pipeline for database migrations | Amazon Web Services

To increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. However, continuous delivery for database migrations is often a manual process. Adopting continuous integration and continuous delivery (CI/CD) for database migrations has the following benefits: An automated multi-account setup simplifies database migrations. The […]

Matt Gowie avatar
Matt Gowie

@Vlad Ionescu (he/him) Yeah, looked into solutions like flyway. My client is a node / typescript shop and they’re using some up / down migration tool. But one thing at a time

Andrew Nazarov avatar
Andrew Nazarov

We’ve been trying to answer the same question. Our client is a Java shop and they are using Liquibase extensively, migration scripts are located within the app and managed by the app during its startup. It worked pretty well in their pre-Kube env. The problem emerged with the migration to K8s. Now we’ve got locks here and there and unlocking all this stuff manually is a pain as there are quite some microservices with their own databases and a lot of envs containing all this stuff. It happens because we have several replicas, because we have cluster autoscaler, because a scheduler decides to reschedule the pod, because sometimes it takes too long for migration scripts to complete, etc. And it’s obvious that the current approach should be reconsidered and we should find a K8s-aware way.
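
A K8s-aware variant of the helm-hook approach from earlier in this thread, sketched with an illustrative image and command: run the schema update as a single pre-install/pre-upgrade Job, so exactly one pod applies migrations instead of every replica racing (and locking) at startup:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    # remove the previous hook Job before creating the next one
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 0                  # don't auto-retry a half-applied migration
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: liquibase
          image: liquibase/liquibase  # illustrative; connection settings omitted
          args: ["update"]

The app pods then start with migrations already applied and never need to take the Liquibase lock themselves.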

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Consider maybe a continuous delivery framework like ArgoCD

Andrew Nazarov avatar
Andrew Nazarov

There are some projects trying to do something with liquibase locks

https://github.com/szintezisnet/Kubernetes-Liquibase-Lock-Release

and

https://github.com/oridool/liquibase-locking

Haven’t tried any of these. To me they don’t look mature, but who knows

szintezisnet/Kubernetes-Liquibase-Lock-Release

Contribute to szintezisnet/Kubernetes-Liquibase-Lock-Release development by creating an account on GitHub.

oridool/liquibase-locking

Automatic management of Liquibase locking mechanism - to avoid infinite lock and manual fix - oridool/liquibase-locking

Andrew Nazarov avatar
Andrew Nazarov

And there are these folks trying to define your db structure as K8s objects:

https://schemahero.io/

SchemaHero - A modern approach to database schema migrations

SchemaHero is a Kubernetes Operator for Declarative Schema Management for various databases.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s not forget (and I’m joking…)

https://news.ycombinator.com/item?id=24618598

2020-08-11

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)
Saving money a pod at a time with EKS, Fargate, and AWS Compute Savings Plans | Amazon Web Services

At re:Invent 2019, we announced the ability to deploy Kubernetes pods on AWS Fargate via Amazon Elastic Kubernetes Service (Amazon EKS). Since then we’ve seen customers rapidly adopt the Kubernetes API to deploy pods onto Fargate, the AWS serverless infrastructure for running containers. This allows them to get rid of a lot of the undifferentiated […]

jose.amengual avatar
jose.amengual

Basic Kubernetes networking question: do CNIs (or any networking layer in K8s) use a bridge port at the NIC level to do their magic, or do they just use multiple IPs per interface? Wondering about networking-layer performance compared to bridge mode in other settings

jose.amengual avatar
jose.amengual

I’m wondering about this comment in ECS

jose.amengual avatar
jose.amengual
Task definition parameters - Amazon Elastic Container Service

Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types. The family and container definitions are required in a task definition, while task role, network mode, volumes, task placement constraints, and launch type are optional.

jose.amengual avatar
jose.amengual
The host and awsvpc network modes offer the highest networking performance for containers because they use the Amazon EC2 network stack instead of the virtualized network stack provided by the bridge mode. With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
jose.amengual avatar
jose.amengual

and how that could relate to K8s

Matt Gowie avatar
Matt Gowie

I have no idea on your actual question — but just wondering @jose.amengual, you / your org moving away from ECS and towards k8s?

jose.amengual avatar
jose.amengual

we are thinking

jose.amengual avatar
jose.amengual

but we were having a discussion on latency

jose.amengual avatar
jose.amengual

and I think bridge mode is slower than having a NIC+IP on a host, but even more so in Docker

Matt Gowie avatar
Matt Gowie

Aha interesting. Sounds like you have a large investment in ECS. Let me know where you land — I’m interested if you end up taking the plunge.

jose.amengual avatar
jose.amengual

but since in K8s bridge is widely used I was wondering how latency plays in

jose.amengual avatar
jose.amengual

we are going to move or I quit

jose.amengual avatar
jose.amengual

I hate ECS

Matt Gowie avatar
Matt Gowie

Hahaha damn, strong opinion! Gotcha man.

jose.amengual avatar
jose.amengual

there are sooooo many little things that add up

jose.amengual avatar
jose.amengual

it is WAY easier to run docker-compose on instances behind an ALB

jose.amengual avatar
jose.amengual

using cloud-init

jose.amengual avatar
jose.amengual

ECS was supposed to help you run containers, but the learning curve is huge

jose.amengual avatar
jose.amengual

it’s like learning Chef or Ansible

Matt Gowie avatar
Matt Gowie

Hahah I’d say the learning curve for k8s is larger though… it just gives you a better, open source platform.

jose.amengual avatar
jose.amengual

yes, maybe the curve is about the same, but then the deployments and such are easier

jose.amengual avatar
jose.amengual

and not yet another vendor lock-in way of doing things, etc

Emmanuel Gelati avatar
Emmanuel Gelati

I think GKE does this too to get better performance, so in your GKE cluster the pods get an IP address from the real subnet and you don’t have an overlay network inside k8s

Emmanuel Gelati avatar
Emmanuel Gelati

gke uses calico

jose.amengual avatar
jose.amengual

interesting

Marcin Brański avatar
Marcin Brański

Haha, hate ECS. I’ve been following the ECS rebalancing issue (https://github.com/aws/containers-roadmap/issues/105) for a few years but recently unsubscribed. What a relief it was

[Rebalancing] Smarter allocation of ECS resources · Issue #105 · aws/containers-roadmap

@euank Doesn’t seem like ECS rebalances tasks to allocate resources more effectively. For example, if I have task A and task B running on different cluster hosts, and try to deploy task C, I…

jose.amengual avatar
jose.amengual


som.ban.mca avatar
som.ban.mca

Are you load testing the k8s bridge at all? It would be interesting to use something like Apache ab from more than 6 IP addresses to cover the different zones.

jose.amengual avatar
jose.amengual

no I have not done any testing yet, just wondering if this was a thing

Harsha avatar

As per our company policy, Linux servers get the latest kernel-level patch updates every 2 months, which causes all the machines to force-reboot. At that time the Kubernetes cluster gets broken… all nodes end up in NotReady state. Has anyone faced a similar problem…? If so, how are you handling it…?

joey avatar

are all of the nodes getting restarted at the same time? can’t you stagger them out so pods can shuffle around and normalize between reboots?

Harsha avatar

all of the nodes restart at the same time… I was thinking it’s probably breaking the cluster network

joey avatar

is someone mandating that all of the nodes have to start at the same time?

joey avatar

that just sounds like poor decision making

joey avatar

if you don’t want stuff to break don’t restart everything at the same time

Harsha avatar

our infrastructure is managed by a different team and they do weekend patching for almost 500-1000 machines at the same time. so it’s difficult to request them to reboot servers in sequence

joey avatar

idk. i did something like this when i needed to do this exercise by hand.

https://gist.github.com/jfreeland/e937248a298e7d8e506b92b6bbc8ebad

there were some oddball cases where i couldn’t let the cluster-autoscaler do it for me so

joey avatar

“we have to restart 500-1000 machines and we don’t know how to do it properly” is a pretty nutty reason to risk taking a service down.

joey avatar

that particular script was focused on an aws_autoscaling_group but it’d be trivial to modify that to ssh to the node and reboot it as well. lots of options. is your kubernetes control plane on some of the nodes that are being recycled too?
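
For reference, a manual per-node version of that exercise looks roughly like this (flags per 1.17-era kubectl):

kubectl cordon <node>    # stop new pods landing on the node
kubectl drain <node> --ignore-daemonsets --delete-local-data
# ...patch and reboot the node out of band...
kubectl uncordon <node>  # allow pods to schedule back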

JMC avatar

They have to find a way to execute a “rolling restart”. It should be possible in every cloud provider. For example, in an AWS Auto Scaling group you have the instance refresh utility

JMC avatar

And it’s callable via the AWS API (could be done right after the patch is applied, in your example)

JMC avatar

It’s one way to tinker your way through it, but you guys are gonna find your own, hehe

Harsha avatar

haha yup, I got it. Thanks a lot for your inputs @JMC & @joey

2020-08-12

Alejandro Rivera avatar
Alejandro Rivera

Is there a public document I could refer to on how to extend the k8s HPA? I want to write a custom HPA that uses custom metrics to scale down. E.g. I have a custom metric I can get from Prometheus that is the connection count to a pod; I want my pods to scale down only when the connection count is 0, and for no new connections to be sent to them. Is there such a thing already?

joey avatar

https://github.com/stefanprodan/k8s-prom-hpa

https://github.com/DataDog/watermarkpodautoscaler

if i’m not mistaken https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-limit-scale-down-rate was added in v1.17 or v1.18.. i had to write a custom autoscaler in the past because kubernetes hpa didn’t have a cool down factor to not scale things down too fast

stefanprodan/k8s-prom-hpa

Kubernetes Horizontal Pod Autoscaler with Prometheus custom metrics - stefanprodan/k8s-prom-hpa

DataDog/watermarkpodautoscaler

Custom controller that extends the Horizontal Pod Autoscaler - DataDog/watermarkpodautoscaler

Horizontal Pod Autoscaler

The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example, DaemonSets. The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller.
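
Pulling those pieces together, a sketch of the v2beta2 shape (the metric name is illustrative and assumes an adapter exposing Prometheus metrics to the custom metrics API; the behavior stanza needs a cluster recent enough to support it):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: active_connections    # illustrative custom metric
        target:
          type: AverageValue
          averageValue: "100"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300 # wait before acting on a lower recommendation
      policies:
        - type: Pods
          value: 1                    # remove at most one pod...
          periodSeconds: 60           # ...per minute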

Alejandro Rivera avatar
Alejandro Rivera

This is great joey, thank you!

2020-08-13

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

We have another poem for you –

“The time has come,” the maintainers said, “To talk of software fates: Of upgrades – and shipping Helm v3 – Of bugfixes – and k8s –”

Read @bridgetkromhout’s latest blog for details on the Helm v2 deprecation timeline: https://helm.sh/blog/helm-v2-deprecation-timeline/

wannafly37 avatar
wannafly37

If I’m running an app that requires one instance under a deployment, how do I ensure 1 pod is always available? The case is that manual deletion of the pod causes some downtime. A PDB doesn’t seem to do it.

Matt Gowie avatar
Matt Gowie

Hey folks, in helm if I want to share template helper functions with my many application Charts (functions like fullName that are pretty consistent across applications) — what is the best way to do that? Create a parent chart and treat all application charts as subcharts? Or is there a better way?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See this chart for inspiration and patterns:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

Awesome — Will check that out.

Matt Gowie avatar
Matt Gowie

Does CP use the common charts?

I ask as I’m digging into it now and I’m wondering how widely they’re used by the community. Even though they’re going to make my charts more DRY… they are also going to introduce some fun rabbit holes in figuring out where things are coming from, since those common templates are doing a good bit.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
[monochart] Fix bugs, remove dependency on obsolescent Kubernetes repo by Nuru · Pull Request #256 · cloudposse/charts

what In [monochart]: Fix incorrect reference to serviceAccount in jobs.yaml Use empty maps rather than nil for default empty map values Remove dependency on obsolescent Kubernetes incubator Helm r…

Matt Gowie avatar
Matt Gowie

Interesting… so you’re pulling back on using it and bringing it in because they’re deprecating that repo.

Is monochart your --starter chart or is it an older version of a library chart?

Matt Gowie avatar
Matt Gowie

It doesn’t seem to be marked as a library chart and doesn’t seem to be using the library chart patterns, but then again I don’t know how folks were accomplishing that prior to Helm 3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the monochart is our universal helm chart we use in nearly every customer engagement.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using the monochart, we can deploy 99% of customer applications we’ve come across. It means we don’t need to develop an original chart for every application and thus reduces the techdebt and chart sprawl.

Matt Gowie avatar
Matt Gowie

Gotcha. Seems I need to do some reading to grok that a bit more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We look at the values.yaml as a custom schema that lets you define a custom, declarative specification for how an app should work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then we make the chart implement that specification.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then we make developers choose a specification.

Matt Gowie avatar
Matt Gowie

Gotcha. And I’m guessing you then use Helmfile to just provide values.yaml files to monochart for each application image you’re deploying?
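
For anyone following along, a helmfile.yaml for that pattern might look roughly like this (the repo alias, release names, and paths are illustrative):

repositories:
  - name: cloudposse
    url: https://charts.cloudposse.com/incubator/

releases:
  - name: app-one
    chart: cloudposse/monochart
    values:
      - apps/app-one/values.yaml
  - name: app-two
    chart: cloudposse/monochart
    values:
      - apps/app-two/values.yaml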

Matt Gowie avatar
Matt Gowie

That sounds like a pattern I want! Tomorrow I’m jumping into the old ref architecture so I’ll see what I find while digging into that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Mirantis acquires Lens, an IDE for Kubernetes – TechCrunch

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the c…


2020-08-16

mado avatar

Just installed OpenShift Dedicated on AWS, any advice on setting up Jenkins on it? I use GitLab to store BE/FE app source code and am trying to deploy to OpenShift pods.

2020-08-19

Pierre Humberdroz avatar
Pierre Humberdroz
alexellis/registry-creds

Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow, I knew about the recent pricing changes and how orphaned images were going to be deleted.

alexellis/registry-creds

Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I didn’t know about this:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Why is this operator required?
The primary reason for creating this operator is to make it easier for users of Kubernetes to consume images from the Docker Hub after recent pricing and rate-limiting changes were brought in; an authenticated account is now required to pull images.
Unauthenticated users: 100 layers / 6 hours
Authenticated users: 200 layers / 6 hours
Paying, authenticated users: unlimited downloads
See also: https://www.docker.com/pricing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

DockerHub rate limits seem like a huge liability!

Pierre Humberdroz avatar
Pierre Humberdroz

Right? I was also surprised by that!

Pierre Humberdroz avatar
Pierre Humberdroz

more reasons to build up a pull-through registry

Pierre Humberdroz avatar
Pierre Humberdroz

uff. You know what I just now see as an issue? CI/CD!

tim.j.birkett avatar
tim.j.birkett

Sorry to drag up this oldish thread but the great limit switching time is approaching… Just wondering if anyone has come up with a reasonably elegant way to tackle getting ready for this?

Sure, you can docker login or manage your Docker config on nodes with user-data, or you can create a Secret in every namespace and then rewrite all of your charts, deployment YAMLs, etc. to include the relevant imagePullSecrets (oh, and re-deploy or kubectl patch them all)… and update Jenkins Kubernetes plugin configs… probably have to hunt down any special implementations in Jenkinsfiles… I’m sure you get the point.

Any experiences to share?
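
For reference, the per-namespace moving parts those operators automate are roughly these (placeholder values throughout):

kubectl create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<token> \
  -n <namespace>

kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Patching the default ServiceAccount spares you from editing every chart to add imagePullSecrets, which is exactly the tedium described above.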

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@tim.j.birkett did you see @Pierre Humberdroz’s recommendation? https://github.com/alexellis/registry-creds

alexellis/registry-creds

Automate Kubernetes registry credentials, to extend Docker Hub limits - alexellis/registry-creds

tim.j.birkett avatar
tim.j.birkett

Indeed I did see that @Erik Osterman (Cloud Posse) - it looks quite interesting and fairly similar to: https://github.com/zakkg3/ClusterSecret

zakkg3/ClusterSecret

Kubernetes ClusterSecret operator. Contribute to zakkg3/ClusterSecret development by creating an account on GitHub.

tim.j.birkett avatar
tim.j.birkett

For anyone else who stumbles across this during a search of Slack or the Cloudposse Slack archives, @zadkiel kindly enlightened me to this project: https://github.com/titansoft-pte-ltd/imagepullsecret-patcher - It manages the image pull secrets and patches the default service account in each namespace, keeping everything up to date on changes.

titansoft-pte-ltd/imagepullsecret-patcher

A simple Kubernetes client-go application that creates and patches imagePullSecrets to service accounts in all Kubernetes namespaces to allow cluster-wide authenticated access to private container …

jdtobe avatar
jdtobe

@jdtobe has joined the channel


2020-08-20

aaratn avatar
Introducing the AWS Controllers for Kubernetes (ACK) | Amazon Web Services

AWS Controllers for Kubernetes (ACK) is a new tool that lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly-available Kubernetes applications that utilize AWS services. Today, ACK is available as a developer preview on GitHub. In this post we will give you a brief introduction to the […]

ayr-ton avatar
ayr-ton

Has anyone ever used GitLab Auto DevOps with custom Docker images for a worker and a service in the same repository (not using Herokuish, etc.)? If so, how was the configuration experience?

2020-08-21

roth.andy avatar
roth.andy
The Kubernetes Kustomize KEP Kerfuffle

In this post we’ll explore the K8s community decision-making process by looking underneath the hood of the ‘kerfuffle’ of Google LLC being called out by Samsung SDS engineers for skipping ‘graduation criteria’ while merging the new ‘kustomize’ subcommand into upstream ‘kubectl’.

2020-08-25

Chris Wahl avatar
Chris Wahl

So, Docker Inc has finally updated the FAQ for their previously announced service limits. And I’m not going to lie, it’s pretty brutal. You should consider any (unpaid) use of Docker Hub to be an operational risk going forward.

RB avatar

AWS Controllers for Kubernetes - https://github.com/aws/aws-controllers-k8s

aws/aws-controllers-k8s

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes - aws/aws-controllers-k8s

zeid.derhally avatar
zeid.derhally

Not really sure how I feel about that. I’d rather manage things via Terraform, or am I missing something?

aws/aws-controllers-k8s

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes - aws/aws-controllers-k8s

RB avatar

ya, I’m still looking into it. It does look promising. And it doesn’t cover all of AWS like Terraform does, so it’s not going to be a replacement

2020-08-26

2020-08-27

Matt Gowie avatar
Matt Gowie

Is anyone using Datadog Log collection with EKS + Fargate? Did you have to jump through the whole Firelens + Fluentbit hurdles similar to DD + ECS log collection?

Matt Gowie avatar
Matt Gowie

I don’t even know if Fluentbit + Firelens on EKS is a thing after looking for a few… but I would be interested if anyone got DataDog logging working properly with EKS Fargate services. This seems like a real pain.

maarten avatar
maarten

Can’t DD just pull from Cloudwatch ?

Matt Gowie avatar
Matt Gowie

Haha, but you still need to ship logs to CloudWatch and then pay them as well.

Matt Gowie avatar
Matt Gowie

I believe I found the solution. I need to mount an emptyDir volume into both the agent and the application container, then configure DD to look at that location. It’s just tedious and I’m surprised it’s not documented well.
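
A sketch of that shape, with illustrative names, image tags, and log path (the agent still needs its log config pointed at the shared directory):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest           # illustrative application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app    # the app writes its log files here
    - name: datadog-agent
      image: datadog/agent:7         # sidecar tails the same files
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                   # shared scratch volume; works on Fargate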

Darren Cunningham avatar
Darren Cunningham

I POC’d the Firelens + Fluentbit with ECS Fargate, I got it working pretty quickly – didn’t move forward with it though because we’d have to recreate a handful of Monitors/Log Views that are currently based on the CloudWatch Log Group Name

I’m probably going to end up creating a Fluentbit configuration that also pushes to CloudWatch. Which is a little silly, but then again I don’t know that I want to rely solely on DD.

jose.amengual avatar
jose.amengual

the only thing I’m going to say about DD log collection is that it is SUPER expensive

jose.amengual avatar
jose.amengual

far more than other SaaS products out there

jose.amengual avatar
jose.amengual

we forbid the use of it in our company for that reason, just my 2 cents

bbhupati avatar
bbhupati

Hello guys, I’m trying to install Lens on CentOS 7.6 (64-bit) using snap (https://snapcraft.io/install/kontena-lens/centos) and the installation succeeds, but when I run kontena-lens it gives the below:

/snap/kontena-lens/110/kontena-lens: error while loading shared libraries: libgtk-3.so.0: cannot open shared object file: No such file or directory

sudo yum provides libgtk-3.so.0
Last metadata expiration check: 0:31 ago on Fri 28 Aug 2020 06:51 AM UTC.
gtk3-3.22.30-3.el8.i686 : GTK+ graphical user interface library
Repo         : @System
Matched from:
Provide      : libgtk-3.so.0

gtk3-3.22.30-3.el8.i686 : GTK+ graphical user interface library
Repo         : rhel-8-appstream-rhui-rpms
Matched from:
Provide      : libgtk-3.so.0

sudo yum install gtk3-3.22.30-3.el8.i686 -y

After installing all dependency packages I’m still getting the same error. Any suggestion on this?

Install Lens on CentOS using the Snap Store | Snapcraft

Get the latest version of Lens for on CentOS - Lens - The Kubernetes IDE

2020-08-28

Matt Gowie avatar
Matt Gowie

What do folks prefer for a visual K8s Tool? Lens, k9s, Octant, or other?

roth.andy avatar
roth.andy

K9s

Eric Berg avatar
Eric Berg

@roth.andy, this was one of the best tool recommendations ever! Thank you!!!

2020-08-30
