#kubernetes (2021-05)


Archive: https://archive.sweetops.com/kubernetes/


Christian avatar

Hi everyone, just curious how people are managing users/roles in an EKS cluster with only private access enabled? In this case, we would only be able to communicate with the API server from a bastion host. How would RBAC or IAM access work?
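(For context on the IAM side: a private-only endpoint doesn't change how EKS authentication works, it only changes where you can reach the API server from. From the bastion, IAM identities are still mapped to Kubernetes groups via the `aws-auth` ConfigMap. A minimal sketch, where the role ARNs and group names are placeholders for your own setup:

```yaml
# Sketch: the aws-auth ConfigMap in kube-system maps IAM roles to
# Kubernetes RBAC users/groups. Role ARNs and names here are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/EKSAdminRole    # hypothetical admin role
      username: admin-user
      groups:
        - system:masters
    - rolearn: arn:aws:iam::111122223333:role/EKSReadOnly     # hypothetical read-only role
      username: readonly-user
      groups:
        - readonly-group   # bind this group with your own (Cluster)RoleBinding
```

Anyone who can assume one of those roles and reach the endpoint, e.g. via the bastion, gets the corresponding RBAC permissions.)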

Jonathon Canada avatar
Jonathon Canada

Hi Christian. One option is to use Teleport: https://github.com/gravitational/teleport

Essentially you could deploy a Teleport pod into your k8s clusters and that pod would create a reverse tunnel to a Teleport proxy that you would have running. This would allow you to keep your k8s clusters in private subnets/privately accessible. You would still need to create RBAC within your k8s clusters, but any groups or users you create within your k8s clusters you could map to roles you create in Teleport.
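(To illustrate the "map groups to roles" part: on the Kubernetes side you'd bind a group name to a ClusterRole, and then reference that same group name from a Teleport role. A minimal sketch, where the group name `teleport-developers` is an assumption and just has to match whatever group your Teleport role maps users into:

```yaml
# Sketch: ClusterRoleBinding granting read-only access to a group that
# Teleport-issued certificates place users into. Group name is hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport-developers-view
subjects:
  - kind: Group
    name: teleport-developers        # must match the group in your Teleport role
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                         # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```
)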

In full disclosure, I work for Teleport, but the open-source version I linked is completely free to use. Please let me know if I can answer any questions for you.

gravitational/teleport

Certificate authority and access plane for SSH, Kubernetes, web applications, and databases - gravitational/teleport


Santiago Campuzano avatar
Santiago Campuzano

I am really excited to share this story with the whole community. I used my passion time at GumGum to write it; it collects personal and team experiences with Kubernetes.

Implementing Kubernetes: The Hidden Part of the Iceberg — Part 1

A summary of personal and team experiences and challenges when implementing a production-grade fleet of Kubernetes clusters at GumGum.



Milosb avatar

Guys, what is your cluster utilization in production?

roth.andy avatar

CPU: 14%, RAM: 29%


Sergey Kvetko avatar
Sergey Kvetko

Hi there! Is anybody using kpt as their main deploy tool?

Kpt

Kubernetes configuration package management


tomkinson avatar

Would anyone here have the ability (time and generosity) to take a minute and check out this video log? It's about 30 seconds. It seems our etcd is in a "boot loop" of sorts, and we don't even know what started it. Any assistance would be amazing, as we're learning this stuff as we go. Tried powering down and restarting the droplets too, but that did nothing: https://forums.rancher.com/t/sh-is-booting-us-out-because-boot-loop/19973/2

SH is booting us out because boot loop

The UI is just static HTML/JS files, so not being able to get to it is just a symptom and not your actual problem. Etcd restarting constantly basically means you have no cluster, in which Rancher is supposed to be running, which provides the API, which serves up the UI assets. But there’s not much anyone can tell you in detail given just that “a boot loop or something” is happening.


Harry avatar

I’m not sure if this is the right channel, as I’m using Nomad rather than Kubernetes, but I’m having some issues with the Container Storage Interface (CSI), which I believe is a Kubernetes project. In short, I’m getting an error trying to launch a job that depends on an EBS volume: “Constraint CSI volume grafana has exhausted its available writer claims filtered 1 node”. But the volume says it has no read or write allocations. Any ideas on what might be causing this or how to fix it would be much appreciated. I’m finding that attaching some persistent disk storage to a container is far harder than I’d expect it to be.
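(One possible angle, as a sketch: with EBS the volume is typically registered with a `single-node-writer` access mode, so a single stale claim left behind by a previous allocation can exhaust the only writer slot even when no allocation is currently using it. Checking `nomad volume status grafana` for lingering claims, and clearing them (e.g. by deregistering and re-registering the volume) may help. A hypothetical volume registration for comparison, where the plugin ID and external volume ID are placeholders:

```hcl
# Sketch of a Nomad CSI volume registration for an EBS-backed volume.
# plugin_id and external_id are assumptions for your environment.
id          = "grafana"
name        = "grafana"
type        = "csi"
plugin_id   = "aws-ebs0"                 # hypothetical EBS CSI plugin ID
external_id = "vol-0123456789abcdef0"    # hypothetical EBS volume ID

capability {
  access_mode     = "single-node-writer" # only one writer claim allowed at a time
  attachment_mode = "file-system"
}
```
)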