I’m trying to go further with my multi-tenant cluster and want to show my teams only their own namespaces. I haven’t found a way to reduce the number of namespaces shown when I do a
k get ns. Any idea how I can get this done?
Author: Adrian Ludwin (Google) Safely hosting large numbers of users on a single Kubernetes cluster has always been a troublesome task. One key reason for this is that different organizations use Kubernetes in different ways, and so no one tenancy model is likely to suit everyone. Instead, Kubernetes offers you building blocks to create your own tenancy solution, such as Role Based Access Control (RBAC) and NetworkPolicies; the better these building blocks, the easier it is to safely build a multitenant cluster.
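One building-block detail worth knowing here: RBAC cannot filter the result of a namespace *list*, so a plain `kubectl get ns` is all-or-nothing. A common workaround is to grant `get` on specific, named namespaces via `resourceNames`, so team members query their namespaces by name. A minimal sketch (the team and namespace names are illustrative, not from the thread):

```yaml
# Hypothetical sketch: RBAC can't filter `kubectl get ns` (the list verb),
# but it can allow `get` on individually named namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-a-namespaces
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]                        # no "list": a bare `kubectl get ns` stays forbidden
    resourceNames: ["team-a-dev", "team-a-prod"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: team-a-namespaces
subjects:
  - kind: Group
    name: team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: team-a-namespaces
  apiGroup: rbac.authorization.k8s.io
```

With this in place, `kubectl get ns team-a-dev` works for the team while `kubectl get ns` is denied; tooling that needs a filtered list typically relies on labels or a project like HNC instead.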
Hello all, can anyone help with Azure Kubernetes Service, please? What if a namespace is accidentally deleted, is there any recovery process (disaster recovery)? Any input from the team would be much appreciated. ~ Thanks.
AFAIK there’s no way to revert this out of the box. What pipeline did you use to get the YAMLs into that namespace? Usually the expectation is that the pipeline is easily repeatable, which is why there aren’t many talks about recovery.
If you’re still going to approach the problem from a backup/recovery angle, there are a couple of cloud-generic projects to achieve what you want:
GitHub - pieterlange/kube-backup: Kubernetes resource state sync to git
use gitops and problem solved
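To make that concrete: if every manifest for the namespace lives in git, recovery is just re-applying it. A minimal, hypothetical sketch (the namespace and the `manifests/my-app/` path are illustrative):

```shell
# Hypothetical recovery flow, assuming all resources for the namespace
# are tracked in git under manifests/my-app/.
kubectl create namespace my-app              # recreate the deleted namespace
kubectl apply -n my-app -f manifests/my-app/ # re-apply everything it contained
```

Caveat: this restores declared state only. PersistentVolume data and any Secrets that were never committed to git are still gone, which is where tools like kube-backup (or a volume-level backup) come in.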
Hi people, wanted to ask about experiences upgrading Kubernetes EKS versions. I recently did an upgrade from 1.19 to 1.20. After the upgrade, some of my workloads are experiencing weird high CPU spikes. But correlation does not equal causation, so I wanted to ask if anyone here has experienced something similar.
:helm: New to k8s and Helm.
Need to define multiple pieces of my internal app, some based on public Helm charts, others just internal containers.
I started with
kompose and converted Docker Compose files to give me a head start on what might be contained in the k8s YAML schema, but it’s not clear if I need to create my own Helm charts or not. Since I’m not going to reuse these pieces in other projects, I’m assuming I don’t need Helm charts.
If I draw some similarity to Terraform… would a Helm chart be like a Terraform module, and the k8s YAML be similar to a “root module”? If that parallel applies, then I’d only worry about Helm charts when consuming a prebuilt resource or trying to reuse something in different places in the company. If it’s a standalone root application definition, I’m assuming I’ll just do this without Helm.
How far off am I? #k8newbie
Update: I am reading more on this and see that there are benefits for internal use too, since it allows reusing the same deployment with an easier templating approach:
helm install my-app ./chart --set env=dev --set replicaCount=1
with fewer templating configs required, as it would allow me to set my templating values dynamically. I’m guessing kubectl has this with the overrides file, but it’s perhaps a bit less flexible and harder to manage long term.
Rollbacks also seem to be really smooth, though again I’m guessing kubectl has similar features just by referencing a prior source revision.
Another pro is that you can go beyond the schema of the app and also handle application-level configuration. My guess is that that’s where k8s operators would be required, to better handle application-level configuration actions.
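For context on what that `--set` templating buys: the values flow into the chart’s templates at render time. A minimal, hypothetical sketch of the two files involved (names and contents are illustrative, not from the thread):

```yaml
# values.yaml -- chart defaults, overridable at install time with
# `helm install my-app ./chart --set env=dev --set replicaCount=1`
env: dev
replicaCount: 1
---
# templates/deployment.yaml (fragment) -- rendered by Helm from the values above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-{{ .Values.env }}
spec:
  replicas: {{ .Values.replicaCount }}
```

You can preview the rendered output without touching the cluster via `helm template ./chart --set env=dev`, which is handy while learning.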
Any quick insight on approaching an internal deployment with Helm? There’s a lot to learn, so I’m making sure I don’t focus on the wrong thing as I try to migrate from Terraform ECS infra to Kubernetes.
cc @Erik Osterman (Cloud Posse) would welcome any quick insight as this is all new to me from you or your team.
yes, a helm chart is a lot like a terraform module in the sense that you bundle up the “complexity” into a package and then expose some inputs as your declarative interface
also, we’ve relatively recently released https://github.com/cloudposse/terraform-aws-helm-release
GitHub - cloudposse/terraform-aws-helm-release: Create Helm release and common AWS resources like an EKS IAM role
which we’re using to more easily deploy helm releases to EKS
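For anyone coming from the Terraform side, this style of module wraps the Terraform Helm provider’s `helm_release` resource. A minimal sketch using the provider directly (the release, repo, and chart names are illustrative, and these are not the Cloud Posse module’s actual inputs):

```hcl
# Minimal, illustrative use of the Terraform Helm provider.
# Release/chart names here are examples only.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # or wire up EKS auth instead
  }
}

resource "helm_release" "my_app" {
  name       = "my-app"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"

  # Equivalent of `helm install --set replicaCount=1`
  set {
    name  = "replicaCount"
    value = "1"
  }
}
```

This keeps Helm releases inside the same `terraform plan`/`apply` workflow as the rest of the infrastructure, which is the main draw when migrating from a Terraform-managed ECS setup.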
So if I’m newer to this and basically dealing with a root-module-style application deployment, and I need env flexibility but not a lot of other flexibility or sharing… do I still use Helm, or stick with plain k8s YAML instead? Where do I spend the effort?
A lot to learn
Gotta narrow the scope
there are 2 major camps right now: kustomize and helm
I would first master the raw resources, to then learn/appreciate the alternative ways to manage them.
then look at tools like ArgoCD/Flux: not that you will necessarily use them, but to understand how they fit into the picture.
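For the middle ground between raw resources and Helm, Kustomize layers per-environment patches over plain YAML and is built into kubectl. A minimal, hypothetical overlay sketch (file paths and names are illustrative):

```yaml
# base/kustomization.yaml -- points at the shared raw manifests
resources:
  - deployment.yaml
---
# overlays/dev/kustomization.yaml -- env-specific tweaks,
# applied with `kubectl apply -k overlays/dev`
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
```

The raw manifests stay unmodified and fully usable on their own, which makes this a gentle step up from plain `kubectl apply -f`.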
Thank you. I’ll stick with the native k8s schema then, as I really have to dial in the basics first, and then I can dive into the others as I go. The fewer abstractions the better right now as I try to prepare the team for such a big jump. Doing my best to resist using Pulumi too, for now.
lol, yes, resist the urge until you appreciate the fundamentals and the limitations.
All roads lead to jsonnet. Seems that way for me, at least…
Grafana Tanka… is pretty awesome to be honest.
Reading https://github.com/cloudposse/terraform-aws-eks-node-group/blob/780163dacd9c892b64b988077a994f6675d8f56d/MIGRATION.md to be able to jump to the 0.25.0 version of the module (it had a recent overhaul).
remote_access_enabled was removed from the module, but that isn’t documented in the migration guide for 0.25.0…
GitHub - cloudposse/terraform-aws-eks-node-group: Terraform module to provision an EKS Node Group
Join us for a hands-on lab to implement Argo CD with ApplicationSets, the new way of bootstrapping your cluster in Kubernetes. Friday 8:30 AEST (Thursday 3:30): https://community.cncf.io/events/details/cncf-cloud-native-dojo-presents-hands-on-lab-getting-started-with-argocd/
I’m attending CNCF Cloud Native Dojo w/ Hands on Lab - Getting started with ArgoCD on Sep 10, 2021
Hi People, anyone ever had this issue with the AWS ALB Ingress controller:
failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet with at least 8 free IP Addresses for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/my-cluster-name': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details.
So there are three subnets with the appropriate tagging and plenty of free IPs. I could not yet find the reason why it is complaining about the subnets.
Perhaps there aren’t enough free ips in those tagged subnets?
there are only a few nodes running, and there are thousands of free IPs. I added another tag on the subnets, so they look like this now:
kubernetes.io/cluster/my-cluster-name = shared
kubernetes.io/role/internal-elb = 1
kubernetes.io/role/elb = (empty)
now the AWS ALB Ingress controller starts successfully and registers the targets in the target group, but all my requests to any application in the cluster are timing out
Sounds like the first problem was solved. Nice job!
New problem seems like a misconfiguration in the pod that utilizes this new controller?
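One common cause of “targets registered but requests time out” with this controller is the target-type: `instance` routing goes through a NodePort and needs the node security groups opened, while `ip` routes straight to pod IPs. A minimal, hypothetical Ingress for the ALB controller (host, service, and names are illustrative):

```yaml
# Hypothetical Ingress for the AWS ALB Ingress Controller.
# target-type "ip" sends traffic directly to pod IPs; "instance" goes via
# the node's NodePort and requires the Service to be type NodePort and the
# node security groups to allow the ALB's traffic -- a frequent timeout cause.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

Checking the target group’s health checks in the AWS console usually narrows it down quickly: unhealthy targets point at security groups or the wrong target-type, healthy ones at DNS or listener config.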
Hi guys, I am looking for a “user friendly” solution to manage multiple clusters for a customer. In the end I’m deciding between Rancher (https://rancher.com/) and Kubesphere (https://kubesphere.io/). Has anyone here used either of these solutions in production? They are using EKS (AWS). Thanks
You’ll probably find this video interesting. It’s Viktor Farcic’s review of Kubesphere.