#kubernetes (2021-10)

kubernetes

Archive: https://archive.sweetops.com/kubernetes/

2021-10-10

Brad McCoy avatar
Brad McCoy

Hello, we did a talk last week on certificates in Kubernetes using cert-manager and Let's Encrypt, if anyone is interested: https://www.youtube.com/watch?v=mqYP837jk6I

1

2021-10-13

Mr.Devops avatar
Mr.Devops

can clusterrole/clusterrolebindings and role/rolebindings coexist if you’re defining access rules?

Mr.Devops avatar
Mr.Devops

or is it one or the other?

2021-10-14

sheldonh avatar
sheldonh

Team is new to K8s. Wondering a couple of things from the pros here.

Would Pulumi be a good start for a team writing Go to learn K8s without doing the normal YAML approach, or is there any opinion on starting with the YAML and maturing to Pulumi later?

I’ve considered one approach with https://devspace.sh/, which seems to make this a nice, smooth process, but I was hoping to actually drive the config via code rather than piles of YAML if it made sense. Context:

• All the folks on my team come from a software engineering background with little or no infrastructure/Terraform/K8s experience.

• I’m basically transferring services that would normally be in Docker Compose to K8s.

• I want to focus on local development against minikube/kind or similar first, and then start pushing to our shared AKS cluster.

If I start with Pulumi, considering the group I’m with, does that make sense, or does it add more complexity than it’s worth over plain K8s YAML?

DevSpace - The Fastest Developer Tool for Kubernetes (open-source)

DevSpace is an open-source CLI tool that allows you to accelerate your development workflow when building applications on top of Kubernetes. It provides a powerful localhost UI and uses hot reloading to update containers while you are coding.

Antoine Taillefer avatar
Antoine Taillefer

Hi, I’d say it really depends on what you want to deploy to your cluster. If it’s applications only, Pulumi might not be the best approach (and a bit “risky” in the sense that you could miss the fundamental concepts of Kubernetes by not taking this first YAML step); you might want to look first at YAML/Helm/Kustomize/… Yet if you need to deploy infrastructure (create/configure the K8s cluster itself: control plane/node pools, DNS, ingress controller, certificate manager, etc., and also configure some cloud provider objects/services such as storage buckets), then Pulumi sounds nice, as it’s advertised as infrastructure as code. There’s also cdk8s.

2
Kyle Johnson avatar
Kyle Johnson

the yaml for a Deployment and a Service pointing to the Deployment is really simple

same for cronjobs and configmaps

We have non-infra folks edit them all the time to tweak env vars, with minimal kubernetes knowledge; the patterns are often self-explanatory within a file. If they can understand a docker-compose file, they can understand the stuff above.

Where things got complicated:

• Helm (finally moved this to terraform)

• Ingress (nginx and now istio)

• any sort of RBAC / service account setup… but we handle all of that via Terraform and just tell folks “here’s your login details” and “here’s the service account to use for your app”.

The “complicated stuff” lives in Terraform and rarely changes at this point. Easy stuff is still YAML (deployments, etc.), and we recently moved to Kustomize to make it a bit simpler if you’re just bumping an image version.

1
sheldonh avatar
sheldonh

So no one on my team except me knows Terraform. We are consuming a cluster managed by the Cloud Operations team, so I’m focused on the namespace-level app definitions.

I’d love to use Pulumi in this scenario if it makes sense, but I’m still not clear on how local development works if everyone wanted to deploy to minikube locally. I’m not sure whether each developer’s deployment would be its own remotely stored stack state at that point, or whether local development testing works differently.

I can use YAML, but at this point I have folks who know Go better than DevOps tooling, and I was thinking this would be a good fit for Pulumi.

I do not want a pile of terraform that only I know how to maintain
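(A sketch for the local-development question above, not taken from the thread: each developer runs their own Pulumi stack, each stack keeps its own state in whatever backend the project uses, and a per-stack config value points the Kubernetes provider at a local kind/minikube context or at the shared AKS cluster. The config key kubeContext, the stack layout, and the namespace name are all made-up assumptions.)

```go
// Sketch only: per-developer stacks targeting different clusters.
// Assumed per-stack config, e.g.:
//   pulumi config set kubeContext kind-kind       # local dev stack
//   pulumi config set kubeContext my-shared-aks   # shared AKS stack
package main

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		cfg := config.New(ctx, "")
		kubeContext := cfg.Require("kubeContext")

		// Explicit provider: decides which kubeconfig context this stack deploys to.
		provider, err := kubernetes.NewProvider(ctx, "k8s", &kubernetes.ProviderArgs{
			Context: pulumi.String(kubeContext),
		})
		if err != nil {
			return err
		}

		// Everything below is identical for every stack; only the target
		// cluster (and the stack's own state) differs.
		_, err = corev1.NewNamespace(ctx, "app-ns", &corev1.NamespaceArgs{
			Metadata: &metav1.ObjectMetaArgs{Name: pulumi.String("my-app")},
		}, pulumi.Provider(provider))
		return err
	})
}
```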

Zachary Loeber avatar
Zachary Loeber

Is everyone on the team jointly managing the cluster?

Zachary Loeber avatar
Zachary Loeber

I guarantee that if all the devs have to really worry about is their own app/namespace that they will be more productive and you will have less emergencies.

Zachary Loeber avatar
Zachary Loeber

I’d still automate the whole deployment via Terraform and pipelines, then maybe set up DevSpace for the devs to use against it. Or if they want to use Pulumi, that would be their call at that point, right? The scope of their Pulumi work would be at the kube namespace level and lower (plus maybe some other cloud resources they depend on).

sheldonh avatar
sheldonh

We don’t manage the cluster. We are “consumers” of it. I’m now working at the application development level, not part of the Cloud Operations team.

I was thinking of using Pulumi for this as it’s all in Go anyway.

We have nothing to do with the AKS cluster, which is provided as a “service” to us by the Cloud Operations team. I used to be on that side of things, so your advice makes sense there. I’m now on the application side and can determine what to use to define the app directly.

I’m thinking Pulumi vs. YAML doesn’t really matter much, but since Pulumi is Go, I’ll have more flexibility for test automation in the near future by sticking with Pulumi. Does that make sense then? @Zachary Loeber

Zachary Loeber avatar
Zachary Loeber

Pulumi started as TypeScript, I thought? Either way, I’d likely scaffold out a YAML-based deployment, then export it and help the devs model it in their own code if that’s the level they’d like to take things to (using Pulumi, AWS CDK, et cetera). The devs may appreciate not having to deal with any of the kube/app scaffolding (but usually Go coders are all-in on coding the depths of everything, so I doubt you would be that lucky…)

sheldonh avatar
sheldonh

I’m part of the Go development team now, and I’m asking from that context: is there any negative to just using Pulumi for the application-level K8s definitions? I’m checking because my background is cloud operations, which Pulumi might not always fit. However, I’m now using the AKS stack provided by Cloud Operations and responsible for just the application-level stack.

I figured Pulumi would be a legit use case for this since I’m working in Go with Go devs. I just wanted to double-check that, in the context of K8s, there aren’t any “negatives” that would make using it for a new K8s project a problem.

justin.dynamicd avatar
justin.dynamicd

I personally think Pulumi is the wrong fit if you are only worried about consuming/using K8s. Pulumi is really about giving infra maintainers the ability to use general-purpose languages (GPLs) to provision/manage their infrastructure rather than being bound by the DSLs that are the norm. In this case, K8s is already deployed, you likely have a namespace assigned (if not your own cluster), and all you care about is deploying K8s resources. By nature K8s is extremely declarative, so something like Pulumi is just unnecessary overhead. My own advice would be to look at Helm and Helm charts: it’s basically just pure Go templating, which you will already be well versed in, so creating a bunch of Go templates to standardize your YAML should be a breeze.

Keep in mind that even if you used something like Pulumi, there’s no way around having to learn those primitives to deploy to k8s. You’ll have to understand deployments, ingress, services, etc. Pulumi just lets you get more clever when it comes to generating those final resources.

Just my 2 cents.
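(On the Helm point above: chart templates are rendered with Go’s template syntax, which is why it tends to feel familiar to Go developers. Below is a tiny standalone sketch of the same mechanism with made-up values; helm template does essentially this across a whole chart, with extra helper functions layered on top.)

```go
package main

import (
	"os"
	"text/template"
)

// A Helm-style template fragment rendered with Go's text/template package.
// Helm adds the Sprig function library and chart/values plumbing on top.
const manifest = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Name }}
spec:
  replicas: {{ .Replicas }}
  template:
    spec:
      containers:
        - name: {{ .Name }}
          image: {{ .Image }}
`

func main() {
	tmpl := template.Must(template.New("deployment").Parse(manifest))
	values := map[string]interface{}{
		"Name":     "my-app",
		"Replicas": 2,
		"Image":    "nginx:1.21",
	}
	// Render the manifest to stdout, roughly what `helm template` does per file.
	if err := tmpl.Execute(os.Stdout, values); err != nil {
		panic(err)
	}
}
```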

sheldonh avatar
sheldonh

@justin.dynamicd Perfect info. That’s the blend of info I was looking for. I’ve been mostly on the cloud operations side, where Pulumi would have been a great fit for full-stack creation of VPCs and such with generated names, etc. For a team, Pulumi seems to fit application-level definitions with examples like this: https://www.pulumi.com/blog/deploy-kubernetes-and-apps-with-go/. That’s what got me interested: using it at the app level.

Deploy Kubernetes and Applications with Go

Manage Kubernetes clusters and apps with Go using Pulumi’s reusable components.
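(In the spirit of that post, a minimal sketch of an app-level Deployment written with Pulumi’s Go SDK, following the shape of the upstream nginx example; the app name, labels, and image below are placeholders, not anything from this thread.)

```go
package main

import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		appLabels := pulumi.StringMap{"app": pulumi.String("my-app")}

		// The same Deployment you would write in YAML, expressed as Go;
		// `pulumi up` diffs and applies it against the cluster.
		_, err := appsv1.NewDeployment(ctx, "my-app", &appsv1.DeploymentArgs{
			Spec: appsv1.DeploymentSpecArgs{
				Selector: &metav1.LabelSelectorArgs{MatchLabels: appLabels},
				Replicas: pulumi.Int(2),
				Template: &corev1.PodTemplateSpecArgs{
					Metadata: &metav1.ObjectMetaArgs{Labels: appLabels},
					Spec: &corev1.PodSpecArgs{
						Containers: corev1.ContainerArray{
							corev1.ContainerArgs{
								Name:  pulumi.String("my-app"),
								Image: pulumi.String("nginx:1.21"), // placeholder image
							},
						},
					},
				},
			},
		})
		return err
	})
}
```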

sheldonh avatar
sheldonh

It’s an interesting mix. To your point, Pulumi is the wrong fit for the app only; but since the majority of DevOps teams I’ve seen aren’t doing application development, using Pulumi/CDK, which is meant to give access to native programming languages, is also a mismatch there, since I’d assume DevOps/Cloud Ops teams would be more comfortable with HCL than with Go/TypeScript projects.

That’s my gut reaction. I think the target for pulumi was meant to be developers managing their stack at any level, so your info is making me rethink my assumptions. Good input!

justin.dynamicd avatar
justin.dynamicd

Coming from an Ops background myself, I found the biggest benefit of Pulumi was overcoming the limitations inherent in a declarative language (which is all the rage right now, hence the predominance of YAML). The more flexible you need the code to be, the less “declarative” it becomes, and the harder it is to leverage all these YAML-happy DSLs. At its worst extreme, look at Puppet’s DSL, where you can’t even update variables: everything is declared and thus immutable, forcing you to jump through tons of hoops.

This is where Pulumi shines, IMO: you get a full-featured, powerful GPL to build your environment with, so you are no longer shackled by declarative DSLs. In Go you could make components with logic as complex as you want. This is great because you often have to reuse/duplicate things… but the different environments are just different enough that they may cause a traditional DSL problems, in ways that aren’t necessarily easy to reflect in a simple param list via YAML/JSON.

My biggest problem with Pulumi: with great power comes great responsibility. Most DSLs are highly opinionated and have a very clear intended use. Hide that behind a GPL and suddenly it’s very easy to reinvent the wheel and create a lot of needless work. “This logic will ensure deployments and pods are in place before my service is defined”, for example, is just wasted effort: k8s is designed to handle eventual consistency, but it’s easy to get lost in those details “because you can”.

1
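(A sketch of the reuse point above: ordinary Go control flow generating per-environment variants that would otherwise be copy-pasted YAML or an awkward parameter matrix. The environment names and settings are made up.)

```go
package main

import (
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Per-environment settings that are "just different enough".
		envs := map[string]map[string]string{
			"dev":     {"LOG_LEVEL": "debug", "REPLICA_HINT": "1"},
			"staging": {"LOG_LEVEL": "info", "REPLICA_HINT": "2"},
			"prod":    {"LOG_LEVEL": "warn", "REPLICA_HINT": "5"},
		}

		for env, settings := range envs {
			data := pulumi.StringMap{}
			for k, v := range settings {
				data[k] = pulumi.String(v)
			}
			// One ConfigMap per environment, produced by a plain Go loop.
			_, err := corev1.NewConfigMap(ctx, "app-config-"+env, &corev1.ConfigMapArgs{
				Metadata: &metav1.ObjectMetaArgs{Name: pulumi.String("app-config-" + env)},
				Data:     data,
			})
			if err != nil {
				return err
			}
		}
		return nil
	})
}
```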

2021-10-28

Or Azarzar avatar
Or Azarzar

NGINX Custom Snippets CVE-2021-25742

Here’s a deep dive into what high severity alert known as CVE-2021-25742 really is and what it means for today’s organizations.

z0rc3r avatar
z0rc3r

The Terraform provider doesn’t allow using the Bottlerocket ami_type in aws_eks_node_group yet. Filed https://github.com/hashicorp/terraform-provider-aws/issues/21548

aws_eks_node_group: Support new AMI types for Bottlerocket · Issue #21548 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
