Did anyone play around with Generic Ephemeral Volumes?
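For anyone who hasn't tried them, here's roughly what one looks like — a minimal sketch (pod name, image, and storage class are made-up placeholders):

```yaml
# Sketch of a generic ephemeral volume: the PVC is created alongside the
# pod and deleted when the pod goes away. Names/storageClassName are examples.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch-vol
  volumes:
    - name: scratch-vol
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: standard
            resources:
              requests:
                storage: 1Gi
```

Nice middle ground between emptyDir and a hand-managed PVC, since the claim's lifecycle is tied to the pod.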
Hello, I have Jenkins on Azure and an OpenShift cluster with a project `test`; the project contains a BuildConfig/DeploymentConfig to build and deploy my application. First, Jenkins does its work: fetching code from git and running tests like SonarQube and dependency checks. Once the tests pass, it kicks off the build job in the BuildConfig (by simply curling the BuildConfig's endpoint). So far this works fine, but the workflow only builds the latest version of the code. I want to build a specific branch/tag, and right now I have to manually modify the version in the BuildConfig and DeploymentConfig in OpenShift every time. For security reasons I can only run Jenkins on Azure. Is there any way to set the version of code I want when kicking off the Jenkins job, instead of manually changing configs in OpenShift?
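One idea (a sketch, not a definitive answer — `my-app` and the `GIT_REF` parameter name are assumptions): make the Jenkins job parameterized and pass the ref through to `oc start-build`, which accepts a `--commit` flag, instead of curling the webhook endpoint:

```shell
#!/bin/sh
# Sketch: a parameterized Jenkins job passes GIT_REF (branch/tag/commit)
# down to the OpenShift build. "my-app" is a placeholder BuildConfig name.
GIT_REF="${GIT_REF:-main}"   # set by a Jenkins build parameter
CMD="oc start-build my-app --commit=${GIT_REF} --follow --wait"
# In a real job you'd execute $CMD; it's only echoed here so the sketch is inert.
echo "$CMD"
```

That keeps the version choice in Jenkins at kick-off time, so nothing in OpenShift has to be edited by hand per build.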
It’s getting intense. I just shipped a multi-cluster pulumi deployment.
Some days I think: is Pulumi overcomplicating it? I mean, with raw yaml or helm it would have just been taking a helm chart and installing it on the new cluster.
At the same time:
• State based tracking of what succeeded and failed
• One command deploys to n clusters all at once.
• Random unique names (when done right) make more work to pass input/output than just matching on names, but they mean you get create-before-delete on all resources (like a lightweight blue/green). I'm hopeful this will help with tackling true blue/green too.
• Absolutely digging the strongly typed nature of objects. I can be pretty confident that big refactors or changes are compiled and written correctly, and only have to focus on logical errors. I can still render to yaml if I must.
I know there’s been talk in the past on Pulumi. I’ve been doing some blog posts on it and maybe there’s some debate still if it’s worth it purely for Kubernetes, but I’m overall pretty happy.
I dig how pulumi handles secret encryption per stack in the yaml/backend too. Super handy, probably the single best feature over Terraform that feels like a no-brainer.
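For anyone who hasn't seen it in practice: `pulumi config set --secret` drops an encrypted value straight into the stack's yaml while plain config stays readable (project/key names and the ciphertext below are placeholders):

```yaml
# Pulumi.dev.yaml — a per-stack config file. Values set with
# `pulumi config set --secret` are stored encrypted for that stack only.
config:
  my-project:region: eastus2
  my-project:apiKey:
    secure: AAABA...   # ciphertext placeholder; decrypted with the stack's key
```

Commit it to git like any other file, and only someone authed against the stack's secrets provider can read the secret back.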
Thanks for this update
Could we do a demo on office hours sometime?
That would be great! I'd need to prep something, but that would be fun to do. It's still not written or talked about much, being so new to the scene, so the more shared out there the better!
@antonbabenko has also been looking into it
@sheldonh would love to see a demo of Pulumi on #office-hours I’ve been sticking my head in the sand about Pulumi because in all honesty… I don’t want it to be good haha.
Interesting bit on secrets though… Are you not a fan of sops? I believe that is cross-platform and is how I solve all of the work you just mentioned.
That’s a cool CLI. I hadn’t looked at that in a while. I think the tool is probably a great solution for many of these things, but it still adds complexity for someone new to it. The way Pulumi solves it is per-stack encryption for secrets with zero fuss, as long as you’ve already authed to Pulumi. Better? Maybe not, but it seems quick and secure.
I starred sops to examine in future when I need to do this again. Thanks for the share!
Well I already had it starred lol. Just apparently didn’t dive into it at the time.
There is something to be said for the “per stack secrets” that you mentioned. It could be accomplished for sops + Terraform, but usually that fine grained of access hasn’t been something I’ve needed in my work. So that is more secure and I can get behind that idea. Then again… how does Pulumi deal with sharing secrets across many stacks?
Hmm. That’s another discussion I can look into if you have a more specific scenario. Just drop a comment on that post with what scenario you are looking at and when I can I’ll either update post or respond. I need some time + more detail to answer the scenario.
BTW, you are the first real person I’ve seen have any kind of real experience with Pulumi so thanks for forging this path
It’s more complex than yaml and mixed so far on how I feel about the random suffix for naming. It seems super useful to deploy uniquely, but it’s a lot more complex than I thought. I also can’t use a lot of the cool tools out of the box like Tilt as this branches into a more unique approach.
So far I’m slower to get new changes in, but the changes are solid, strongly typed, and I’m confident I built it right.
I can’t compare to CDK. I had to focus on one choice.
I think pulumi is pretty cool overall.
I don’t think I would have done it if they didn’t have a kube2pulumi or whatever that tool was. Gave me a great jump start.
I’m currently doing secrets management via a dedicated Hashi Vault enterprise deployment and using the kube CSI driver w/ Vault provider to declare required secrets (CRDS) for a deployment to avoid having secrets stored locally in kube and also not have to use the sidecar pattern for secrets injection
Also try their `import` command. Apparently it can now generate your code from deployments, so it might jump-start you on existing stuff.
Right now azure devops variables groups + kubernetes deployment are in pulumi and I’m plugging away as I work on new stuff to include it.
Haven’t touched the automation api but it looks intriguing
Cool! I had to use configmap mounting a config.yaml into container.
I want to explore something more robust like what you mentioned in the future.
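For reference, the configmap-mount approach I mentioned is roughly this (names, image, and the yaml contents are placeholders):

```yaml
# Sketch: ship a config.yaml via a ConfigMap and mount it into the container
# at /etc/app/config.yaml. All names here are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    logLevel: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config
```

Works fine for non-sensitive config; the CSI/Vault route you described is the nicer answer once actual secrets are involved.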
It is easy in concept but becomes quite a bit of work when it comes to larger deployments with multiple owners (the dev owns the workload, their team may own the secrets, another team may own the kube clusters, yet another owns Vault…. et cetera)
In any case, I too would be interested in a demo if you are able to scratch one together, you always share such cool stuff
I keep questioning myself on using pulumi with k8 as it’s really cool but not as common.
I keep coming back to: I can render to yaml and get back to templates if I need to, and in the meantime Kubernetes is complex already; it’s not like yaml makes it easier. In fact, I feel I have a better understanding of the resources and dependency chain from using Go and requiring strongly typed dependencies and name ID resolution instead of just text names.
Overall I feel it was a reasonable decision as it was either helm or this.
I would love to if I can cobble together one. I appreciate the interest! I would love to share but have to prep a localized version or something.
I hear that, so much work that I do is not easily made externally presentable
In the meantime try playing with kube2pulumi or even pulumi import which supposedly can generate the source code + import into state. Maybe you’ll find some fun stuff with it
I will, I’ve already been poking at that cdk8s framework anyway
So pulumi and kubernetes have tons of overlap, but off the cuff, my first thought would be that using pulumi to drive your kubes gives you a proper programming language to work with instead of yaml and kubectl scripts or terraform modules. That means you get testing frameworks and all the other trimmings like integration/extensibility and package management.
I wrote a blog on my experiences a few months ago in case anyone is interested https://www.anitian.com/eureka-how-pulumi-brought-sanity-to-our-devops-team/
Great writeup! I do feel I still have a rube goldberg style machine with K8 + pulumi, but a strongly typed robust one. It’s hard though as it does require more complexity than pure yaml. However, once you compare to the same option as it expands in more templated yaml or other tools then I think you are picking your poison.
I’ll choose Go over a bunch of yaml tools any day.
Another update related to this https://www.sheldonhull.com/using-randomization-for-pulumi-kubernetes-resources/
Randomization can help ensure logical name uniqueness with Pulumi. Getting to that point can be tricky, but here’s a way to keep it simple.
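The gist of the randomization trick, as a quick shell sketch (the base name is a placeholder; Pulumi does the equivalent for you with auto-naming):

```shell
#!/bin/sh
# Sketch: append a short random hex suffix to a base resource name so each
# deploy gets a unique name, enabling create-before-delete replacement.
BASE="my-app"
SUFFIX="$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
NAME="${BASE}-${SUFFIX}"
echo "$NAME"
```

Because the new resource's name never collides with the old one, the replacement can come up and pass health checks before the old one is torn down.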
I’m pulling back from using Pulumi for Kubernetes primarily, but still using it for most other things. This is mainly because the path of least effort is now helm charts, which are built and maintained by another team. Even though Pulumi can run helm charts, I’m using theirs as-is to get the benefit.
I still think it has a huge amount of promise, but it is a very different path to follow in the K8 ecosystem. I think the most benefit is found when you create components that wrap up a complete stack of resources with best practices applied. If you just use it for the equivalent of helm you aren’t gaining the full power.
In addition, tracking state for K8 deployments is a very foreign concept and makes things much trickier to integrate with other tooling. It feels sorta like you need to go all in or not at all with app deployments with Pulumi to benefit.
Cluster creation, I imagine is great, but app level I’m not certain I’d recommend unless you are going all in.
Thanks for the update!
The National Security Agency (NSA) and CISA have updated their joint Cybersecurity Technical Report (CTR): Kubernetes Hardening Guide, originally released in August 2021, based on valuable feedback and inputs from the cybersecurity community.
Has anyone seen
`User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope`
logged by the cluster-autoscaler on AWS EKS?
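If it's the RBAC gap I think it is, the fix is usually granting the autoscaler's ClusterRole read access to the CSI resources (newer autoscaler manifests include this; sketch below, role name is a placeholder):

```yaml
# Sketch: RBAC rule the cluster-autoscaler needs to read CSI objects when
# its scaling decisions consider storage. The role name is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler-csi
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["csidrivers", "csistoragecapacities"]
    verbs: ["get", "list", "watch"]
```

Worth checking whether your deployed autoscaler manifest/chart version predates the addition of these rules.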
is it possible to use the aws application load balancer controller with a self-managed Kubernetes cluster (not eks)?
yes, it appears so
Looks like others have gotten it working on kops, which is technically “self managed”
yes, it’s not tied to EKS at all
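Right — the main gotcha off EKS is that the controller can't infer everything from the environment, so you typically set the cluster/VPC/region explicitly in the helm chart values (all values here are placeholders):

```yaml
# Sketch: helm values for aws-load-balancer-controller on a self-managed
# cluster running in AWS. clusterName/region/vpcId are placeholders.
clusterName: my-cluster
region: us-east-1
vpcId: vpc-0123456789abcdef0
serviceAccount:
  create: true
  name: aws-load-balancer-controller
```

You'll also need AWS credentials for the controller pod yourself (instance profile or mounted credentials), since IRSA is an EKS-only convenience.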
We have 2 Kubernetes clusters, and we deployed Elasticsearch into them using helm.
Cluster-1 (PROD) has 2 master, 3 data, and 1 client node.
Cluster-2 (DEV) has 2 master, 2 data, and 1 client node.
Both have the same number of shards, but the shard size is a little bigger (a 3-4MB difference) in Cluster-1 (PROD) for some indices. We run a term query to generate a report using Elasticsearch. What I noticed is:
When I ran this query in both clusters, Cluster-2 (DEV) worked fine and produced a result, but the same query failed on Cluster-1 (PROD), causing the data pods to restart. Then we looked at the resource consumption of the data pods in both clusters:
For the last 3 months, Cluster-2 (DEV) memory utilization has looked like this: limit = 4GB, request = 2GB, and in-use is always in the 2.5-2.8GB range.
For the last 3 months, Cluster-1 (PROD) memory utilization has looked like this: limit = 4GB, request = 1.2GB, and in-use is always in the 3.8-3.9GB range.
When I looked at the configuration for the data pods, Cluster-2 (DEV) defines a memory request of 2GB and a memory limit of 4GB, while Cluster-1 (PROD) defines a memory request of 1.2GB and a memory limit of 4GB.
My question is: *even though the shard count is similar in both clusters, why is memory usage always at 3.8-3.9GB in our Cluster-1 (PROD)?* *Is it because the request is too low? Is there a recommended ratio of request to limit resources?*
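Not a full answer, but on the request/limit part: a common approach is to set the memory request close to observed steady-state usage so the scheduler actually reserves what the pod consumes; with request = limit the pod also gets Guaranteed QoS, making it less likely to be evicted under node pressure. A sketch for the PROD data pods (values illustrative, not a tuned recommendation):

```yaml
# Sketch: raise the memory request toward observed usage (~3.9GB) instead
# of 1.2GB; request = limit gives the pod Guaranteed QoS for memory.
resources:
  requests:
    memory: "4Gi"
  limits:
    memory: "4Gi"
```

Also worth confirming the JVM heap (`-Xmx` / `ES_JAVA_OPTS`) is configured identically in both clusters, since Elasticsearch data-node memory usage is dominated by heap size, not just pod requests.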