#helmfile (2021-06)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2021-06-02
Hey @mumoshu! Long time no talk. Hope you’re doing well. Curious if you’d be open to supporting an issue/PR for the secrets handling in helmfile. I have a scenario where my secrets are resolved externally and brought in with an exec, but I’d really like to be able to suppress them. I’m thinking if I could shove them into the secrets: block with a secret engine of none, I could still leverage the suppression with minimal code change?
Hey! Your contribution is always welcome, but I couldn’t understand the “with a secret engine of none” part. Perhaps you can share an imaginary configuration that you’d write after your suggested helmfile improvement, so that I can better see your goal?
Actually, after some research I think I was simply expecting the secrets functionality in helmfile to operate differently than it does. I’m figuring out that --suppress-secrets literally suppresses the output of Kubernetes Secret objects and nothing else. --suppress-diff is the best option available to me right now for not logging secrets to CI/CD, or I could just go with a sync to produce no output as well, I suppose.
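For context, the invocations being weighed there might look like this (a sketch; the environment name is hypothetical):

helmfile -e prod diff --suppress-secrets   # redacts the contents of Kubernetes Secret objects in the diff
helmfile -e prod apply --suppress-diff     # skips printing the diff entirely, so nothing sensitive reaches CI logs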
one thing I also tried was using the remote secrets stuff via vals. They have a format, secretref+..., which takes the sensitivity of the information into account when displaying, e.g. secretref+file://<file>, but helmfile does not appear to notice or care about this, although it does load the data fine enough, in plain text
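For reference, a minimal sketch of what that could look like in a release’s values (release, chart, and path are all hypothetical):

releases:
  - name: my-app              # hypothetical release
    chart: stable/my-app      # hypothetical chart
    values:
      - credentials:
          # resolved by vals at render time; the secretref+ prefix marks the value
          # as sensitive to vals, though (as noted above) helmfile's output doesn't currently honor that
          password: secretref+file:///path/to/secret   # hypothetical path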
Do you think it’s possible we could determine via vals if the data came from secretrefs and block it from output?
that will require a lot of refactoring and development effort
2021-06-04
2021-06-06
2021-06-09
hello all, when I do a helmfile apply, sometimes I cannot deploy all apps because of an error, but I see helmfile has already created the “release” secret, and the next time it won’t try to deploy the app because it checks that secret and does not see any changes, even though the app was never actually deployed. is that expected, or is there a setting or param that I can set to delete this on error / revert?
What do you mean by a release secret file? Can you provide a code example?
I am talking about this file
But where is that coming from? Could you share some code example? That looks like it’s coming from falco
this release file? It is deployed by helmfile. what kind of code example would you like to see? it can be reproduced with any type of chart. if it fails to deploy, sometimes it leaves the release file behind, and the next time you run helmfile it will check the release file, won’t find any changes, and so won’t deploy it to the cluster, and the release will be missed. When that happens we delete this release file and apply again.
Unfortunately, it isn’t clear to me what the issue is and how to tackle it. Maybe @mumoshu has more insights
I see helmfile already created the “release” secret file
Are you talking about the fact that helm creates a release secret containing the release info on install? Helmfile doesn’t touch it. Perhaps you just want to set releases[].atomic to true in helmfile.yaml to prevent helm from keeping the release around after a failed install.
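A minimal sketch of that setting (release and chart names are hypothetical):

releases:
  - name: my-app             # hypothetical release
    chart: stable/my-app     # hypothetical chart
    atomic: true             # helm rolls back / cleans up on a failed install, so the next apply retries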
thanks, I would like that. going to try it.
2021-06-10
2021-06-11
Hey guys, recently started using helmfiles. I had some questions in my head which I wanted to confirm with how people here in the community manage:
• I am exploring a use-case where I want to use a public helm chart (from an open source repo) with my own values.yml file, but I also want to use some of my own custom templates (with CRDs, on my local machine). Is there a possibility to club them together in a single helmfile -f blah.yml sync command?
• Or is that not possible, and the only way (and best practice) is to get a copy of the actual templates and create in-house charts?
Using your own values.yaml file is no problem, but if you want to add templates and modify the chart, you either have to copy it, or use something like kustomize to modify just the parts of the chart that you want instead
You can use JSON/Strategic-Merge patches directly in Helmfile, see https://github.com/roboll/helmfile#highlights and https://github.com/roboll/helmfile/pull/673
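A minimal sketch of that patching feature (release name and patch content are hypothetical):

releases:
  - name: my-app                   # hypothetical release on an unmodified upstream chart
    chart: stable/my-app           # hypothetical chart
    strategicMergePatches:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app             # must match a manifest rendered by the chart
        spec:
          template:
            metadata:
              labels:
                team: platform     # adds a label without forking the chart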
I take the approach of putting my dependencies or extra config either in a local helm chart, or in the incubator/raw chart (if it’s a very simple thing), and using labels and “needs” to group the releases.
For example, I deploy the istio-operator with an upstream helm chart, and deploy the custom resource as another helm release, istio-config (using the incubator/raw chart). I specify that the istio-config release needs the istio-operator release, and add an arbitrary label app: istio to both releases (in the helmfile). You can then do something like helmfile -e dev -l app=istio sync
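A minimal sketch of that layout (the chart sources are assumptions):

releases:
  - name: istio-operator
    chart: istio/istio-operator    # assumed upstream chart location
    labels:
      app: istio
  - name: istio-config
    chart: incubator/raw           # carries the custom resource
    needs:
      - istio-operator             # installed only after istio-operator succeeds
    labels:
      app: istio

Then helmfile -e dev -l app=istio sync deploys just those two releases, in order.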
2021-06-14
2021-06-16
2021-06-17
Do you guys use helmfile and argocd together?
2021-06-18
Has anyone had an issue with kustomize/jsonPatch after v0.139.0?
What used to be a simple jsonPatch applying labels to a manifest now writes out an empty manifest.
I have tried both a jsonPatch and a strategicMergePatch. If I remove the patch entirely, I get template output; if I add the patch back in, I get zero output from templating.
it actually doesn’t even work if I mimic the tests in the helmfile repo… I still get empty output from template
- name: prometheus-manifests-{{ $region }}-{{ substr 0 3 $okeCluster }}
  namespace: {{ $namespace }}
  chart: incubator/raw
  force: false
  kubeContext: "{{ $kubeContext }}"
  labels:
    realm: {{ $realm }}
    region: {{ $region }}
    cluster: {{ $okeCluster }}
    app: prometheus-manifests
    namespace: {{ $namespace }}
  values:
    - resources:
        - apiVersion: v1
          kind: Secret
          metadata:
            name: thanos-sidecar-objstore-secret
            namespace: infra-monitoring
            labels:
              app: thanos-sidecar
          type: Opaque
          data:
            oci_config: FOO
  # Use kustomize to patch manifests with relevant labels
  jsonPatches:
    - target:
        name: thanos-sidecar-objstore-secret
      patch:
        - op: replace
          path: /data/oci_config
          value: {{ $object_storage_config | quote }}
        - op: add
          path: /metadata/labels/cluster
          value: "{{ $okeCluster }}"
        - op: add
          path: /metadata/labels/realm
          value: "{{ $realm }}"
        - op: add
          path: /metadata/labels/region
          value: "{{ $region }}"
Helmfile versions 0.139.0 and later have broken chartify/kustomize integration.
2021-06-21
when I use jsonpatch, is there any param or option to force helmfile to quit if the file that I would like to patch does not exist, e.g. if I added the name incorrectly?
2021-06-22
hi here! nice to meet you all. I’m a newcomer to helmfile and struggling to find an elegant solution to a problem… wondered if anyone had come up with some solutions in the past.
I’ve inherited a helmfile with 100+ releases defined within. Most of the helm charts for the releases share a common values structure, except they vary in the root element name; e.g. my-app-a and my-app-b in the samples below
# chart-a/values.yaml
my-app-a:
  theCommonThing:
    theCommonSubThing: theValue
    mySpecialThing: mySpecialValue

# chart-b/values.yaml
my-app-b:
  theCommonThing:
    theCommonSubThing: theValue
With DRY in mind, I’m trying to find a way of defining some common values and applying them to the helmfile releases that inherit this structure. For example, using the charts above, I’d like to override theCommonThing.theCommonSubThing for multiple releases, without affecting theCommonThing.mySpecialThing.
The standard approach of layering the values doesn’t quite work because each helm chart has a different root value… and I don’t believe I can target or wildcard this.
Without updating the 100s of helm charts, the best solution I can think of is to introduce a convention in my helmfile where .Release.Name = root node name, and then use {{ .Release.Name }} as the root node in a shared values file.
# common.yaml.gotmpl
{{ .Release.Name }}:
  theCommonThing:
    theCommonSubThing: theOverridenValue
# helmfile.yaml
releases:
  - name: myRelease
    values:
      - common.yaml.gotmpl
      - my-app-a:
          theCommonThing:
            mySpecialThing: myOverridenValue
However, I’m a little apprehensive about hijacking the release name for this purpose.
Feels like I should be able to use templating or yaml anchors to solve this, but helmfile is quite specific about what can be templated. Interested to hear if anyone else has hit a similar challenge…
Sometimes just writing this stuff out helps… just thought of another solution… duplicate entries in the inline values section for each release, e.g. using my-app-a twice below
releases:
  - name: myRelease
    values:
      - my-app-a:
          {{ tpl (readFile "common.yaml.gotmpl") . | nindent 8 }}
      - my-app-a:
          theCommonThing:
            mySpecialThing: myOverridenValue
2021-06-25
I wrote this recently:
• https://joachim8675309.medium.com/devops-tools-introducing-helmfile-f7c0197f3aea
Automate Helm Charts with Helmfile
2021-06-27
I was deploying a ClusterIssuer kind in helmfile using the raw helm chart, and I got this:
STDERR:
Error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.kube-addons.svc:443/mutate?timeout=10s": dial tcp 10.0.31.194:443: connect: connection refused
Code is from:
• https://gist.github.com/darkn3rd/594e5ddcf27fe577e04e356884cf7e54
My question is, how do I try this again? Every time I do helmfile apply, it does nothing. I don’t see anything updated after the failure, and helm ls gives:
Comparing release=cert-manager-issuers, chart=itscontained/raw
I had to helm delete cert-manager-issuers && helmfile apply. The solution doesn’t seem all that, um, elegant.
I think the root cause of the issue is that the cert-manager webhook is not yet available, and there needs to be a delay before deploying this.
Yes, to mitigate this we are doing
...
- events: ["postsync"]
  showlogs: true
  command: "sleep"
  args: ["30s"]
...
in the cert-manager release definition, and then for the cluster-issuer release we define a dependency:
...
needs:
  - cert-manager/cert-manager
...
Yes, we recommend the same thing
Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles
I had some time this week to replace the simple sleep with a heuristic, e.g. with an old version of vault-operator:
hooks:
  # let's give vault-operator some time to push CRDs into k8s, or vault-catalog will fail
  # the idea is to
  # 1) wait till all CRDs have been pushed to the k8s api - bash cycle
  # 2) wait till all these CRDs have been completely parsed by the k8s api - kubectl wait
  - events: [ "postsync" ]
    showlogs: true
    command: "bash"
    args:
      - -c
      - >
        TARGET=16;
        CRDs=0;
        LABEL="app=kubevault";
        while [[ $CRDs -lt $TARGET ]]; do
          sleep 1;
          CRDs=$(kubectl --context {{`{{.Release.KubeContext}}`}} get crd -l $LABEL | wc -l | tr -d ' ');
          echo "CRD amount=$CRDs, target=$TARGET";
        done;
        RET=1;
        while [[ $RET -ne 0 ]]; do
          kubectl --context {{`{{.Release.KubeContext}}`}} wait --for condition=established --timeout=120s -l $LABEL crd;
          RET=$?;
        done