#helmfile (2021-06)

https://github.com/helmfile/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles

Archive: https://archive.sweetops.com/helmfile/

2021-06-02

jason800 avatar
jason800

Hey @mumoshu! Long time no talk. Hope you’re doing well. Curious if you’d be open to supporting an issue/PR for the secrets handling in helmfile. I have a scenario where my secrets are resolved externally and brought in with an exec, but I’d really like to be able to have them suppressed. I’m thinking that if I could shove them into the secrets: block with a secret engine of none, I could still leverage the suppression with minimal code change?

mumoshu avatar
mumoshu

Hey! Your contribution is always welcome, but I couldn’t understand this part:

with a secret engine of none

Could you share an imaginary configuration that you’d write after your suggested helmfile improvement, so that I can better see your goal?

jason800 avatar
jason800

Actually, after some research I think I was simply expecting the secrets functionality in helmfile to operate differently than it does. It turns out --suppress-secrets literally suppresses Kubernetes Secret objects in the output and nothing else

jason800 avatar
jason800

--suppress-diff is the best option available to me right now for not logging secrets to CI/CD

jason800 avatar
jason800

or I could just go with a sync to produce no output as well, I suppose

jason800 avatar
jason800

One thing I also tried was using the remote secrets stuff via vals. They have a format, secretref+..., which takes the sensitivity of the information into account when displaying

jason800 avatar
jason800

secretref+file://<file>, but helmfile does not appear to notice or care about this, although it does load the data fine enough, in plain text
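For context, the setup being discussed would look roughly like this in a helmfile values block (the release name, chart, and file path are placeholders, not from the actual config):

```yaml
releases:
  - name: my-app              # hypothetical release
    chart: stable/my-app      # placeholder chart
    values:
      # vals resolves this to the file contents; the secretref+ prefix
      # (as opposed to a plain ref+) is meant to mark the value as sensitive
      - apiToken: secretref+file://secrets/api-token.txt
```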

jason800 avatar
jason800

Do you think it’s possible we could determine via vals whether the data came from secretrefs and block it from output?

mumoshu avatar
mumoshu

that will require a lot of refactoring and development effort

2021-06-04

2021-06-06

2021-06-09

Balazs Varga avatar
Balazs Varga

Hello all. When I do a helmfile apply, sometimes I cannot deploy all apps because of an error, but I see helmfile has already created the “release” secret file, and next time it won’t try to deploy the app because it checks the file and doesn’t see any changes, even though the app was never deployed. Is that expected, or is there a setting or param I can set to delete this file on error / revert?

Rene Hernandez avatar
Rene Hernandez

What do you mean by a release secret file? Can you provide a code example?

Balazs Varga avatar
Balazs Varga

I am talking about this file

Rene Hernandez avatar
Rene Hernandez

But where is that coming from? Could you share a code example? That looks like it’s coming from falco

Balazs Varga avatar
Balazs Varga

This release file? It is deployed by helmfile. What kind of code example would you like to see? It can be reproduced with any type of chart: if a deploy fails, it sometimes leaves the release file behind, and the next time you run helmfile it checks the release file, finds no changes, and skips deploying to the cluster. When that happens we delete this release file and apply again.

Rene Hernandez avatar
Rene Hernandez

Unfortunately, it isn’t clear to me what the issue is or how to tackle it. Maybe @mumoshu has more insights

mumoshu avatar
mumoshu


I see helmfile already created the “release” secret file
Are you talking about the fact that helm creates a release secret containing the release info, on install?

mumoshu avatar
mumoshu

Helmfile doesn’t touch it. Perhaps you just want to set releases[].atomic to true in helmfile.yaml to prevent helm from creating the release on failed install.
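A minimal sketch of what this suggestion looks like in helmfile.yaml (the release name and chart are placeholders):

```yaml
releases:
  - name: my-app          # hypothetical release
    chart: stable/my-app  # placeholder chart
    # with atomic: true, helm rolls back a failed upgrade (or uninstalls a
    # failed first install), so no half-applied release record is left behind
    atomic: true
```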

Balazs Varga avatar
Balazs Varga

Thanks, I would like that. Going to try it.

2021-06-10

2021-06-11

grv avatar

Hey guys, I recently started using helmfile and have some questions I wanted to confirm about how people here in the community manage things:

• I am exploring a use case where I want to use a public Helm chart (from an open source repo) with my own values.yml file, but I also want to use some of my own custom templates (with CRDs, kept locally). Is there a way to combine them in a single helmfile -f blah.yml sync command?

• Or is that not possible, and the only way (and best practice) is to get a copy of the actual templates and create in-house charts?

Jonathan avatar
Jonathan

Using your own values.yaml file is no problem, but if you want to add templates and modify the chart, you either have to copy it, or use something like kustomize to modify just the parts of the chart that you want instead

Antoine Taillefer avatar
Antoine Taillefer

You can use JSON/Strategic-Merge patches directly in Helmfile, see https://github.com/roboll/helmfile#highlights and https://github.com/roboll/helmfile/pull/673
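A sketch of what such a patch can look like on a release (the release name, chart, and label are illustrative, not from an actual chart; helmfile converts the chart behind the scenes to apply the patch):

```yaml
releases:
  - name: my-app               # hypothetical release
    chart: stable/my-app       # placeholder upstream chart
    strategicMergePatches:
      # add a label to a Deployment rendered by the upstream chart,
      # without forking the chart itself
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app         # must match a resource the chart renders
          labels:
            team: platform
```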

Tim Birkett avatar
Tim Birkett

I take the approach of putting my dependencies or extra config either in a local helm chart, or using the incubator/raw chart (if it’s a very simple thing) and using labels and “needs” to group the releases.

For example, I deploy the istio-operator with an upstream helm chart, and deploy the custom resource as another helm release: istio-config (using the incubator/raw chart). I specify that the istio-config release needs the istio-operator release and add an arbitrary label app: istio to the releases (in the helmfile). You can then do something like helmfile -e dev -l app=istio sync
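Sketched out, that arrangement looks roughly like this (repos, versions, and the custom resource body are illustrative):

```yaml
releases:
  - name: istio-operator
    chart: istio/istio-operator   # upstream chart; repo alias is illustrative
    labels:
      app: istio

  - name: istio-config
    chart: incubator/raw          # carries the custom resource
    labels:
      app: istio
    needs:
      - istio-operator            # sync only after the operator release
    values:
      - resources:
          - apiVersion: install.istio.io/v1alpha1
            kind: IstioOperator
            metadata:
              name: istio-controlplane
```

Then `helmfile -e dev -l app=istio sync` targets just these two releases, in dependency order.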

2021-06-14

2021-06-16

2021-06-17

Christian avatar
Christian

Do you guys use helmfile and argocd together?

2021-06-18

jason800 avatar
jason800

Has anyone had an issue with kustomize/jsonPatch after v0.139.0?

jason800 avatar
jason800

What used to be a simple jsonPatch applying labels to a manifest now writes out an empty manifest

jason800 avatar
jason800

I have tried both a jsonPatch and a strategicMergePatch. If I remove the patch entirely, I get template output; if I add the patch back in, I get zero output from templating

jason800 avatar
jason800

It actually doesn’t even work if I mimic the tests in the helmfile repo… I still get empty output from template

jason800 avatar
jason800
  - name: prometheus-manifests-{{ $region }}-{{ substr 0 3 $okeCluster }}
    namespace: {{ $namespace }}
    chart: incubator/raw
    force: false
    kubeContext: "{{ $kubeContext }}"
    labels:
      realm: {{ $realm }}
      region: {{ $region }}
      cluster: {{ $okeCluster }}
      app: prometheus-manifests
      namespace: {{ $namespace }}
    values:
      - resources:
          - apiVersion: v1
            kind: Secret
            metadata:
              name: thanos-sidecar-objstore-secret
              namespace: infra-monitoring
              labels:
                app: thanos-sidecar
            type: Opaque
            data:
              oci_config: FOO
    # Use kustomize to patch manifests with relevant labels
    jsonPatches:
      - target:
          name: thanos-sidecar-objstore-secret
        patch:
          - op: replace
            path: /data/oci_config
            value: {{ $object_storage_config | quote }}
          - op: add
            path: /metadata/labels/cluster
            value: "{{ $okeCluster }}"
          - op: add
            path: /metadata/labels/realm
            value: "{{ $realm }}"
          - op: add
            path: /metadata/labels/region
            value: "{{ $region }}"
jason800 avatar
jason800
Helmfile 0.139.0+ breaks chartify/jsonPatch · Issue #1889 · roboll/helmfile

Helmfile versions 0.139.0 and later have broken chartify/kustomize integration. Please see the following examples: test-helmfile.yaml releases: - name: testPatch namespace: default chart: incubator…

2021-06-21

Balazs Varga avatar
Balazs Varga

When I use jsonPatches, is there any param or option to force helmfile to quit if the target that I would like to patch does not exist? E.g. if I added the name incorrectly.

2021-06-22

jamesc avatar

Hi here! Nice to meet you all. I’m a newcomer to helmfile, struggling to find an elegant solution to a problem… wondered if anyone had come up with some solutions in the past.

I’ve inherited a helmfile with 100+ releases defined within. Most of the helm charts for the releases share a common values structure, except they vary in the root element name; e.g. my-app-a and my-app-b in the samples below

# chart-a/values.yaml
my-app-a:
  theCommonThing:
    theCommonSubThing: theValue
    mySpecialThing: mySpecialValue

# chart-b/values.yaml
my-app-b:
  theCommonThing:
    theCommonSubThing: theValue

With DRY in mind, I’m trying to find a way of defining some common values and apply them to the helmfile releases that inherit this structure. For example, using the charts above, I’d like to override theCommonThing.theCommonSubThing for multiple releases, without affecting theCommonThing.mySpecialThing.

The standard approach of layering the values doesn’t quite work because each helm chart has a different root value… and I don’t believe I can target or wildcard this.

Without updating the hundreds of helm charts, the best solution I can think of is to introduce a convention in my helmfile where .Release.Name = root node name, and then use {{ .Release.Name }} as the root node in a shared values file.

#common.yaml.gotmpl
{{.Release.Name}}:
  theCommonThing:
    theCommonSubThing: theOverridenValue

#helmfile.yaml 
releases:
  - name: myRelease
    values:
      - common.yaml.gotmpl
      - my-app-a:
          theCommonThing:
            mySpecialThing: myOverridenValue

However, I’m a little apprehensive about hijacking the release name for this purpose.

Feels like I should be able to use templating or yaml anchors to solve this, but helmfile is quite specific about what can be templated. Interested to hear if anyone else has hit a similar challenge…

jamesc avatar

Sometimes just writing this stuff out helps… I just thought of another solution: duplicate entries in the inline values section for each release, e.g. using my-app-a twice below

releases:
  - name: myRelease
    values:
      - my-app-a:
          {{ tpl (readFile "common.yaml.gotmpl") . | nindent 10 }}
      - my-app-a:
          theCommonThing:
            mySpecialThing: myOverridenValue

2021-06-25

2021-06-27

Joaquin Menchaca avatar
Joaquin Menchaca

I was deploying a ClusterIssuer kind in helmfile using the raw helm chart, and I got this:

STDERR:
  Error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.kube-addons.svc:443/mutate?timeout=10s": dial tcp 10.0.31.194:443: connect: connection refused

Code is from: https://gist.github.com/darkn3rd/594e5ddcf27fe577e04e356884cf7e54. My question is: how do I try this again? Every time I do helmfile apply, it does nothing. I don’t see anything updated after the failure, and helm ls gives:

Comparing release=cert-manager-issuers, chart=itscontained/raw

Joaquin Menchaca avatar
Joaquin Menchaca

I had to helm delete cert-manager-issuers && helmfile apply. The solution doesn’t seem all that, um, elegant.

Joaquin Menchaca avatar
Joaquin Menchaca

I think the root cause of the issue is that the cert-manager webhook is not yet available, and there needs to be a delay before deploying this.

Andrew Nazarov avatar
Andrew Nazarov

Yes, to mitigate this we are doing

...
- events: ["postsync"]
  showlogs: true
  command: "sleep"
  args: ["30s"]
...

in cert-manager release definition and then for the cluster-issuer release we define a dependency:

...
needs:
  - cert-manager/cert-manager
...
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, we recommend the same thing

cloudposse/helmfiles

Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles

voron avatar

I had some time this week to replace the simple sleep with a heuristic, e.g. for an old version of vault-operator:

    hooks:
      # give vault-operator some time to push its CRDs into k8s, or vault-catalog will fail
      # the idea is to
      # 1) wait till all CRDs have been pushed to the k8s API - bash loop
      # 2) wait till all these CRDs have been completely parsed by the k8s API - kubectl wait
      - events: [ "postsync" ]
        showlogs: true
        command: "bash"
        args:
          - -c
          - >
            TARGET=16;
            CRDs=0;
            LABEL="app=kubevault";
            while [[ $CRDs -lt $TARGET ]]; do
            sleep 1;
            CRDs=$(kubectl --context {{`{{.Release.KubeContext}}`}} get crd -l $LABEL|wc -l|tr -d ' ');
            echo "CRD amount=$CRDs, target=$TARGET";
            done;
            RET=1;
            while [[ $RET -ne 0 ]]; do
            kubectl --context {{`{{.Release.KubeContext}}`}} wait --for condition=established --timeout=120s -l $LABEL crd;
            RET=$?;
            done

2021-06-28

2021-06-30
