#helmfile (2022-02)

https://github.com/helmfile/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles

Archive: https://archive.sweetops.com/helmfile/

2022-02-01

steenhoven avatar
steenhoven

Did anyone come across a plugin for helmfile that detects new versions? All configurations are present in the helmfile, so it would be straightforward

Alexander avatar
Alexander

Not helmfile, but I use this https://github.com/sstarcher/helm-exporter and integrate it with Prometheus

GitHub - sstarcher/helm-exporter: Export helm stats into the Prometheus format

Export helm stats into the Prometheus format.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use renovatebot with custom expressions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dylan can you share the example you had?

Dylan avatar

Here’s the relevant portion of the renovate.json we’ve been using internally:

{
  "extends": [
    "config:base"
  ],
  "regexManagers": [
    {
      "description": "Updating unannotated Helm chart versions in YAML files",
      "fileMatch": ["(^|/)*.ya?ml$"],
      "matchStrings": [
        " {4}(?<depName>):\n((( {6,}[.-[\n]]+)|)\n)+",
        " {6}vars:\n((( {8,}[.-[\n]]+)|)\n)+",
        " {8}chart_repository: \"?(?<registryUrl>[^\s\"]+)\"?\n {8}chart_version: \"?(?<currentValue>[^\s\"]+)\"?\n"
      ],
      "matchStringsStrategy": "recursive",
      "datasourceTemplate": "helm"
    },
    {
      "description": "Updating annotated Helm chart versions in YAML files",
      "fileMatch": ["(^|/)*.ya?ml$"],
      "matchStrings": [
        " {4}(?<depName>):\n((( {6,}[.-[\n]]+)|)\n)+",
        " {6,}# renovate: (datasource=\S+)? ?(depName=(?<depName>\S+))? ?registryUrl=(?<registryUrl>\S+)\n {6}chart_version: \"?(?<currentValue>[^\s\"]+)\"?\n"
      ],
      "matchStringsStrategy": "recursive",
      "datasourceTemplate": "helm"
    }
  ]
}
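For reference, the first (unannotated) manager above is aimed at YAML shaped roughly like this - the chart key, repository URL, and version here are made-up placeholders:

    my-chart:
      vars:
        chart_repository: https://charts.example.com
        chart_version: "1.2.3"
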
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Show what the comment would look like

Andrew Nazarov avatar
Andrew Nazarov

We are also using Renovate. One of our favourite tools ever - it saves so much time for a small infra/ops/cloud team.

Dylan avatar

Ah, sorry for the delay. I just saw this. A comment/annotation would follow this format (this comment is for Docker, not Helm): # renovate: datasource=docker depName=cloudposse/geodesic registryUrl=docker.io/images. These parameters should be documented in the Renovatebot documentation, iirc.
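And for the Helm case that the second regex manager targets, the annotation would look something along these lines (chart name, registry URL, and version are again made-up placeholders):

      # renovate: datasource=helm depName=my-chart registryUrl=https://charts.example.com
      chart_version: "1.2.3"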

johncblandii avatar
johncblandii
09:32:49 PM

@johncblandii has joined the channel

Dylan avatar
Dylan
11:56:48 PM

@Dylan has joined the channel

2022-02-02

2022-02-03

Alexey Murz Korepov avatar
Alexey Murz Korepov

How can I apply only a single release using the helmfile apply command? And the same question for helmfile diff

Alexey Murz Korepov avatar
Alexey Murz Korepov

Right now I use the trick of temporarily commenting out the other releases in helmfile.yaml, but it’s not the best way…

Max avatar

via a selector, for instance helmfile -l name=traefik apply
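The same global -l selector works for diff too, e.g.:

helmfile -l name=traefik diff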

Alexey Murz Korepov avatar
Alexey Murz Korepov

Yeah, this works, thanks! But why is this flag missing from the helmfile apply --help output?

Max avatar

it's a global option, use helmfile --help

Alexey Murz Korepov avatar
Alexey Murz Korepov

Found it in global help, thank you!


2022-02-04

Alexey Murz Korepov avatar
Alexey Murz Korepov

Can anyone share how you organize per-release password management in environment secrets (possibly even without sharing your actual password values :smirk:)? Is something like what I have below OK?

I use a single environments/production/secrets.yaml file (because I can’t use per-release files, see the issue linked below):

releases:
  # actually the chart names contain dashes, but here I convert them to camelCase to prevent issue https://github.com/helm/helm/issues/2192
  myChartOne:
    mariadb: 
      auth:
        password: value
        rootPassword: value
  myChartTwo:
    mariadb: 
      auth:
        password: value
        rootPassword: value

And insert them in charts/my-chart-one/values.yaml.gotmpl files like this:

mariadb:
  auth:
    username: user
    password: {{ .Values | get "releases.myChartOne.mariadb.auth.password" }}
    rootPassword: {{ .Values | get "releases.myChartOne.mariadb.auth.rootPassword" }}

Can you give me some recommendations on how to improve this? 'Cause it looks a bit messy to me.

Values from secrets added in release object are not available in gotmpl file · Issue #2070 · roboll/helmfile

I have some secrets for specific chart, that not depend on environment, so I fill them into variables/my-chart/secrets.yaml file, and add link to this file into release configuration: releases: - n…

Minh-Quan Tran avatar
Minh-Quan Tran

I do it the same way.

Or you could separate them into multiple secret files:

env/prod/secret-db1.yaml

releases:
  MyChart1:
    mariadb:
      rootPassword:

env/prod/secret-db2.yaml

releases:
  MyChart2:
    mariadb:
      rootPassword:
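Those files could then be pulled into the environment like this (a rough sketch, assuming the usual sops/helm-secrets setup that environment secrets rely on):

environments:
  prod:
    secrets:
      - env/prod/secret-db1.yaml
      - env/prod/secret-db2.yaml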

Rayane BELLAZAAR avatar
Rayane BELLAZAAR

Hi, I’m trying to create my first helmfile bundle to deploy a simple application. I have one issue when I’m trying to use the environments feature. Can we use arrays in values? When I try to create a list of values, helmfile renders it as a string.

bases:
- environments.yaml
---
repositories:
- name: external-dns
  url: https://kubernetes-sigs.github.io/external-dns

releases:
  - name: external-dns
    namespace: core-system
    createNamespace: true
    labels:
      foo: bar
    chart: external-dns/external-dns
    version: 1.7.0
    condition: externalDNS.enabled
    missingFileHandler: Warn
    values:
    - sources: {{ .Values.externalDNS.config.sources }}
      provider: {{ .Values.externalDNS.config.provider }}
      registry: txt
      txtOwnerId: "external-dns-automation"
      domainFilters: {{ .Values.externalDNS.config.domainFilters }}
      policy: sync
      resources: 
        requests:
          cpu: {{ .Values.externalDNS.config.resources.cpu.requests }}
          memory: {{ .Values.externalDNS.config.resources.memory.requests }}
        limits:
          cpu: {{ .Values.externalDNS.config.resources.cpu.limits }}
          memory: {{ .Values.externalDNS.config.resources.memory.limits }}
      serviceAccount:
        annotations:
          iam.gke.io/gcp-service-account: {{ .Values.externalDNS.config.gcp.serviceAccountWorkloadIdentity }}
    verify: true
    wait: true
    waitForJobs: true
    timeout: 60
    recreatePods: true
    force: false
    installed: true
    atomic: true
    cleanupOnFail: false
environments:
  default:
    values:
    - externalDNS:
        enabled: true
        config:
          # CAN WE DO THAT ?
          sources:
          - ingress
          - istio-gateway
          provider: google
          domainFilters: []
          gcp:
            serviceAccountWorkloadIdentity: ""
          resources: 
            cpu:
              requests: 100m
              limits: 100m
            memory:
              requests: 128Mi
              limits: 128Mi
Minh-Quan Tran avatar
Minh-Quan Tran

I think you should use | toYaml | nindent ... to format back the array
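Something like this, for instance (a rough sketch - the nindent width just needs to be deeper than the key it sits under), and the same for domainFilters:

    values:
    - sources: {{ .Values.externalDNS.config.sources | toYaml | nindent 8 }}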

2022-02-12

Phil Chen avatar
Phil Chen

Hi, Helmfile is a great little tool and I am using it to set up many infrastructure k8s services for our k8s clusters. It is much easier than deploying those with helm directly. However, one of the issues (and it is kind of dangerous) that I am facing is the automatic k8s context binding when we apply helmfile with a specific env. This might not be an issue at all, it’s just that I am not really very knowledgeable about helmfile. Without further ado, here is my problem. In helmfile, we can define envs like this:

environments:
  dev:
  prod:
  staging:

Then we can do this to deploy all the releases specific to an env, say the staging env:

helmfile -e staging sync

However, this still applies to the current k8s context. If my current context is set to the dev k8s cluster, the above command will end up applying the staging configuration to the dev k8s cluster. Is there any way to specify the k8s context within the env, so that whenever the above command is entered, it automatically applies to the staging cluster regardless of what the current k8s context is set to?

Any help will be appreciated!

Phil Chen avatar
Phil Chen

To answer my own question, here is how to do this:

{{ if eq .Environment.Name "prod" }}
kubeContext: prod_cluster
{{ end }}
{{ if eq .Environment.Name "staging" }}
kubeContext: staging_cluster
{{ end }}
{{ if eq .Environment.Name "dev" }}
kubeContext: dev_cluster
{{ end }}

mumoshu avatar
mumoshu


Is there any way that we can specify the k8s context within the env
I’m wondering why we’ve never had a feature request for this! (Or did I just forget it?)

Minh-Quan Tran avatar
Minh-Quan Tran

I’ve spent a lot of time thinking about this. My teams have applied to the wrong cluster many times, but only dev ones; for the other envs we stick with pipelines, which set their own context. But it would be nice to have something like:

environment:
  <name>:
    kubeContext:

What do you think? @mumoshu @Phil Chen

Phil Chen avatar
Phil Chen

A different question, how do I specify in the helmfile which release should be included or excluded for a specific env?

Rene Hernandez avatar
Rene Hernandez

You could use the installed field on each of the releases. Like:

- name: ....
  installed: {{ eq .Environment.Name "<env_name>" }}
Phil Chen avatar
Phil Chen

Thanks Rene.

Phil Chen avatar
Phil Chen

One more question: I guess we could do this for multiple envs (say I have dev, staging, and prod) like this:

- name: ....
  installed: {{ or (eq .Environment.Name "dev") (eq .Environment.Name "staging") }}

In the above example, the release will be installed in dev and staging, but not in prod.

alternatively we could do this:

- name: ....
  installed: {{ ne .Environment.Name "prod" }}

If we have many more envs, this expression can get a little bit long. Could this be done via something like what we do for custom per-env values files?

Rene Hernandez avatar
Rene Hernandez

You could have a field per environment that specifies whether the release is enabled or not. E.g:

# review.yaml

release:
  enabled: true

Then in the helmfile:

environments:
  review:
    values:
      - ./review.yaml

releases:
  - name: ....
    installed: {{ .Values.release.enabled }}

If you don’t want to specify the field in all env files, you can modify the installed expression as follows:

installed: {{ .Values.release | get "enabled" false }}

To default to false if the enabled field is not set for a particular environment

Phil Chen avatar
Phil Chen

Error: values don’t meet the specifications of the schema(s) in the following chart(s): jupyterhub:

  • (root): Additional property release is not allowed
Rene Hernandez avatar
Rene Hernandez

That’s the first time I’ve seen that error; maybe change it from

release:
  enabled: true

to

<name>:
  enabled: true

Where <name> is something different than `release`. It looks like release is a protected keyword.

Minh-Quan Tran avatar
Minh-Quan Tran

I think there is a mix-up between the values of the chart and the values of the environment.

2022-02-15

2022-02-16

Alexey Murz Korepov avatar
Alexey Murz Korepov

Can anybody throw out some ideas as to why the helm diff command shows diffs in color, but helmfile diff shows no coloring on the same machine in the same session? Both machines have a pretty similar setup - Ubuntu 20.04 with some additional packages, $TERM = xterm-256color on both. Here are screenshots of the two commands launched in series:

Alexey Murz Korepov avatar
Alexey Murz Korepov

And the interesting thing is that on another machine with a pretty similar setup (Ubuntu 20.04 + some packages) I see colored output from helmfile diff!

Alexey Murz Korepov avatar
Alexey Murz Korepov

And here is a screenshot of the same command on the “problematic” machine:

z0rc3r avatar

After upgrading helm-diff to 3.2.0 or newer, running helmfile diff produces output with no colors when running in a regular terminal (not CI, TERM=xterm-256color). At the same time, running helm diff in the same terminal produces colors as expected.

My guess is that after databus23/helm-diff#240, there is a check https://github.com/databus23/helm-diff/blob/master/cmd/root.go#L47-L50 which is always true due to how helmfile executes helm-diff: helm-diff’s stdout is captured/redirected to a separate file descriptor, which isn’t considered a terminal.

I tried running helmfile --no-color=false diff, but it didn’t enable color in the diff output. The only way to enforce colors I found was helmfile diff --args="--no-color=false", but it’s problematic in that this arg is also passed to the helm list execution and fails there.

Alexey Murz Korepov avatar
Alexey Murz Korepov

Oooh, thanks! Didn’t even think that this could depend on the helm-diff plugin version! Downgrading to a version < 3.2 resolves the issue! Will wait for a proper fix in the next helmfile releases…

Andrew Nazarov avatar
Andrew Nazarov

We are solving this using the following environment variable: HELM_DIFF_COLOR=true

We have helm-diff v3.4.0
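i.e. something like:

export HELM_DIFF_COLOR=true
helmfile diff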


2022-02-17

2022-02-23

2022-02-25

sohaibahmed98 avatar
sohaibahmed98

:wave: Hello, team!

sohaibahmed98 avatar
sohaibahmed98

did anyone use this? https://garden.io/

The end-to-end development and testing platform for Kubernetes and Cloud

Garden removes barriers between development, testing, and CI. Use the same workflows and production-like Kubernetes environments at every step of the process.

Alex Bowers avatar
Alex Bowers

I’m trying to install prometheus but getting an error, any ideas?

Adding repo prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories

Comparing release=monitoring, chart=prometheus-community/kube-prometheus-stack
in ./helmfile.yaml: command "/usr/local/bin/helm" exited with non-zero status:

PATH:
  /usr/local/bin/helm

ARGS:
  0: helm (4 bytes)
  1: diff (4 bytes)
  2: upgrade (7 bytes)
  3: --reset-values (14 bytes)
  4: --allow-unreleased (18 bytes)
  5: monitoring (10 bytes)
  6: prometheus-community/kube-prometheus-stack (42 bytes)
  7: --version (9 bytes)
  8: 33.0.0 (6 bytes)
  9: --namespace (11 bytes)
  10: monitoring (10 bytes)
  11: --detailed-exitcode (19 bytes)

ERROR:
  exit status 1

EXIT STATUS
  1

STDERR:
  Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
  Error: plugin "diff" exited with error

COMBINED OUTPUT:
  ********************
  	Release was not present in Helm.  Diff will show entire contents as new.
  ********************
  Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
  Error: plugin "diff" exited with error
Alex Bowers avatar
Alex Bowers

Hmm, using helm to delete the broken helm install and then re-applying it seemed to make it work
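i.e. something like this, with the release and namespace names taken from the output above:

helm uninstall monitoring --namespace monitoring
helmfile apply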

Zack Johnson avatar
Zack Johnson

Hi! My team noticed that our helmDefaults weren’t being applied and I think I’m missing some piece of understanding about the interactions between sub-helmfiles and helmfile layering functionality. Our helmfile.yaml looks basically like this:

environments:
  environ1:
  environ2:
---

bases:
  - ./helmDefaults.yaml

--- 

helmfiles:
  - "./environ1_helmfile.yaml"
  - "./environ2_helmfile.yaml" 

Is there something obviously wrong with this setup? I noticed that the helmDefaults do get applied if we remove the bases section and then just copy the helmDefaults to the top of environ1_helmfile.yaml and environ2_helmfile.yaml.

Are there any docs I should be reviewing for this topic?

Zack Johnson avatar
Zack Johnson

So after reading https://github.com/roboll/helmfile/blob/master/docs/writing-helmfile.md#layering-state-files and https://github.com/roboll/helmfile#separating-helmfileyaml-into-multiple-independent-files, I’ve got a guess. Tell me if this sounds right.

• You use more than 1 helmfile as a matter of convenience when your main helmfile.yaml gets too unwieldy. Each helmfile is treated independently and there is nothing special about helmfile.yaml other than that’s where helmfile looks first by default.
• You use the helmfiles: keyword just to specify where your helmfiles are when you have multiple, so you can run a helmfile command, have it scan helmfile.yaml, which then points it to the other helmfiles to scan.
• You use the bases keyword to extract out repeated parts, such that if you have multiple helmfiles that all reference the same set of environments, you can just put the environments list in an environments.yaml and then reference that file as a base in all your helmfiles. This way, if you add an environment to the list, you only have to add it in one place.

So my problem, I think, is that we were imbuing helmfile.yaml with more significance than it deserved. When we put in the helmfile.yaml:

bases:
  - ./helmDefaults.yaml

we expected these defaults to be applied to all of the helmfiles referenced in the helmfile.yaml under helmfiles:. But this is a bad assumption. Instead, what’s happening is that the helmDefaults are indeed being added to the helmfile.yaml. But because we’re not creating any releases in this helmfile, the helmDefaults don’t do anything. Then, when the other environ1_helmfile.yaml and environ2_helmfile.yaml are processed, there is no bases section in them, so the helmDefaults weren’t being layered on and so weren’t being applied.

So the best practice here I think is to remove the bases section from helmfile.yaml but add that same section to environ1_helmfile.yaml and environ2_helmfile.yaml
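So each child helmfile would start with something like this (a sketch):

# environ1_helmfile.yaml (and likewise environ2_helmfile.yaml)
bases:
  - ./helmDefaults.yaml

---

releases:
  - name: ....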

Zack Johnson avatar
Zack Johnson

That seems to work as expected. Yay!


2022-02-28

Luc Juggery avatar
Luc Juggery

Hello everyone, I’m quite new to Helmfile and not sure about the way to do the following. I have a values.common.yaml that is common to all environments, and I only need to override a couple of properties that are env specific. For instance, both of these properties need to be changed:
• domains.www
• "grafana.ini".server.root_url
I use the following helmfile.yaml, but I do not manage to have the .Values.domains.www taken into account in the chart.

environments:
  test:
    values:
    - 'grafana.ini':
        server:
          root_url: grafana_test_url
      domains:
          www: app_test_url
  prod:
    values:
    - 'grafana.ini':
        server:
          root_url: grafana_prod_url
    - domains:
        www: app_prod_url

releases:
  - name: monitoring
    namespace: monitoring
    labels:
      app: monitoring
    chart: .
    values:
      - ./values/common.yaml
    secrets:
      - ./values/secrets.common.yaml

Any hints on what I’m missing?
