#helmfile (2019-09)

https://github.com/helmfile/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles

Archive: https://archive.sweetops.com/helmfile/

2019-09-03

Shikhar Goel avatar
Shikhar Goel

Hi! I've made a Dockerfile that runs helmfile apply, and I want to get the exit status of helmfile apply so that my job can fail if helmfile fails.
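For reference: helmfile already exits non-zero when apply fails, so a shell entrypoint only needs to propagate the exit code. A minimal sketch, assuming a POSIX shell entrypoint:

#!/bin/sh
# Abort the script (and thereby the CI job) on the first failing command.
set -e

# helmfile exits non-zero on failure, so this line alone fails the job.
helmfile apply

# Alternative without set -e, if you need the code explicitly:
# helmfile apply; status=$?
# [ "$status" -eq 0 ] || exit "$status"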

2019-09-05

sung kang avatar
sung kang

Can anyone help me debug why

values:
  - runnerRegistrationToken: ""
    installed: true

releases:
  - name: {{ .Environment.Name }}
    namespace: "gitlab"
    labels:
      chart: "gitlab-runner"
      repo: "gitlab"
      namespace: "gitlab"
      vendor: "gitlab"
    chart: "gitlab/gitlab-runner"
    version: "v0.9.0"
    wait: true
    installed: {{ .Values.installed }}
    tillerNamespace: "gitlab"
    values:
      - imagePullPolicy: Always
        gitlabUrl: git.url.com
        runnerRegistrationToken: {{ .Values.runnerRegistrationToken }}
        runners:
          tags: {{ .Environment.Name }},eks
          serviceAccountName: "admin"
        rbac:
          create: true

bases:
  - defaults.yaml

Results in releases/gitlab-runner.yaml: error during gitlab-runner.yaml.part.0 parsing: template: stringTemplate:16:25: executing "stringTemplate" at <.Values.installed>: map has no entry for key "installed"

sung kang avatar
sung kang

Calling it from my root helmfile like so:

  # Deployment Stack
  - path: releases/gitlab-runner.yaml
    values:
    - installed: {{ .Environment.Values | get "namespaces.installed" true }}
      runnerRegistrationToken: {{ .Environment.Values.gitlab.runnerRegistrationToken }}
sung kang avatar
sung kang

I see this happen every now and then and can never figure out what I did to have it render properly

sung kang avatar
sung kang

I believe the states do get merged properly from the debug logs

first-pass produced: &{utility map[installed:false runnerRegistrationToken:foo] map[]}

first-pass rendering result of "gitlab-runner.yaml.part.0": {utility map[installed:false runnerRegistrationToken:foo] map[]}

second-pass rendering failed, input of "gitlab-runner.yaml.part.0":

sung kang avatar
sung kang

Hmm, very odd; may be a bug. Unless I pass down a nested state value, helmfile doesn't seem to want to render.
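A common workaround for this class of error is to look state values up with helmfile's get template function plus a default, so that a rendering pass that runs before the parent's values are merged in doesn't fail on a missing key. An untested sketch against the release above:

releases:
  - name: {{ .Environment.Name }}
    # tolerate the key being absent during an early rendering pass by
    # supplying a default instead of failing hard
    installed: {{ .Values | get "installed" true }}
    values:
      - runnerRegistrationToken: {{ .Values | get "runnerRegistrationToken" "" | quote }}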

Nelson Jeppesen avatar
Nelson Jeppesen

Is it possible to have helmfile rollback a deployment if the pods return ErrImagePull?

Nelson Jeppesen avatar
Nelson Jeppesen

I have atomic, force and wait set to true, but it does not seem to do that
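For reference, a sketch of those settings as helmDefaults keys (the key names are helmfile's documented ones; the timeout value is illustrative):

helmDefaults:
  wait: true      # wait until all resources are in a ready state
  atomic: true    # roll back automatically if the upgrade fails
  force: true
  timeout: 300    # seconds helm waits before declaring the release failed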

Nelson Jeppesen avatar
Nelson Jeppesen
RESOURCES:
==> v1/Pod(related)
NAME                                                             READY  STATUS        RESTARTS  AGE
testreplease-578d4r5sg  0/1    ErrImagePull  0         3s
testreplease-77b4n22hc  0/1    Terminating   0         6m12s
Nelson Jeppesen avatar
Nelson Jeppesen

Here you can see it killing the old, working pod while the new pod sits in ErrImagePull status, yet reporting success and not rolling back

Nelson Jeppesen avatar
Nelson Jeppesen

I think I’ve got it figured out, in a way. If I use a replica greater than 1, it works as I’d hope

rms1000watt avatar
rms1000watt

can you explain? I might need some kubectl rollout status ... && kubectl rollout undo .. equivalents in helmfile

Nelson Jeppesen avatar
Nelson Jeppesen

Let me know if this makes sense

Nelson Jeppesen avatar
Nelson Jeppesen

Let's say I have a helmfile deployment with a replica count of 2

Nelson Jeppesen avatar
Nelson Jeppesen

then I do a second deployment updating that release with a missing docker image or maybe some other issue

Nelson Jeppesen avatar
Nelson Jeppesen

When I tested the deploy, it killed one of the two pods and waited for the replacement to go healthy. It never did, at which point helm/helmfile rolled back the release and it was back to running 2 healthy pods from the previous release

Nelson Jeppesen avatar
Nelson Jeppesen

Does that help at all @rms1000watt?

rms1000watt avatar
rms1000watt

Oh wait, so this happened automagically?
at which time helm/helmfile rolled back the release and it was back to running 2 healthy pods from the previous release

rms1000watt avatar
rms1000watt

If so, then it’s all good

Nelson Jeppesen avatar
Nelson Jeppesen

It did, but only with a replica of 2 or more. It wasn’t extensive testing though, so I’m not sure if there are edge cases I’m not aware of

mumoshu avatar
mumoshu

I haven't read the whole thread, but if you do want safe k8s deployments even with 1 replica, I'd suggest configuring maxUnavailable and maxSurge in your deployment accordingly
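In plain Kubernetes terms, a sketch of a Deployment rollout strategy that never takes the only replica down before its replacement is ready:

spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra pod during the rollout

With maxUnavailable: 0, the old pod is only terminated once the new one becomes Ready, so a bad image cannot leave you with zero healthy replicas even at replicas: 1.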

mumoshu avatar
mumoshu

For kubectl rollout status/undo I guess the only option today would be to use helmfile postsync hooks
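A sketch of such a hook; the events/command/args keys are helmfile's hook syntax, while the release name, deployment name, and timeout are hypothetical:

releases:
  - name: myapp                  # hypothetical release
    chart: charts/myapp
    hooks:
      - events: ["postsync"]
        showlogs: true
        command: "sh"
        args:
          - "-c"
          # if the rollout never becomes healthy, undo it and fail the sync
          - "kubectl rollout status deployment/myapp --timeout=120s || { kubectl rollout undo deployment/myapp; exit 1; }"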

2019-09-06

2019-09-09

TBeijen avatar
TBeijen

Hi. I recently started using Helmfile and am now trying to deploy 2 environments (clusters) using Jenkins parallel pipeline steps. But one of them results in an error because of missing env vars. It works fine if I disable the 'other parallel step'. So… I think under the hood Helmfile does some in-place intermediate rendering that doesn't play nicely when 2 processes are operating on the same underlying workspace.

Does this look familiar or does anyone have tips on how to debug or where to dig for more information? (Short-term fix will be: No parallelisation)

mumoshu avatar
mumoshu


But one of them results in error because of missing env vars

Hey! Would you mind sharing the actual error message you had seen?

Helmfile does do an intermediate rendering thing, but I'm not sure if it matters in this case.

mumoshu avatar
mumoshu

How do Jenkins parallel pipeline steps work? Do they share the workspace (=directory) across parallel steps, i.e. across multiple helmfile apply runs in this case?

mumoshu avatar
mumoshu

@TBeijen

TBeijen avatar
TBeijen

Yes. It shares the workspace. I managed to work around it by copying helmfile and all related files to /tmp. Both helmfile processes run in separate containers, so then it's isolated.
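A sketch of that kind of isolation in a per-step shell, with illustrative paths and environment name:

# Give each parallel helmfile run a private copy of the workspace and
# its own tempdir, so intermediate renders cannot collide.
WORKDIR="$(mktemp -d)"
export TMPDIR="$WORKDIR/tmp"
mkdir -p "$TMPDIR"

cp -R . "$WORKDIR/src"
cd "$WORKDIR/src"
helmfile -e my-env apply   # my-env is a placeholder environment name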

TBeijen avatar
TBeijen

Initially I was confused, as on my local MacBook I found intermediate helmfile renders in the tmp folder (as denoted by the env var TMPDIR)

TBeijen avatar
TBeijen

Yet in containers it didn't seem to obey TMPDIR and used the working dir?

TBeijen avatar
TBeijen

I scanned the codebase a bit and noticed it using os.TempDir, so that surprised me.

mumoshu avatar
mumoshu

Yeah, it should write all the temp files (=rendered values.yaml files passed to helm's -f flag) somewhere under the tempdir

TBeijen avatar
TBeijen

Shall I create a GitHub issue for it? Having the option to write temp files outside of the workspace seems like a good idea.

mumoshu avatar
mumoshu

That would be great!

TBeijen avatar
TBeijen

Hmm, okay. I'm using the latest helmfile container. Might be how Jenkins does its magic that interferes with tmpdir behavior.

TBeijen avatar
TBeijen

Will create a gh issue later (on mobile now)

mumoshu avatar
mumoshu

Would you mind running helmfile with --log-level=debug, like helmfile --log-level=debug apply, so that we can see where helmfile is actually writing files in the parallel containers?

mumoshu avatar
mumoshu

Okay! Thanks for your cooperation

TBeijen avatar
TBeijen

Yes, already ran with debug. Need to curate some secrets but will include it.

TBeijen avatar
TBeijen
Parallel execution on same Helmfile failing because of processes interfering · Issue #865 · roboll/helmfile

Experienced behaviour When executing Helmfile on Jenkins in parallel in 2 different containers, often one of the deploys fails because of missing vars. This suggests intermediate files being render…

2019-09-10

erik-stephens avatar
erik-stephens

Hi, anyone able to get --args working? I like helmfile diff --args='--context=2', but it fails when using the helmfiles feature. I see this in the debug log:

Building dependency release=foo, chart=../foo exec: helm dependency build ../foo --context=2

Basically, how do I inform it to only use --context=2 when doing helm diff?

mumoshu avatar
mumoshu

Hey! Unfortunately it isn't currently supported. (Actually it worked before, but there was really no concrete specification of how --args should work against a helmfile command that involves multiple helm commands.)

mumoshu avatar
mumoshu

But I'm considering making helmfile diff --args pass the args to helm diff only, as an interim fix

mumoshu avatar
mumoshu

@erik-stephens Does that make sense?

erik-stephens avatar
erik-stephens

@mumoshu That's perfect for me since that's the only arg I care about right now. Thanks! Just an idea if you ever revisit --args: maybe deprecate it if it gets too hairy and provide a way via helmDefaults.COMMAND.args.

mumoshu avatar
mumoshu

helmDefaults.COMMAND.args sounds great Thanks for your feedback!

mumoshu avatar
mumoshu

FYI: --context N has been implemented today and is available since v0.84.0
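So the original invocation can drop --args entirely on v0.84.0 or later:

# show 2 lines of context around each change in the diff output
helmfile diff --context 2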

2019-09-11

Steve Nolen avatar
Steve Nolen

potentially a simple (albeit weird) question: Is there a way to inject the value passed to helmfile via --kube-context into a variable of a release?

Steve Nolen avatar
Steve Nolen

or potentially less abstractly, I’d like to know if there is a way to inject the name of the current cluster I’m operating on into a variable

mumoshu avatar
mumoshu

@Steve Nolen Hey! Interesting question. Unfortunately there's no way today, but I believe it's worth a feature request.

Steve Nolen avatar
Steve Nolen

awesome, I’ll write it up!

in the meantime, I think I’ve got a solution for myself by using

helmDefaults:
  kubeContext: {{ requiredEnv "KUBE_CONTEXT" | quote }}

When I force the env var for setting the context, I can then assume it's available in other locations and use it there. Imperfect, but it seems to work!
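Putting the two pieces together, a sketch (the release and the clusterName value key are hypothetical):

helmDefaults:
  kubeContext: {{ requiredEnv "KUBE_CONTEXT" | quote }}

releases:
  - name: myapp                # hypothetical release
    chart: charts/myapp
    values:
      # reuse the same required env var as a plain value in the release
      - clusterName: {{ requiredEnv "KUBE_CONTEXT" | quote }}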

Steve Nolen avatar
Steve Nolen

while i’m here @mumoshu, helmfile is a really, really great tool. Thank you for your efforts on it!

mumoshu avatar
mumoshu

thanks for the kind words, it really encourages me!

2019-09-12

2019-09-15

yuri avatar

hey guys, helmfile newbie here. I'm looking to keep all my chart values in one file that defines an environment, ppe.yaml:

namespace: ppe-test

nginx:
  replicas: 2
  tag: 1.0.79
  chart: stable/nginx

traefik:
  replicas: 2
  tag: 1.0.79
  chart: stable/traefik

templates.yaml

templates:
  default: &default
    namespace: "{{ .Values.namespace }}"
    missingFileHandler: Error
    chart: "{{ `{{ .Release.Name }}`.chart }}"
    values:
      - envs/{{ .Environment.Name }}.yaml

helmfile.yaml

bases:
  - envs/environments.yaml
---
{{ readFile "templates.yaml" }}

releases:
- name: nginx
  <<: *default

is it possible to access .Release.Name and pass it to env values?

mumoshu avatar
mumoshu

should be possible. but you’re asking because it didn’t work?

yuri avatar

@mumoshu not really, I don't know if it's possible to look up a key inside that:

chart: "{{ `{{ .Release.Name }}`.chart }}"
yuri avatar

the chart part is less critical; I'm just looking for a way to keep all values in one env file for multiple services, and pass the values of each service to a specific release
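One pattern that may work here is to index the environment values by release name inside the release template. An untested sketch: it assumes that during the second rendering pass both .Release.Name and the merged environment .Values are available, and that get accepts a dotted path built with printf; the replicas/tag keys mirror the ppe.yaml above:

templates:
  default: &default
    namespace: "{{ .Values.namespace }}"
    missingFileHandler: Error
    values:
      # second-pass lookup: picks e.g. .Values.nginx.replicas for the
      # release named "nginx"
      - replicas: {{`{{ .Values | get (printf "%s.replicas" .Release.Name) 1 }}`}}
        tag: {{`{{ .Values | get (printf "%s.tag" .Release.Name) "latest" | quote }}`}}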

2019-09-16

Shikhar Goel avatar
Shikhar Goel

Is rollback possible in helmfile, similar to rollback in helm?

starets avatar
starets

is there some “escape sequence” for variable values containing valid templating expressions, like:

abc: "{{  template "slack.general.text" . }}"

(in my case this value shouldn’t be rendered, but passed as-is)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use gotemplate escaping

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

{{`{{Blah}}`}}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
How do I escape “{{” and “}}” delimiters in Go templates?

I’m using AngularJS as the front-end JS library, with Go templates within Revel framework to to generate the markup on the back-end. But both Go and Angular use {{ and }} for delimiters in their

starets avatar
starets

oh, that was easier than I thought. Thank you very much, Erik!
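Applied to starets' value, the backtick raw-string idiom keeps the inner expression from being rendered. A sketch:

# helmfile's template pass emits the inner expression verbatim:
# abc: '{{ template "slack.general.text" . }}'
abc: '{{ `{{ template "slack.general.text" . }}` }}'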


2019-09-17

Jakub Korzeniowski avatar
Jakub Korzeniowski

Hi all :wave:. I want to deploy Atlantis to be able to run helmfile instead of terraform, using the official helm chart (https://www.runatlantis.io/docs/deployment.html#deployment-2). I was wondering whether someone has already built a docker image with helmfile and helm-secrets installed that I could simply substitute for the official one.

Deployment | Atlantis

Atlantis: Terraform Pull Request Automation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jakub Korzeniowski we use atlantis with helmfile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

our strategy is to use geodesic. we bake our cloud automation toolchain into a container

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then we can run that container anywhere (e.g. on our tty or with atlantis)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

2019-09-18

Asaduzzaman Pavel avatar
Asaduzzaman Pavel

hello

Asaduzzaman Pavel avatar
Asaduzzaman Pavel

how do I add a file to a configmap?

Asaduzzaman Pavel avatar
Asaduzzaman Pavel
helmfile -f ../deployments/helmfile.yaml sync
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read helmfile.yaml.part.0: reading document at index 1: yaml: unmarshal errors:
  line 2: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
  line 3: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
  line 4: cannot unmarshal !!str `./servi...` into state.ReleaseSpec
in ../deployments/helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
  line 2: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
  line 3: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
  line 4: cannot unmarshal !!str `./servi...` into state.ReleaseSpec
Error: running "helmfile -f ../deployments/helmfile.yaml sync" failed with exit code 1

starets avatar
starets

could you please show what's inside your helmfile?

mumoshu avatar
mumoshu

@Asaduzzaman Pavel Hey! As @starets asked, would you mind sharing your helmfile.yaml? Those errors seem to indicate that you have a bunch of syntax errors in your helmfile.yaml.

Generally speaking, you can use idioms like

releases:
- name: yourapp
  set:
  - name: key1
    file: path/to/file

and/or {{ readFile "path/to/file" | indent ... }}
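For the readFile variant, a sketch where the release, chart, and value key are hypothetical; the chart is assumed to template the value into a ConfigMap:

releases:
  - name: myconfig
    chart: ./charts/myconfig
    values:
      # read the file at helmfile render time and inline it as a value;
      # the indent must match the YAML block scalar's nesting
      - configFileContent: |
{{ readFile "path/to/file" | indent 10 }}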

Asaduzzaman Pavel avatar
Asaduzzaman Pavel

any idea what's going on?

2019-09-19

Volodymyr Barna avatar
Volodymyr Barna

wow, there's a chat for helmfile, awesome! But unfortunately, I already created an issue for my question. Any ~helm~ help is appreciated! (can't find anything in the docs or in other issues) https://github.com/roboll/helmfile/issues/867

Propagate helmDefaults to `helmfiles` · Issue #867 · roboll/helmfile

Suppose I have a helmfile.yaml with helmDefaults and helmfiles with a list of files. How do I propagate values from helmDefaults to these child helmfiles? I know if I'd used releases instead of…

yuri avatar

hey guys, can I reference Release.Name in values? For example, I deploy a db that creates an svc, and I need this db-svc to be referenced in another app's values

mumoshu avatar
mumoshu

@yuri Hey! So you wanna refer to the db release’s name from the app release, right? If so, unfortunately it isn’t possible.

How about just using a template variable though?

{{ $db_release := "mydb" }}

releases:
- name: {{ $db_release }}
  chart: charts/db
- name: app
  chart: charts/app
  values:
  - db:
      host: {{ $db_release }}:3306
yuri avatar

@mumoshu thanks! not bad at all it can work for me

2019-09-20

2019-09-21

yuri avatar

based on that feature: https://github.com/roboll/helmfile/pull/439 I was assuming that I can use this:

releases:
- name: "appv1"
  chart: private/appv1
  <<: *default
  values:
  - fullnameOverride: "{{`{{ .Release.Name }}`}}"

template is passing fine, but sync results in:

List of releases in error :
RELEASE
appv1
err: release "appv1" in "app.yaml" failed: failed processing release appv1: helm exited with status 1:
  Error: release: "appv1" not found

in ./helmfile.yaml: in .helmfiles[0]: in apps/app.yaml: failed processing release appv1: helm exited with status 1: Error: release: "appv1" not found

if I'm just using - fullnameOverride: "appv1" it works just fine

feat: Release Template by mumoshu · Pull Request #439 · roboll/helmfile

This feature is supposed to help advanced use-cases like Conventional Directory Structure explained in several issues like #428. Newly added configuration keys templates, missingFileHandler, and th…

mumoshu avatar
mumoshu

@yuri Thanks! Looks like it should just work.

I'm curious which version of helmfile you're running. Could you also share logs from helmfile --log-level=debug -f yourhelmfile.yaml <CMD>, so that we can see the intermediate result of the rendered helmfile.yaml? That helps debugging.


2019-09-22

mumoshu avatar
mumoshu

Trying CUE as an alternative to YAML+Go templates:

https://github.com/roboll/helmfile/issues/869

Try CUE for writing helmfile.yaml · Issue #869 · roboll/helmfile

I got to know about the CUE language which is similar to Jsonnet at glance but based on somewhat different theory and goal. CUE provides you a typed, structured template that can be used in various…

2019-09-25

Volodymyr Barna avatar
Volodymyr Barna

Hello. I recreated my kubernetes cluster (everything was deleted), but when I run helmfile list on the newly created cluster it still shows a list of releases with installed:true. Where is helmfile's state saved? I even removed the ~/.helm folder and ran helmfile destroy. It didn't help. Thanks!

mumoshu avatar
mumoshu

Ah

mumoshu avatar
mumoshu

helmfile list works locally and is still useful for seeing the list of releases to be installed BEFORE actually installing them.

But I'd say it IS confusing in contrast to helm list.

mumoshu avatar
mumoshu

helmfile itself doesn't store anything in the cluster. All it depends on are helm releases in the cluster and local files

mumoshu avatar
mumoshu

I’d like to improve the situation somehow.

Should we change helmfile so that helmfile list --local gives the current behavior (list releases managed by helmfile), and helmfile list runs helm list for each managed release to augment the result with info like REVISION, UPDATED, STATUS, etc.?

mumoshu avatar
mumoshu

@Volodymyr Barna Would you agree with the change proposed above?

Volodymyr Barna avatar
Volodymyr Barna

yes, that would be awesome!

Volodymyr Barna avatar
Volodymyr Barna

On the new cluster I want to run in tillerless mode. I followed the docs, but it says tiller not found when I run helmfile apply. And when I run helmfile status it says release: "blah" not found. So I guess it has something to do with state saved locally.

Volodymyr Barna avatar
Volodymyr Barna

wow! when I set tillerless: true on the release itself it works! But not if I do it in helmDefaults. Why? I have a dozen releases and want to keep it DRY

Volodymyr Barna avatar
Volodymyr Barna

btw, releases and helmDefaults are separated by --- because I need to use template values

Volodymyr Barna avatar
Volodymyr Barna

so much magic is going on

Volodymyr Barna avatar
Volodymyr Barna

but other values (like kubeContext) are propagated to each and every release. Can I somehow generate the output template and see what goes where?

mumoshu avatar
mumoshu

@Volodymyr Barna Hmmm, this sounds like a bug that is preventing helmDefaults.tillerless from propagating to releases somehow (it should!). I'll take a look asap. Thx for reporting

Volodymyr Barna avatar
Volodymyr Barna

maybe other values (like force, atomic) are not inherited as well. I dunno how to check. It would be great if helmfile build would show everything that has been inherited. And maybe add a --with-default option which would also show the output file with all the possible parameters, including defaults

Volodymyr Barna avatar
Volodymyr Barna

I wrote a reply; it seems to be my issue, but still, I don't know how to debug and any help would be much appreciated

Andrew Nazarov avatar
Andrew Nazarov

Once I had a feeling that some defaults were not taken into consideration. But tillerless: true has been working for me like a charm. I don't have --- between defaults and releases though. And I'm using a pretty old helmfile version right now.

Nullck avatar

here in my case I have multiple helmfile.yamls in order to use the include feature, something like this:

ydf-stack on feature/cloudability» tree clusters
clusters
├── americas
│   └── helmfile.yaml
├── apac
│   └── helmfile.yaml
├── commonHelmfile.yaml
├── emea
│   └── helmfile.yaml

I have to declare

helmDefaults:
  tillerNamespace: helm-revisions  #dedicated default key for tiller-namespace
  tillerless: true
  force: true

for each one of these files; it would be great to declare it in just the root file. americas/helmfile.yaml includes commonHelmfile.yaml:

helmfiles:
  - path: ../commonHelmfile.yaml
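One way to avoid repeating the block (a sketch; the defaults.yaml path is hypothetical) is to move helmDefaults into a shared base. Note that bases are merged into each state file that lists them, so every child helmfile still needs the one-line bases entry, but the defaults themselves live in a single place:

# clusters/defaults.yaml (hypothetical shared file)
helmDefaults:
  tillerNamespace: helm-revisions
  tillerless: true
  force: true

# clusters/americas/helmfile.yaml
bases:
  - ../defaults.yaml
---
helmfiles:
  - path: ../commonHelmfile.yaml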
Nullck avatar

I don't know if we're talking about the same problem; if not, sorry

aaronbatilo avatar
aaronbatilo

Has anyone had success with generating a lock file for your helmfile?

root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# ls
helmfile.yaml
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# cat helmfile.yaml
releases:
  - name: metrics-server
    chart: stable/metrics-server
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# helmfile deps
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# ls
helmfile.yaml
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# helmfile -v
helmfile version v0.85.2

No matter what I do, I can’t seem to create that lock file

aaronbatilo avatar
aaronbatilo

Nevermind. I needed an explicit repositories block
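For reference, a sketch of the missing block; the stable URL is the chart repository in use at the time. With repositories declared, helmfile deps writes a helmfile.lock pinning the resolved chart versions:

repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

releases:
  - name: metrics-server
    chart: stable/metrics-server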

mumoshu avatar
mumoshu

@aaronbatilo Thanks for sharing your finding!

That makes sense from the implementor's perspective, but sounds a bit confusing from a user's perspective

Any idea how to improve it? Should helmfile fail if helmfile deps is run on a helmfile.yaml without the repositories block?

aaronbatilo avatar
aaronbatilo

Omg, it's mumoshu. Big fan, big fan. I think that deps should fail. It was unintuitive for me to see an exit code of 0 when nothing was happening

mumoshu avatar
mumoshu

Thx for confirming!

Issue opened https://github.com/roboll/helmfile/issues/878

`helmfile deps` should fail when no repos defined · Issue #878 · roboll/helmfile

As discussed in https://sweetops.slack.com/archives/CE5NGCB9Q/p1569439225028200 To not break users depend on the existing behavior, I'd add also some command-line flag to turn the existing beha…

mumoshu avatar
mumoshu

Perhaps another way would be to change Helmfile to just emit a warning message to help you notice that you missed the repositories block. Thoughts?

mumoshu avatar
mumoshu

Pls feel free to reply here or in the issue

aaronbatilo avatar
aaronbatilo

What kind of message do you imagine would be in the warning?

aaronbatilo avatar
aaronbatilo

Oh, I see you left a message in the issue

mumoshu avatar
mumoshu

something like

Unable to update chart dependencies because no `repositories` are defined in your helmfile.yaml
Nothing to update. Did you miss `repositories` in your helmfile.yaml? See <https://github.com/roboll/helmfile/issues/878>
No repositories managed, hence nothing to be updated by Helmfile. Did you miss the repositories block in your helmfile.yaml?

2019-09-26

2019-09-27
