#helmfile (2019-09)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2019-09-03
Hi… Actually I have made a Dockerfile that runs helmfile apply. I want to get the status of 'helmfile apply' so that my job can fail if helmfile fails.
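(Side note on the CI question above: helmfile apply already exits non-zero on failure, so a wrapper only has to propagate that exit code. A minimal sketch, assuming a plain shell entrypoint script; the script contents are illustrative:)
#!/bin/sh
set -e                 # abort the script (and therefore the CI job) if any command fails
helmfile apply         # exits non-zero when a release fails, which now fails the job
echo "helmfile apply succeeded"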
2019-09-05
Can anyone help me debug why
values:
  - runnerRegistrationToken: ""
    installed: true

releases:
  - name: {{ .Environment.Name }}
    namespace: "gitlab"
    labels:
      chart: "gitlab-runner"
      repo: "gitlab"
      namespace: "gitlab"
      vendor: "gitlab"
    chart: "gitlab/gitlab-runner"
    version: "v0.9.0"
    wait: true
    installed: {{ .Values.installed }}
    tillerNamespace: "gitlab"
    values:
      - imagePullPolicy: Always
        gitlabUrl: git.url.com
        runnerRegistrationToken: {{ .Values.runnerRegistrationToken }}
        runners:
          tags: {{ .Environment.Name }},eks
          serviceAccountName: "admin"
        rbac:
          create: true

bases:
  - defaults.yaml
Results in releases/gitlab-runner.yaml: error during gitlab-runner.yaml.part.0 parsing: template: stringTemplate:16:25: executing "stringTemplate" at <.Values.installed>: map has no entry for key "installed"
Calling it from my root helmfile like so:
# Deployment Stack
- path: releases/gitlab-runner.yaml
  values:
    - installed: {{ .Environment.Values | get "namespaces.installed" true }}
      runnerRegistrationToken: {{ .Environment.Values.gitlab.runnerRegistrationToken }}
I see this happen every now and then and can never figure out what I did to have it render properly
I believe the states do get merged properly from the debug logs
first-pass produced: &{utility map[installed:false runnerRegistrationToken:foo] map[]}
first-pass rendering result of "gitlab-runner.yaml.part.0": {utility map[installed:false runnerRegistrationToken:foo] map[]}
second-pass rendering failed, input of "gitlab-runner.yaml.part.0":
Hmm very odd, may be a bug. Unless I pass down a nested state value helmfile doesn’t seem to want to render.
Is it possible to have helmfile rollback a deployment if the pods return ErrImagePull? Have atomic, force, and wait set to true but it does not seem to do that
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
testreplease-578d4r5sg 0/1 ErrImagePull 0 3s
testreplease-77b4n22hc 0/1 Terminating 0 6m12s
here you see it killing the old working pod with a new pod in ErrImagePull status but reporting a success and not rolling back
I think I've got it figured out, in a way. If I use a replica greater than 1, it works as I'd hope
can you explain? I might need some kubectl rollout status ... && kubectl rollout undo ... equivalents in helmfile
Let me know if this makes sense
Let's say I have a helmfile deployment, with a replica of 2
then I do a second deployment updating that release with a missing docker image or maybe some other issue
when I tested the deploy it killed one of the two pods. Waited for it to go healthy - it never did, at which time helm/helmfile rolled back the release and it was back to running 2 healthy pods from the previous release
Does that help at all @rms1000watt?
Oh wait, so this happened automagically?
at which time helm/helmfile rolled back the release and it was back to running 2 healthy pods from the previous release
If so, then it’s all good
It did, but only with a replica of 2 or more. It wasn’t extensive testing though, so I’m not sure if there are edge cases I’m not aware of
I haven't read fully, but if you do want safe k8s deployments even with 1 replica, I'd suggest configuring maxUnavailable and maxSurge in your deployment accordingly
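(A sketch of the suggested Deployment strategy, assuming a single-replica Deployment; the name, labels, and image are hypothetical and would normally come from the chart:)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the old pod down before a replacement is ready
      maxSurge: 1         # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:1.0.79   # a bad tag now fails the rollout instead of killing the only working pod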
For kubectl rollout status/undo, I guess the only option today would be to use helmfile postsync hooks
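(A rough sketch of that postsync-hook idea, assuming the release deploys a Deployment with the same name; the kubectl commands and timeout are illustrative, not a tested recipe:)
releases:
  - name: myapp
    chart: ./charts/myapp
    hooks:
      - events: ["postsync"]
        command: "sh"
        args:
          - "-c"
          - "kubectl rollout status deployment/myapp --timeout=120s || kubectl rollout undo deployment/myapp"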
2019-09-06
2019-09-09
Hi. I recently started using Helmfile and now try to have 2 environments (clusters) deployed using Jenkins parallel pipeline steps. But one of them results in error because of missing env vars. It works fine if I disable the 'other parallel step'. So… I think under the hood Helmfile does some in-place intermediate rendering that doesn't play nice when 2 processes are operating on the same underlying workspace.
Does this look familiar or does anyone have tips on how to debug or where to dig for more information? (Short-term fix will be: No parallelisation)
But one of them results in error because of missing env vars
Hey! Would you mind sharing the actual error message you had seen?
Helmfile does intermediate rendering thing but I’m not sure if it matters in this case.
How do Jenkins parallel pipeline steps work? Do they share the workspace (=directory) across parallel steps - in this case multiple helmfile apply runs?
@TBeijen
Yes. It shares the workspace. I managed to work around it by copying helmfile and all related files to /tmp. Both helmfile processes run in separate containers so then it’s isolated.
Initially I was confused as on my local MacBook I found intermediate helmfile renders in the tmp folder (as denoted by env var TMPDIR).
Yet in containers it didn't seem to obey tmpdir and used the working dir?
I scanned the codebase a bit and seemed to notice it using os.tempDir, so that surprised me.
Yeah it should write all the temp files(=rendered values.yaml files passed to helm “-f” flags) somewhere under the tempdir
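(One thing to try while debugging, assuming helmfile really does go through Go's os.TempDir, which honors TMPDIR on Unix: point each parallel step at its own temp dir so the runs can't trample each other's rendered files:)
export TMPDIR="$(mktemp -d)"        # unique per container/step
helmfile --log-level=debug apply    # the debug log shows where the temp files actually land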
I can create a GitHub issue for it? It seems like a good option to be able to have it write temp files outside of the workspace.
That would be great!
Hmm, okay. I'm using the latest helmfile container. Might be how Jenkins does its magic that interferes with tmpdir behavior.
Will create a gh issue later (on mobile now)
Would you mind running helmfile with --log-level=debug, like helmfile --log-level=debug apply, so that we can see where helmfile is actually writing files in the parallel containers?
Okay! Thanks for your cooperation
Took some time (busy, busy): https://github.com/roboll/helmfile/issues/865
Experienced behaviour When executing Helmfile on Jenkins in parallel in 2 different containers, often one of the deploys fails because of missing vars. This suggests intermediate files being render…
2019-09-10
Hi, anyone able to get --args working? I like helmfile diff --args='--context=2' but it fails when using the helmfiles feature. I see this in the debug log:
Building dependency release=foo, chart=../foo exec: helm dependency build ../foo --context=2
Basically, how do I inform it to only use --context=2 when doing helm diff?
Hey! Unfortunately it isn't currently supported. (Actually it worked before, but there was really no concrete specification on how --args should work against any helmfile command that involves multiple helm commands.)
So the direction is https://github.com/roboll/helmfile/issues/787
Extracted from #768 (comment)
But I'm considering making helmfile diff --args pass the args to helm diff only, as an interim fix
@erik-stephens Does that make sense?
@mumoshu That's perfect for me since that's the only arg I care about right now. Thanks! Just an idea if you ever revisit --args: maybe deprecate that if too hairy and provide a way via helmDefaults.COMMAND.args.
helmDefaults.COMMAND.args sounds great. Thanks for your feedback!
FYI: --context N has been implemented today and is available since v0.84.0
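(Usage after upgrading, for example:)
helmfile diff --context 2    # only the helm-diff invocation gets the extra context lines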
2019-09-11
potentially a simple (albeit weird) question: Is there a way to inject the value passed to helmfile via --kube-context into a variable of a release?
or potentially less abstractly, I’d like to know if there is a way to inject the name of the current cluster I’m operating on into a variable
@Steve Nolen Hey! Interesting question. Unfortunately there's no way, but I believe it's worth a feature request.
awesome, I’ll write it up!
in the meantime, I think I’ve got a solution for myself by using
helmDefaults:
  kubeContext: {{ requiredEnv "KUBE_CONTEXT" | quote }}
when I force the env for setting context, I can then assume the env var is available in other locations and use it there. imperfect but it seems to work!
while i’m here @mumoshu, helmfile is a really, really great tool. Thank you for your efforts on it!
thanks for kind words it really encourages me!
2019-09-12
2019-09-15
hey guys, helmfile newbie here, I'm looking to keep all my charts' values in 1 file that defines an environment: ppe.yaml
namespace: ppe-test
nginx:
  replicas: 2
  tag: 1.0.79
  chart: stable/nginx
traefik:
  replicas: 2
  tag: 1.0.79
  chart: stable/traefik
templates.yaml
templates:
  default: &default
    namespace: "{{ .Values.namespace }}"
    missingFileHandler: Error
    chart: "{{ `{{ .Release.Name }}`.chart }}"
    values:
      - envs/{{ .Environment.Name }}.yaml
helmfile.yaml
bases:
  - envs/environments.yaml
---
{{ readFile "templates.yaml" }}
releases:
  - name: nginx
    <<: *default
is it possible to access the .Release.Name and to pass it to env values?
should be possible. but you’re asking because it didn’t work?
@mumoshu not really, I don't know if it's possible to find a key inside that
chart: "{{ `{{ .Release.Name }}`.chart }}"
the chart part is less critical, I'm just looking for a way to include all values in 1 env file to have multiple services, and pass the values of each service to a specific release
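(One way to wire that up, sketched under the assumption that envs/ppe.yaml above is loaded as environment values - e.g. via the environments: block in envs/environments.yaml - so it is reachable as .Values in the helmfile template; the chart value keys like replicaCount and image.tag are hypothetical and depend on the chart:)
releases:
  - name: nginx
    namespace: "{{ .Values.namespace }}"
    chart: "{{ .Values.nginx.chart }}"
    values:
      - replicaCount: {{ .Values.nginx.replicas }}
        image:
          tag: "{{ .Values.nginx.tag }}"
  - name: traefik
    namespace: "{{ .Values.namespace }}"
    chart: "{{ .Values.traefik.chart }}"
    values:
      - replicaCount: {{ .Values.traefik.replicas }}
        image:
          tag: "{{ .Values.traefik.tag }}"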
2019-09-16
Is rollback possible in helmfile…similar to rollback in helm
is there some “escape sequence” for variable values containing valid templating expressions, like:
abc: "{{ template "slack.general.text" . }}"
(in my case this value shouldn’t be rendered, but passed as-is)
Use gotemplate escaping
{{`{{Blah}}`}}
oh, that was easier than I thought. Thank you very much, Erik!
2019-09-17
Hi All. I want to deploy Atlantis to be able to run helmfile instead of terraform. Using the official helm chart for that (https://www.runatlantis.io/docs/deployment.html#deployment-2). I was wondering whether someone has already built a docker image with helmfile and helm secrets installed that I could simply substitute for the official one.
Atlantis: Terraform Pull Request Automation
@Jakub Korzeniowski we use atlantis with helmfile
our strategy is to use geodesic. we bake our cloud automation toolchain into a container
then we can run that container anywhere (e.g. on our tty or with atlantis)
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
2019-09-18
hello
how do I add a file to configmaps?
helmfile -f ../deployments/helmfile.yaml sync
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read helmfile.yaml.part.0: reading document at index 1: yaml: unmarshal errors:
line 2: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
line 3: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
line 4: cannot unmarshal !!str `./servi...` into state.ReleaseSpec
in ../deployments/helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 2: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
line 3: cannot unmarshal !!str `./depen...` into state.ReleaseSpec
line 4: cannot unmarshal !!str `./servi...` into state.ReleaseSpec
Error: running "helmfile -f ../deployments/helmfile.yaml sync" failed with exit code 1
might you pls show what’s inside your helmfile?
@Asaduzzaman Pavel Hey! As @starets asked, would you mind sharing your helmfile.yaml? Those errors seem to indicate that you have a bunch of syntax errors in your helmfile.yaml.
Generally speaking, you can use idioms like
releases:
  - name: yourapp
    set:
      - name: key1
        file: path/to/file
and/or {{ readFile "path/to/file" | indent ... }}
any idea what's going on?
2019-09-19
wow, there’s a chat for helmfile, awesome! But unfortunately, I already created an issue for my question. Any ~helm~elp is appreciated! (can’t find anything in docs and in other issues) https://github.com/roboll/helmfile/issues/867
Suppose I have a helmfile.yaml with helmDefaults and helmfiles with a list of files. How do I propagate values from helmDefaults to these child helmfiles? I know if I'd used releases instead of…
hey guys, can I reference Release.Name in values? for example, I deploy a db that creates an svc, and I need this db-svc to be referenced in another app's values
@yuri Hey! So you wanna refer to the db release’s name from the app release, right? If so, unfortunately it isn’t possible.
How about just using a template variable though?
{{ $db_release := "mydb" }}
releases:
  - name: {{ $db_release }}
    chart: charts/db
  - name: app
    chart: charts/app
    values:
      - db:
          host: {{ $db_release }}:3306
@mumoshu thanks! not bad at all it can work for me
2019-09-20
2019-09-21
based on that feature: https://github.com/roboll/helmfile/pull/439 I was assuming that I can use this:
releases:
  - name: "appv1"
    chart: private/appv1
    <<: *default
    values:
      - fullnameOverride: "{{`{{ .Release.Name }}`}}"
template is passing fine, but sync results in:
List of releases in error :
RELEASE
appv1
err: release "appv1" in "app.yaml" failed: failed processing release appv1: helm exited with status 1:
Error: release: "appv1" not found
in ./helmfile.yaml: in .helmfiles[0]: in apps/app.yaml: failed processing release appv1: helm exited with status 1: Error: release: "appv1" not found
if I'm just using - fullnameOverride: "appv1" it works just fine
This feature is supposed to help advanced use-cases like Conventional Directory Structure explained in several issues like #428. Newly added configuration keys templates, missingFileHandler, and th…
@yuri Thanks! Looks like it should just work.
I'm curious, but which version of helmfile are you running, and could you share logs from helmfile --log-level=debug -f yourhelmfile.yaml <CMD> so that you can see the intermediate result of the rendered helmfile.yaml that helps debugging?
2019-09-22
Trying CUE as an alternative to YAML+Go templates:
I got to know about the CUE language which is similar to Jsonnet at glance but based on somewhat different theory and goal. CUE provides you a typed, structured template that can be used in various…
2019-09-25
Hello. I recreated the kubernetes cluster (everything is deleted) but when I run helmfile list on the newly created cluster it still shows a list of releases with installed:true. Where is helmfile's state saved? I even removed the ~/.helm folder and ran helmfile destroy. It didn't help. Thanks!
Ah, helmfile list works locally and is still useful to see the list of releases being installed BEFORE actually installing them. But I'd say it IS confusing in contrast to helm list.
helmfile itself doesn't store anything in the cluster. All it depends on are helm releases in the cluster and local files
I’d like to improve the situation somehow.
Should we change helmfile to use helmfile list --local for the current behavior - list releases managed by helmfile - and use helmfile list for running helm list for each managed release to augment the result with info like REVISION, UPDATED, STATUS, etc?
@Volodymyr Barna Would you agree with the change proposed above?
On the new cluster I want to run in tillerless mode. I followed the docs but it says tiller not found when I run helmfile apply. Or when I run helmfile status it says release: "blah" not found. So I guess it has something to do with the state saved locally.
wow! when I do tillerless: true on the release itself it works! but not if I do it in helmDefaults, why? I have a dozen releases and I want to keep it DRY
btw releases and helmDefaults are separated by ---
Because I need to use template values
so much magic is going on
but other values (like kubeContext) are propagated to each and every release. Can I somehow generate output template and see what goes where?
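(For reference, the two layouts being compared - a minimal sketch with hypothetical names; per the reports above, the per-release form works while the helmDefaults form, at least when split off into a separate --- document, was not being applied:)
# works for the reporter: per-release
releases:
  - name: myrelease
    chart: stable/something
    tillerless: true
    tillerNamespace: helm

# expected to be equivalent, but reportedly not picked up:
helmDefaults:
  tillerless: true
  tillerNamespace: helm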
@Volodymyr Barna Hmmm, this sounds like a bug that is preventing helmDefaults.tillerless from being propagated to releases somehow (it should be!). I'll take a look asap. Thx for reporting
Issue opened: https://github.com/roboll/helmfile/issues/875
Per https://sweetops.slack.com/archives/CE5NGCB9Q/p1569425828026200 Is this a bug? Needs investigation.
maybe other values (like force, atomic) are not inherited as well. I dunno how to check. It would be great if helmfile build would show everything that has been inherited. And maybe add a --with-default option which would also show the output file with all the possible parameters, including defaults
I wrote a reply, seems to be my issue, but still, I don't know how to debug and any help would be much appreciated
Once I had a feeling that some defaults were not taken into consideration. But tillerless: true has been working for me like a charm. I don't have --- between defaults and releases though. And I'm using a pretty old helmfile version right now.
here in my case I have multiple helmfile.yaml files in order to use the include feature, something like this:
ydf-stack on feature/cloudability» tree clusters [20:14:58]
clusters
├── americas
│ └── helmfile.yaml
├── apac
│ └── helmfile.yaml
├── commonHelmfile.yaml
├── emea
│ └── helmfile.yaml
I have to declare
helmDefaults:
  tillerNamespace: helm-revisions # dedicated default key for tiller-namespace
  tillerless: true
  force: true
for each one of these files; it would be great to declare it in just the root files. americas/helmfile.yaml has an include to commonHelmfile.yaml.
helmfiles:
- path: ../commonHelmfile.yaml
I don't know if we're talking about the same problem, if not sorry
Has anyone had success with generating a lock file for your helmfile?
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# ls
helmfile.yaml
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# cat helmfile.yaml
releases:
- name: metrics-server
chart: stable/metrics-server
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# helmfile deps
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# ls
helmfile.yaml
root@d8760d0f2279:~/src/helmfiles/aws/us-west-2/l2v-dev# helmfile -v
helmfile version v0.85.2
No matter what I do, I can’t seem to create that lock file
Nevermind. I needed an explicit repositories block
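(For anyone else hitting this, a minimal sketch of the fix - declaring the repository explicitly so helmfile deps has something to resolve and writes helmfile.lock; the stable repo URL shown is the one in use at the time:)
repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

releases:
  - name: metrics-server
    chart: stable/metrics-server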
@aaronbatilo Thanks for sharing your finding!
That makes sense from the implementor's perspective, but sounds a bit confusing from a user's perspective
Any idea to improve it? Should helmfile fail if helmfile deps is run on a helmfile.yaml without the repositories block?
Omg it's mumoshu. Big fan, big fan. I think that deps should fail. It was unintuitive for me to see an exit code of 0 but nothing was happening
Thx for confirming!
Issue opened https://github.com/roboll/helmfile/issues/878
As discussed in https://sweetops.slack.com/archives/CE5NGCB9Q/p1569439225028200 To not break users depend on the existing behavior, I'd add also some command-line flag to turn the existing beha…
Perhaps another way would be to change Helmfile to just emit some warning message to help you notice that you missed the repositories block. Thoughts?
Pls feel free to reply here or in the issue
What kind of message do you imagine would be in the warning?
Oh, I see you left a message in the issue
something like
Unable to update chart dependencies because no `repositories` are defined in your helmfile.yaml
Nothing to update. Did you miss `repositories` in your helmfile.yaml? See <https://github.com/roboll/helmfile/issues/878>
No repositories managed hence nothing to be updated by Helmfile. Did you miss repositories block in your helmfile.yaml?