#helmfile (2019-07)

https://github.com/roboll/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles. Archive: https://archive.sweetops.com/helmfile/

2019-07-31

Emanuel avatar
Emanuel

I think I don’t fully understand how environments work. If I have

helmfiles:
- ./*/helmfile.yaml

And I want to use helmfile -e production apply, then does each of those sub-helmfiles have to define the environment? Otherwise you’d get err: no releases found that matches specified selector() and environment(production), in any helmfile

So what would that look like? Do I put this code at the top of each subhelmfile? There has to be a better way!

bases:
{{- range $_, $file := ( exec "sh" (list "-c" "echo ../environments/*yaml") | splitList " " ) }}
- {{ trim $file }}
{{ end }}
mumoshu avatar
mumoshu


does each of those subhelmfiles have to define the environment

Yep

mumoshu avatar
mumoshu


There has to be a better way!

Definitely

mumoshu avatar
mumoshu


If environments are global, I’d expect to define them in the parent helmfile only.

The point is that they aren’t global. To make each sub-helmfile modular, they are intentionally not global.

mumoshu avatar
mumoshu

Btw, did I implement globbing in bases:? That would reduce the boilerplate to:

bases:
- ../environments/*yaml

If it doesn’t work, it would be worth a feature request

mumoshu avatar
mumoshu

My belief is that sub-helmfiles shouldn’t rely on environments, so that they are modular and reusable.

Once you’ve removed all the environments from sub-helmfiles, https://github.com/roboll/helmfile/issues/762 will allow you to pass necessary helmfile values as template params of sub-helmfiles
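Until #762 lands, the selective inheritance from #725 can be used to hand values down explicitly. A minimal sketch (the path and key names here are hypothetical, not from the thread):

```yaml
# parent helmfile.yaml: pass only the values each sub-helmfile needs,
# instead of defining environments inside every sub-helmfile.
helmfiles:
- path: apps/helmfile.yaml
  values:
  # explicitly handed down; usable as a template value in the sub-helmfile
  - tillerVersion: 2.14.2
```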

Allow opting in for inheriting all the values to sub-helmfile · Issue #762 · roboll/helmfile

Helmfile doesn't inherit values to sub-helmfiles by default today. It does support selectively inheriting some values(#725), but there's no easy way to inherit all the values. Perhaps it wo…

Emanuel avatar
Emanuel

If environments are global, I’d expect to define them in the parent helmfile only.

Mical avatar
Mical

A --no-hooks option to helmfile would be nice.

mumoshu avatar
mumoshu

I’m hearing you - I’d appreciate it if you could write up a feature request with your use-case

https://github.com/roboll/helmfile/issues/new


mumoshu avatar
mumoshu

Alternatively, would it make sense to extend our helmfile.yaml syntax to make hooks conditional depending on environment names or helmfile values?

mumoshu avatar
mumoshu

That’s possible today with helmfile templates, but we’re having a discussion to make it possible without templates. For toggling releases without templates, we have https://github.com/roboll/helmfile/issues/781

feat: First-class Support for Conditional Release · Issue #781 · roboll/helmfile

I'm seeing emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates for making releases optional for a subset of environments. To me, helmfile templates are n…

Mical avatar
Mical

That would be nice, but I’d like the option to disable hooks globally, for which I think a --no-hooks flag would make the most sense. I can file an issue for it.

mumoshu avatar
mumoshu

Thanks! Looking forward to reading it

2019-07-30

Emanuel avatar
Emanuel

@Yannis is that not what {{ exec }} does? (I’m also new to Helmfile)

Yannis avatar
Yannis

@Emanuel I must have been blind! Let’s test it then


2019-07-29

2019-07-25

Yannis avatar
Yannis

Hey, just started to experiment with Helmfile and, besides some hiccups with helm-diff, I was wondering: is it possible to capture stdout from a command directly and use it in a template? E.g. something like {{ requiredEnv "PLATFORM_ENV" }}, maybe {{ getOutput "echo Hello" }}?

2019-07-24

Mical avatar
Mical

Would be nice to have condition: boolean on hooks, because sometimes you might want to run a hook only if installed: boolean is true on another release. Of course it can be wrapped in other logic, but it would be cleaner to have a flag for it

Mical avatar
Mical

bugs me that helm test --cleanup does not output the logs before removal on error

Emanuel avatar
Emanuel

Hiya. Currently I’m doing this:

releases:
- name: abc-namespace
  chart: stable/magic-namespace
  set:
  - name: tiller.image.tag
    value: v{{ .Values.tillerVersion }}
- name: xyz-namespace
  chart: stable/magic-namespace
  set:
  - name: tiller.image.tag
    value: v{{ .Values.tillerVersion }}

Is there a way to set tiller.image.tag as a default for all releases so that I don’t have to specify that set every time? I’ve seen you can use release templates, but I wonder if there’s an even more general way.

mumoshu avatar
mumoshu

@Emanuel Hey! Unfortunately there’s only a marginally better way:

templates:
  setTillerImage: &setTillerImage
    name: tiller.image.tag
    value: v{{ .Values.tillerVersion }}

releases:
- name: abc-namespace
  chart: stable/magic-namespace
  set:
  - <<: *setTillerImage
- name: xyz-namespace
  chart: stable/magic-namespace
  set:
  - <<: *setTillerImage
mumoshu avatar
mumoshu

would you mind opening a feature request if you need something better? thx!

Mical avatar
Mical

can someone tell me if i’m not understanding environments: correctly and how it applies to releases. i have 2 files in helmfile.d: istio.yaml

environments:
  staging:

releases:
  ...

app.yaml

environments:
  default:
  staging:

releases:
  ...

  • if i run helmfile sync i expect only app.yaml releases to be synced
  • if i run helmfile -e staging sync i expect both to be synced.
mumoshu avatar
mumoshu

@Mical Your assumption is legit but I’m skeptical if I implemented helmfile as such
  • if i run helmfile sync i expect only app.yaml releases to be synced
  • if i run helmfile -e staging sync i expect both to be synced.
mumoshu avatar
mumoshu

The general rule of Helmfile is that it has an empty default environment by default. And any reference to an undefined environment results in a failure.

That said, helmfile --env default diff or helmfile diff should process both yamls because they all refer to the default env. helmfile --env staging diff should also process both yamls as it refers to the staging env defined in both yamls
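A minimal sketch of that rule (file, release, and chart names here are hypothetical):

```yaml
# helmfile.d/app.yaml: declares both environments, so it participates in
# `helmfile sync` (implicit default env) and in `helmfile -e staging sync`.
environments:
  default:
  staging:

releases:
- name: app
  chart: stable/app

# A file that declared only `staging:` would have no `default` environment
# to refer to when run without -e, which per the rule above is an error.
```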

mumoshu avatar
mumoshu

What I don’t understand yet is this:

https://sweetops.slack.com/archives/CE5NGCB9Q/p1563977112071400

because if i replace default with dev in my case and run helmfile -e dev sync then it only applies to app.yaml releases (edited)

If you have dev defined in both app.yaml and istio.yaml, they should work on the dev environment.

What happened to istio.yaml in your case? You encountered no error and got default loaded into istio.yaml?

Mical avatar
Mical

My initial problem was that I did not want all releases to be part of the default environment. I only managed to do this with a conditional wrapped around everything which I didn’t really like. Now I’m only using explicit environments, trying to avoid default.

Mical avatar
Mical

In the message you linked to I was talking about replacing default with an explicit dev environment.

mumoshu avatar
mumoshu

Ah! So you wanted Helmfile to ignore all the releases defined in istio.yaml when in default environment

Mical avatar
Mical

Yes, but I realized that’s not how it works

mumoshu avatar
mumoshu

Yep. Then replacing default with anything else should work

mumoshu avatar
mumoshu

Good

mumoshu avatar
mumoshu

Btw I see emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates these days
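That pattern wraps the whole release entry in a template conditional, roughly like this (release and chart names hypothetical):

```yaml
environments:
  default:
  staging:

releases:
{{ if eq .Environment.Name "staging" }}
# only rendered when running `helmfile -e staging ...`
- name: istio
  chart: istio.io/istio
{{ end }}
- name: app
  chart: stable/app
```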

mumoshu avatar
mumoshu
dep locks should be per environment. · Issue #779 · roboll/helmfile

We use the pre-release version for dev/stg and release versions for other environments. Currently, helmfile deps only supports a single lockfile per helmfile. Something like this will not work with…

mumoshu avatar
mumoshu

cc/ @Shane

mumoshu avatar
mumoshu

So I’m considering adding first-class support for toggling releases per env (or even per helmfile/state values)

Mical avatar
Mical

That would be nice :+1:

mumoshu avatar
mumoshu
releases:
- name: istio
  chart: istio.io/istio
  if:
    environment:
    - dev
    - staging
mumoshu avatar
mumoshu

@Mical If you had an idea or suggestion on the syntax, I’d appreciate it if you could share it

Andrey Nazarov avatar
Andrey Nazarov

Sounds cool. What about just environment: like

releases:
- name: istio
  chart: istio.io/istio
  environment:
    - dev
    - staging

?

Mical avatar
Mical

Yeah, I would vote for a list of environments like @Andrey Nazarov proposed.

Shane avatar
Shane

Our layout is essentially a global helmfile (as our ops people, i.e. me, are lazy) plus a helmfile per team per environment. 90% of our global helmfile releases are installed in all environments, with the exception of jenkins (installed only in prod) and a test service (installed only in dev). Of the two options above I like the second one better, but I still imagine a better method for all of this logic has to exist.

Shane avatar
Shane

Possibly having a global helmfile, but where it includes sub helmfiles?

Shane avatar
Shane
environments:
  prod:
    values:
    - prod/_environment/values.yaml
    secrets:
    - prod/_environment/secrets.yaml
    helmfiles:
    - helmfiles/jenkins.yaml
Shane avatar
Shane

That way you define the helmfile snippets and include them. Since we already have a list of environments in the helmfile itself, I would rather see the environments be the first-class citizens.

Andrey Nazarov avatar
Andrey Nazarov
feat: First-class Support for Conditional Release · Issue #781 · roboll/helmfile

I'm seeing emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates for making releases optional for a subset of environments. To me, helmfile templates are n…

Mical avatar
Mical

don’t want to wrap istio releases with {{ if eq .Environment.Name "staging" }}

Mical avatar
Mical

or is it that default applies to all implicitly?

Andrey Nazarov avatar
Andrey Nazarov

Basically you want to set values under envs and use it like:

environments:
  dev:
    values:
     - foo: aaa
  prod:
    values:
     - foo: bbb

releases:
  something: {{ .Environment.Values.foo }}

for helmfile -e dev apply, something would be aaa; for helmfile -e prod apply, something would be bbb

Mical avatar
Mical

yeah that i know, was wondering about the default environment if that is applied to all releases

Mical avatar
Mical

because if i replace default with dev in my case and run helmfile -e dev sync then it only applies to app.yaml releases

Mical avatar
Mical

i expect i was abusing the default

Mical avatar
Mical

but thanks for input @Andrey Nazarov (chose some bad names in my example btw.. updated)

Andrey Nazarov avatar
Andrey Nazarov

Actually, I thought environments don’t affect releases until you define something like the mentioned {{ if eq .Environment.Name "app" }}. What you are talking about is something new to me. I mean, that helmfile -e dev syncs only the releases in app.yaml. Is it documented somewhere?

Mical avatar
Mical

Not sure if it is, but it works that way at least

Andrey Nazarov avatar
Andrey Nazarov

I hope @mumoshu will comment.

Mical avatar
Mical

but i need {{ if ne .Environment.Name "default" }} around my istio releases to exclude them from the default env

Andrey Nazarov avatar
Andrey Nazarov

Ah, then it’s expected. At first I thought you somehow managed to do it without if .Environment.Name, since you wrote “don’t want to wrap istio releases”

Andrey Nazarov avatar
Andrey Nazarov

Thanks for the clarification.

Mical avatar
Mical

I meant that i don’t want to wrap it with {{ if eq .Environment.Name "xxx" }}

Mical avatar
Mical

not {{ if ne .... }}

Mical avatar
Mical

would prefer not having to add the not equals conditional though

2019-07-23

Ben avatar

Do you have any knowledge of how people are doing continuous deployment (or nearly CD, with some manual gates before running sync) using helmfile? On the surface it seems simple: build, test, publish a docker image, update the helmfile with the new image tag (or chart version if publishing a new chart) and run sync. However, it gets much more complicated once you have to deploy to multiple environments/tenants and also stage helmfile changes. Just wondering if you’ve seen any good approaches?

Erik Osterman avatar
Erik Osterman

We run Helmfile under atlantis

Erik Osterman avatar
Erik Osterman

We centralize our Helmfiles in a repo

Erik Osterman avatar
Erik Osterman

Then use remote Helmfiles to pull them in pinned to a release (kind of like terraform)

Ben avatar

Thanks @Erik Osterman. The thing I’m struggling with is how to update centralised helmfiles in an automated way. At the moment I can only see one option: each microservice git project’s build pipeline creates/publishes a docker image, clones the helmfile repo, creates a branch, updates the chart version or values using a script (sed, etc), and commits the changes. Then another pipeline checks for helmfile repo changes and deploys to an environment based on the branch name (feature/nnn, release/nnn, master). Once the helmfile sync is finished, a tester tests the system in a test env and, if happy, merges the helmfile repo branch into master. The same pipeline detects the change in git and deploys to prod because the changes are now on master. Does that seem reasonable? How does your process differ from this?
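The “update the central helmfile with a script” step can be sketched like this (the repo layout, file names, and tag format are all assumptions for illustration, not a prescribed workflow):

```shell
#!/bin/sh
# CI sketch: after publishing a new image, bump the pinned tag in the
# central helmfile repo's values file, then commit on a branch for review.
set -e

# Stand-in for a checkout of the central helmfile repo.
mkdir -p /tmp/helmfile-repo/environments
cat > /tmp/helmfile-repo/environments/test.yaml <<'EOF'
image:
  tag: v1.0.0
EOF

NEW_TAG="v1.1.0"
# Replace the pinned tag in place (GNU sed shown; BSD sed needs `sed -i ''`).
sed -i "s/tag: v[0-9][0-9.]*/tag: ${NEW_TAG}/" /tmp/helmfile-repo/environments/test.yaml

# In a real pipeline: git checkout -b, git commit, git push, open a PR here.
cat /tmp/helmfile-repo/environments/test.yaml
```

A follow-up pipeline watching the helmfile repo would then run helmfile diff/sync against the branch’s target environment.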

Erik Osterman avatar
Erik Osterman

in our model, we run one repo per AWS account.

Erik Osterman avatar
Erik Osterman

we build one container per account based on geodesic

Erik Osterman avatar
Erik Osterman
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Erik Osterman avatar
Erik Osterman

we have a /conf/helmfiles/helmfile.yaml folder that looks like this

Erik Osterman avatar
Erik Osterman

# Ordered list of releases.
helmfiles:
  - path: "git::https://github.com/cloudposse/[email protected]/reloader.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/cert-manager.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/prometheus-operator.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/kiam.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/external-dns.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/aws-alb-ingress-controller.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/kube-lego.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/nginx-ingress.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/heapster.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/dashboard.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/[email protected]/codefresh-account.yaml?ref=0.48.0"
Erik Osterman avatar
Erik Osterman

this allows us to surgically version pin individual accounts

Erik Osterman avatar
Erik Osterman

our docker images inherit from geodesic like this: https://github.com/cloudposse/testing.cloudposse.co/blob/master/Dockerfile

cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman avatar
Erik Osterman

testing.cloudposse.co represents one AWS account.

Erik Osterman avatar
Erik Osterman

we do the same sort of thing for prod.cloudposse.co and staging.cloudposse.co

Erik Osterman avatar
Erik Osterman

in our case, we do this for our customers so our *.cloudposse.co repos are out of date

Ben avatar

Thanks @Erik Osterman, Makes sense. Like the idea of specifying sub-helmfile versions. Do you manually update the helmfiles whenever a chart or docker image changes?

Erik Osterman avatar
Erik Osterman

Yes, everything is deliberate

Erik Osterman avatar
Erik Osterman

Bleeding edge is overrated :-)

Erik Osterman avatar
Erik Osterman

Pinning to subhelmfiles has been awesome. We don’t need to worry about breaking changes or forcing updates across environments.

Andrey Nazarov avatar
Andrey Nazarov

@Ben We’ve got a single helmfile with all the releases for all the environments defined. It’s placed in a separate repo, not within the main codebase. Since we’ve got a set of stages (dev, several stagings, prod) and per-client installations, we are leveraging the environments: feature extensively to keep it DRY. The workflow is pretty manual indeed: one has to set a new version of the docker image and/or helm chart for a certain environment. To control things, a pull-request/merge-request workflow could be considered. Not the best approach for sure, but it kinda works for us right now.

Ben avatar

Thanks @Andrey Nazarov Sounds similar to what I was envisioning. I think we’ll go down this path and if the manual updates become cumbersome we’ll try and automate them.

Andrey Nazarov avatar
Andrey Nazarov

@Erik Osterman Out of curiosity, what is this Atlantis thing?

Erik Osterman avatar
Erik Osterman
Terraform Pull Request Automation | Atlantis

Atlantis: Terraform Pull Request Automation

Erik Osterman avatar
Erik Osterman

it lets us run plan (diff) and apply using github comments.

Andrey Nazarov avatar
Andrey Nazarov

Thanks. It looks quite interesting. Am I right that you use TF to execute helmfile?

Erik Osterman avatar
Erik Osterman

No, Atlantis can run freestyle steps

Andrey Nazarov avatar
Andrey Nazarov

Cool, thank you.

2019-07-22

eviln1 avatar
eviln1

hey, just joined the slack, but I’ve been using helmfile for 6+ months; thanks a lot for making it

eviln1 avatar
eviln1

i’ve got a quick question: are environments: [] propagated to child helmfiles: []? looks like they aren’t from my experiments, and not sure if it’s by design

mumoshu avatar
mumoshu
How to inherit values from the parent to the child helmfile? · Issue #725 · roboll/helmfile

We are trying to use helmfile in our pipeline. For this we hoped to use a parent helmfile (with repository configuration and helm-defaults) and subhelmfiles that INHERIT from this master helmfile. …

mumoshu avatar
mumoshu

if you want an easier way to inherit all the values, it isn’t implemented yet, but here’s the feature request https://github.com/roboll/helmfile/issues/762

Allow opting in for inheriting all the values to sub-helmfile · Issue #762 · roboll/helmfile

Helmfile doesn't inherit values to sub-helmfiles by default today. It does support selectively inheriting some values (#725), but there's no easy way to inherit all the values. Perhaps it wo…

eviln1 avatar
eviln1

also trying stuff out with bases: [], but i’m probably getting something wrong

mumoshu avatar
mumoshu

bases implicitly inherit all the values from the parent and sub-helmfiles. but there’s a plan to make it explicit

https://github.com/roboll/helmfile/issues/688

breaking: stop `bases` inheriting parents' values by default · Issue #688 · roboll/helmfile

Extracted from #347 (comment) We've introduced bases a month ago via #587. I'd like to make this breaking change (perhaps the first, intended breaking change in helmfile) before too many peo…

Ben avatar

Hi, Hoping someone knows off the top of their head why this wouldn’t work

templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.1
    values:
      - app
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml

releases:
  - name: my-dp
    <<: *dataproduct

  - name: another-dp
    <<: *dataproduct
mumoshu avatar
mumoshu

mind sharing the error message you’re seeing?

mumoshu avatar
mumoshu

nm im reading

Ben avatar

Specifically

- app:
  name:
    {{`"{{.Release.Name}}"`}}
Ben avatar

It seems to be an issue with nesting values because if I change it to

values:
  - appName: {{`"{{.Release.Name}}"`}}

it doesn’t fail

Ben avatar

The error is YAML parse error on cdp-chart/templates/rbac.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Release.Name":interface {}(nil)}

mumoshu avatar
mumoshu

@Ben could you copy-paste your actual template once again? I see a trailing : is missing after app in https://sweetops.slack.com/archives/CE5NGCB9Q/p1563858986015700

Hi, Hoping someone knows off the top of their head why this wouldn’t work

templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.1
    values:
      - app
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml

releases:
  - name: my-dp
    <<: *dataproduct

  - name: another-dp
    <<: *dataproduct
mumoshu avatar
mumoshu

anyways this seems to work without emitting such error on my machine:

templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.1
    values:
      - app:
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml

releases:
- name: myapp
  chart: stable/mysql
  namespace: myapp
  <<: *dataproduct
Ben avatar

@mumoshu Sorry, yes the original has the colon

environments:
  default:
    values:
    - default.yaml
  production:
    values:
    - production.yaml

templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.0
    values:
      - app:
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml

releases:
  - name: california-sos
    <<: *dataproduct

  - name: rdc
    <<: *dataproduct
Ben avatar
helm version
Client: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
mumoshu avatar
mumoshu

@Ben Which helmfile version are you using?

Ben avatar
> helmfile -v
helmfile version v0.80.1
mumoshu avatar
mumoshu

thx

mumoshu avatar
mumoshu

I suspect something is wrong with your cdp-chart/templates/rbac.yaml. Could you share it?

Ben avatar
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - "{{ .Values.app.name }}-config"
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - secrets
  resourceNames:
  - "{{ .Values.app.name }}-secret"
  - "gitlab-registry"
  - "{{ .Values.app.name }}-tls"
  verbs:
  - get
{{- end -}}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ .Values.app.name }}
subjects:
- kind: ServiceAccount
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}

mumoshu avatar
mumoshu

Also - enabling debug logs like helmfile --log-level=debug allows you to see the yaml after rendering the template, which would help debugging

Ben avatar

Actually the error says it occurs in serviceaccount.yaml, which looks like

{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
  labels:
    app: {{ .Values.app.name }}
{{- end -}}
mumoshu avatar
mumoshu

It seems to be an issue with nesting values because if I change it to

values:
  - appName: {{`"{{.Release.Name}}"`}}

it doesn’t fail

mumoshu avatar
mumoshu

indeed this can be a bug in helmfile!

mumoshu avatar
mumoshu

ahh it isn’t rendering nested strings.. i’ll open an issue for that

mumoshu avatar
mumoshu

@Ben Is this a blocker to you? (I’ll prioritize this accordingly if so)

Ben avatar

Thanks @mumoshu It’s not a blocker but would definitely remove a lot of boilerplate config for us

mumoshu avatar
mumoshu

Thanks! I’ll try to fix it asap. Here’s the issue https://github.com/roboll/helmfile/issues/769

Release template in nested string isn't rendered · Issue #769 · roboll/helmfile

This is reported in our official Slack channel: https://sweetops.slack.com/archives/CE5NGCB9Q/p1563863513024900 This doesn't work, leaving "{{.Release.Name}}" not rendered: releases: …

Ben avatar

Much appreciated @mumoshu

2019-07-18

Mical avatar
Mical

can i get the exit code from hooks to propagate so that helmfile sync doesn’t exit with 0 on failure?

mumoshu avatar
mumoshu

hey!

generally helmfile exits with 1 when one of your hooks fails.

for example, running helmfile sync against this helmfile.yaml exits with 1:

releases:
- name: mysql1
  chart: stable/mysql
  namespace: mysql
  hooks:
  - name: myhook
    events: ["presync"]
    command: "sh"
    args:
    - -c
    - echo whatever; exit 1

does this help?

Mical avatar
Mical

hm, it doesn’t exit with 1 if my helm tests fail. will try to get something that can be reproduced

mumoshu avatar
mumoshu

oh. which hook events are you using?

Mical avatar
Mical

postsync

mumoshu avatar
mumoshu

ah, i’ve reproduced it. perhaps i made it ignore non-zero exit codes for postsync hooks, thinking that would be nice

mumoshu avatar
mumoshu

i’ll take this as a chance to redesign it

mumoshu avatar
mumoshu

helmfile processes your releases in parallel. do you want all the ongoing releases to be immediately canceled if one of them failed in postsync?

mumoshu avatar
mumoshu

or complete all the releases anyway and fail helmfile itself only when one or more releases failed in postsync?

Mical avatar
Mical

for me the latter would suffice, since there might be cases where you would want other releases to be processed but as long as we can tell if all passed or not i’m happy

Mical avatar
Mical

but in the long run it might be nice to have that control yourself via configuration

Mical avatar
Mical

Is there an easy way to patch this manually until there’s a new release available? It’s a deal breaker since I want to use helmfile in our ci/cd pipeline and I only have 2 more weeks until vacation

Mical avatar
Mical

since you said that you made it ignore non-zero i was thinking there might be an easy way to fork helmfile and make it not ignore it

mumoshu avatar
mumoshu

yeah probably. i’ll take a look now!

Mical avatar
Mical

thank you

mumoshu avatar
mumoshu

Just change this line to results <- syncResult{errors: []*ReleaseError{err}}

https://github.com/roboll/helmfile/blob/b2a6231dcffef09d9d4045466fe3e953ca718f95/pkg/state/state.go#L425


Mical avatar
Mical

thanks man

mumoshu avatar
mumoshu

ah wait we need a bit more work

Mical avatar
Mical

ok

mumoshu avatar
mumoshu

Try changing these lines to:

relErrs := []*ReleaseError{}
if relErr != nil {
    relErrs = append(relErrs, relErr)
}

if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
    st.logger.Warnf("warn: %v\n", err)
    relErrs = append(relErrs, &ReleaseError{err})
}

if len(relErrs) > 0 {
    results <- syncResult{errors: relErrs}
} else {
    results <- syncResult{}
}

https://github.com/roboll/helmfile/blob/b2a6231dcffef09d9d4045466fe3e953ca718f95/pkg/state/state.go#L418-L426


Mical avatar
Mical

thanks i’ll give it a try

mumoshu avatar
mumoshu

make install is handy if you want to install the helmfile binary built from your local source into go/bin (usually ~/go/bin)

Mical avatar
Mical

i’m on the hello world level of go so that’s helpful

mumoshu avatar
mumoshu

helmfile -v to see the version number to verify you’re running the correct binary

mumoshu avatar
mumoshu

then ensuring you have go 1.12.x+ on your machine will also help. for me it’s like:

$ go version
go version go1.12.5 darwin/amd64
Mical avatar
Mical

yeah i had go 1.11.x so i’m updating

mumoshu avatar
mumoshu

great

mumoshu avatar
mumoshu

if you’re using make install, ensure your $PATH contains the go/bin in it

Mical avatar
Mical

s/updating/upgrading

Mical avatar
Mical

it does :+1:

Mical avatar
Mical
pkg/state/state.go:425:46: cannot use err (type error) as type *ReleaseSpec in field value
pkg/state/state.go:425:46: too few values in &ReleaseError literal
mumoshu avatar
mumoshu

give me a minute

mumoshu avatar
mumoshu

try changing relErrs = append(relErrs, &ReleaseError{err}) to relErrs = append(relErrs, newReleaseError(release, err))

// updated

Mical avatar
Mical

newReleaseError is not a type

mumoshu avatar
mumoshu

argh! it should be newReleaseError(release, err)

Mical avatar
Mical

thanks that compiled

Mical avatar
Mical
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb926e7]
mumoshu avatar
mumoshu

would you mind giving me the rest of the message, especially the first several lines of stack trace?

Mical avatar
Mical

full stack trace is too long to post but i’ll paste it in chunks

Mical avatar
Mical
goroutine 1 [running]:
github.com/urfave/cli.HandleAction.func1(0xc00023ce28)
        /home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:474 +0x287
panic(0xca5980, 0x1570ff0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/roboll/helmfile/pkg/app.context.wrapErrs(0xc00038c120, 0xc00000b0e0, 0xc000388a00, 0x3, 0x4, 0x15a2d70, 0x0)
        /home/zmiccar/src/helmfile/pkg/app/app.go:627 +0x1c7
github.com/roboll/helmfile/pkg/app.context.clean(0xc00038c120, 0xc00000b0e0, 0xc000388a00, 0x3, 0x4, 0x3, 0x4)
        /home/zmiccar/src/helmfile/pkg/app/app.go:614 +0x13e
github.com/roboll/helmfile/pkg/app.(*App).visitStates.func1(0xc0000d5da9, 0xa, 0xc000038700, 0x35, 0x0, 0x0)
        /home/zmiccar/src/helmfile/pkg/app/app.go:334 +0x878
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles.func1(0xc00003c2a0, 0x2c)
        /home/zmiccar/src/helmfile/pkg/app/app.go:220 +0x9d
github.com/roboll/helmfile/pkg/app.(*App).within(0xc00038c120, 0xc0000d5da0, 0x8, 0xc00004c080, 0xc000258148, 0x2)
        /home/zmiccar/src/helmfile/pkg/app/app.go:181 +0x3f6
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles(0xc00038c120, 0xc0000d5da0, 0x13, 0xc000090180, 0x0, 0xdbb2f4)
        /home/zmiccar/src/helmfile/pkg/app/app.go:214 +0x29f
github.com/roboll/helmfile/pkg/app.(*App).visitStates(0xc00038c120, 0xc0000d5da0, 0x13, 0x15a2d70, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0000386c0, ...)
        /home/zmiccar/src/helmfile/pkg/app/app.go:255 +0xd4
github.com/roboll/helmfile/pkg/app.(*App).visitStates.func1(0xdad353, 0xd, 0xc0000be3f0, 0x23, 0x0, 0x0)
        /home/zmiccar/src/helmfile/pkg/app/app.go:314 +0x5ad
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles.func1(0x0, 0xc0002585b0)
        /home/zmiccar/src/helmfile/pkg/app/app.go:220 +0x9d
github.com/roboll/helmfile/pkg/app.(*App).within(0xc00038c120, 0xda48d4, 0x1, 0xc000388780, 0xc0002587a8, 0x2)
        /home/zmiccar/src/helmfile/pkg/app/app.go:162 +0x725
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles(0xc00038c120, 0x0, 0x0, 0xc0000d15c0, 0x20, 0xcf6da0)
        /home/zmiccar/src/helmfile/pkg/app/app.go:214 +0x29f
Mical avatar
Mical
<http://github.com/roboll/helmfile/pkg/app.(*App).visitStates(0xc00038c120|github.com/roboll/helmfile/pkg/app.(*App).visitStates(0xc00038c120>, 0x0, 0x0, 0x15a2d70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /home/zmiccar/src/helmfile/pkg/app/app.go:255 +0xd4
<http://github.com/roboll/helmfile/pkg/app.(*App).VisitDesiredStatesWithReleasesFiltered(0xc00038c120|github.com/roboll/helmfile/pkg/app.(*App).VisitDesiredStatesWithReleasesFiltered(0xc00038c120>, 0x0, 0x0, 0xc0001c5f60, 0xc000258901, 0xc00038a600)
        /home/zmiccar/src/helmfile/pkg/app/app.go:403 +0x426
<http://github.com/roboll/helmfile/pkg/app.(*App).ForEachState(0xc00038c120|github.com/roboll/helmfile/pkg/app.(*App).ForEachState(0xc00038c120>, 0xc00038a600, 0xc0001c5f50, 0xd89580)
        /home/zmiccar/src/helmfile/pkg/app/app.go:349 +0x81
<http://github.com/roboll/helmfile/pkg/app.(*App).Sync(0xc00038c120|github.com/roboll/helmfile/pkg/app.(*App).Sync(0xc00038c120>, 0xf50940, 0xc0001c5f50, 0xc0001c5f50, 0xcd1560)
        /home/zmiccar/src/helmfile/pkg/app/app.go:125 +0x6e
main.main.func7(0xc00038c120, 0xc0002fd400, 0x0, 0xc0001c5f30, 0x0)
        /home/zmiccar/src/helmfile/main.go:272 +0x6d
main.action.func1(0xc0002fd400, 0x0, 0x0)
        /home/zmiccar/src/helmfile/main.go:548 +0x121
reflect.Value.call(0xc58980, 0xc0001c5b00, 0x13, 0xda52b3, 0x4, 0xc000258dc8, 0x1, 0x1, 0xc0000d6000, 0x411d03, ...)
        /usr/local/go/src/reflect/value.go:447 +0x461
reflect.Value.Call(0xc58980, 0xc0001c5b00, 0x13, 0xc000258dc8, 0x1, 0x1, 0xc0002efa00, 0xc0002efa48, 0x140)
        /usr/local/go/src/reflect/value.go:308 +0xa4
[github.com/urfave/cli.HandleAction(0xc58980](http://github.com/urfave/cli.HandleAction\(0xc58980), 0xc0001c5b00, 0xc0002fd400, 0x0, 0x0)
        /home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:483 +0x1ff
[github.com/urfave/cli.Command.Run(0xda5963](http://github.com/urfave/cli.Command.Run\(0xda5963), 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0xdd089d, 0x43, 0x0, ...)
        /home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/command.go:186 +0x8d1
<http://github.com/urfave/cli.(*App).Run(0xc00009ca80|github.com/urfave/cli.(*App).Run(0xc00009ca80>, 0xc0000bc020, 0x2, 0x2, 0x0, 0x0)
        /home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:237 +0x601
main.main()
        /home/zmiccar/src/helmfile/main.go:397 +0x2835
mumoshu avatar
mumoshu

ahh, if relErr == nil { must be if relErr != nil {

mumoshu avatar
mumoshu

to wrap up, you should change these lines to:

relErrs := []*ReleaseError{}
if relErr != nil {
    relErrs = append(relErrs, relErr)
}

if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
    st.logger.Warnf("warn: %v\n", err)
    relErrs = append(relErrs, newReleaseError(release, err))
}

if len(relErrs) > 0 {
    results <- syncResult{errors: relErrs}
} else {
    results <- syncResult{}
}

https://github.com/roboll/helmfile/blob/b2a6231dcffef09d9d4045466fe3e953ca718f95/pkg/state/state.go#L418-L426

Mical avatar
Mical

that worked party_parrot

Mical avatar
Mical

patch, in case someone is interested in the temporary fix:

diff --git a/pkg/state/state.go b/pkg/state/state.go
index 99818cd..b0b0849 100644
--- a/pkg/state/state.go
+++ b/pkg/state/state.go
@@ -415,14 +415,20 @@ func (st *HelmState) SyncReleases(affectedReleases *AffectedReleases, helm helme
 					}
 				}
 
-				if relErr == nil {
-					results <- syncResult{}
-				} else {
-					results <- syncResult{errors: []*ReleaseError{relErr}}
+				relErrs := []*ReleaseError{}
+				if relErr != nil {
+					relErrs = append(relErrs, relErr)
 				}
 
 				if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
 					st.logger.Warnf("warn: %v\n", err)
+					relErrs = append(relErrs, newReleaseError(release, err))
+				}
+
+				if len(relErrs) > 0 {
+					results <- syncResult{errors: relErrs}
+				} else {
+					results <- syncResult{}
 				}
 
 				if _, err := st.triggerCleanupEvent(release, "sync"); err != nil {

2019-07-15

mumoshu avatar
mumoshu

Thanks Erik!

@Mical If I have anything to add, there would be only two things:

(1) Many orgs, including my company, use helmfile in production:

https://github.com/roboll/helmfile/blob/master/USERS.md

(2) Even though it is pre-1.0, helmfile has never introduced breaking changes to existing features for a year or so.

We do introduce breaking changes to experimental features, but even those happen only after prior discussion with the known users of the feature.

roboll/helmfile

Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.


2019-07-12

Mical avatar
Mical

@Erik Osterman that’s a valid point.

2019-07-11

Andrey Nazarov avatar
Andrey Nazarov

One more question about {{'{{}}'}} syntax. Say, I set something like this in helmfile.yaml

values:
  - hostname: {{`{{.Release.Namespace}}`}}.my-domain.com

But this causes an error during helmfile lint: reading document at index 1: yaml: line 241: did not find expected key. Is this expected behaviour? Btw, I’m on quite an old version: v0.69.0

Andrey Nazarov avatar
Andrey Nazarov

Answering my own question. Probably somebody will find this useful. That was explained previously here: https://sweetops.slack.com/archives/CE5NGCB9Q/p1560168577052300. I wasn’t very attentive at first. But got the answer right after sending the question.

chart: {{` {{  .Environment.Values | get (printf "%sVersion" "tpsvc-config") "" | eq "" | ternary "../.." "talend" }} `}}

evaluates to

chart: {{  .Environment.Values | get (printf "%sVersion" "tpsvc-config") "" | eq "" | ternary "../.." "talend" }}

which looks like chart: {{ whatever }} to the YAML parser — and the leading { makes YAML try to parse the value as a flow mapping, hence the error.
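For reference, one common way to keep the intermediate YAML valid is to quote the templated value, since an unquoted scalar starting with { is otherwise parsed as a YAML flow mapping. A minimal sketch (the domain is illustrative, not from the thread):

```yaml
values:
  # The backtick-quoted raw string survives helmfile's own template pass,
  # and the double quotes keep the rendered {{ ... }} a plain YAML string.
  - hostname: "{{`{{ .Release.Namespace }}`}}.my-domain.com"
```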

Mical avatar
Mical

does helmfile support creating standalone kubernetes resources? for example, if i add istio as a release and want to install a gateway, but without it being contained in a helm chart

Erik Osterman avatar
Erik Osterman

@Mical Helmfile just calls helm. But you’re in luck…

Erik Osterman avatar
Erik Osterman

There is a “raw” chart that does exactly what you want

Erik Osterman avatar
Erik Osterman
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman avatar
Erik Osterman

And we used it also to install an istio gateway
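For illustration, a minimal helmfile release using the incubator "raw" chart to install an istio Gateway might look like the sketch below. The repository URL, release name, and Gateway spec are assumptions for the example, not taken from the thread:

```yaml
repositories:
  # incubator repo hosting the "raw" chart (URL as of 2019; may have moved since)
  - name: incubator
    url: https://kubernetes-charts-incubator.storage.googleapis.com

releases:
  - name: istio-ingress-gateway   # hypothetical release name
    namespace: istio-system
    chart: incubator/raw
    values:
      # The raw chart renders whatever manifests are listed under `resources`,
      # so arbitrary Kubernetes objects can be managed as a helm release.
      - resources:
          - apiVersion: networking.istio.io/v1alpha3
            kind: Gateway
            metadata:
              name: ingress-gateway
            spec:
              selector:
                istio: ingressgateway
              servers:
                - port:
                    number: 80
                    name: http
                    protocol: HTTP
                  hosts:
                    - "*"
```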

Erik Osterman avatar
Erik Osterman
cloudposse/example-app

Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app

Mical avatar
Mical

@Erik Osterman thanks I’ll check it out!

Mical avatar
Mical

What’s the roadmap for 1.0.0? We’re looking at different tools to use for decentralizing helm chart development while having a centralized way to test, package and continuously deploy systems containing a variety of charts. So far we’ve been looking at flux and helmfile. Flux is great but very opinionated, while helmfile gives us the freedom of control since it is stateless. The fact that helmfile is still <1.0 is an issue for us for obvious reasons.

Erik Osterman avatar
Erik Osterman

I think it’s unfair to judge based on the release version being pre-1.0. Terraform 0.12 is in use in production by enterprises, banks, etc. Obviously the uptick in traction is a wee bit less for helmfile. From my POV, better indicators are how often the software is released (i.e. maintained), how responsive the maintainers are to issues, and the overall traction of the project/community.

Erik Osterman avatar
Erik Osterman

when you look at these characteristics, then helmfile shines.

Erik Osterman avatar
Erik Osterman

btw, @mumoshu has now written a HelmfileOperator so you can do flux-like things with helmfile (that is a proof of concept, though).

Erik Osterman avatar
Erik Osterman
mumoshu/helmfile-operator

Kubernetes operator that continuously syncs any set of Chart/Kustomize/Manifest fetched from S3/Git/GCS to your cluster - mumoshu/helmfile-operator
