#helmfile (2019-10)

https://github.com/helmfile/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles

Archive: https://archive.sweetops.com/helmfile/

2019-10-02

Martin Devlin avatar
Martin Devlin

Hi everyone! We’re using Azure Container Registries as our helm repo, as per https://docs.microsoft.com/en-us/azure/container-registry/container-registry-helm-repos

Use Helm repositories in Azure Container Registry

Learn how to use a Helm repository with Azure Container Registry to store charts for your applications

Martin Devlin avatar
Martin Devlin

The integration with helm works fine by adding credentials to ~/.helm/repository/repositories.yaml but it seems helmfile ignores this file so we have to add credentials to helmfile.yaml thus:

# Advanced configuration: You can setup basic or tls auth
- name: roboll
  url: http://roboll.io/charts
  certFile: optional_client_cert
  keyFile: optional_client_key
  username: optional_username
  password: optional_password

Is there any way to get helmfile to use ~/.helm/repository/repositories.yaml or any plans to do so?

Martin Devlin avatar
Martin Devlin

Adding --skip-deps seems to resolve this

mumoshu avatar
mumoshu

@Martin Devlin Hey!

You seem to have found the answer but yeah, i’d recommend --skip-deps or just not managing the repo from helmfile (just omit it from repositories:)

mumoshu avatar
mumoshu


Now add your Azure Container Registry Helm chart repository to your Helm client using the az acr helm repo add command. This command gets an authentication token for your Azure container registry that is used by the Helm client

After reading this I think there’s no static “password” that can be set in helmfile.yaml’s repositories[].password in this scenario.

“This command gets an authentication token for your Azure container registry” sounds like it is generating a temporary, short-lived token that should be regenerated each time you run helm/helmfile

mumoshu avatar
mumoshu

@Martin Devlin If you got some time, I’d greatly appreciate it if you could submit a PR to add some guidance on ACR as a helm repo in the context of helmfile (perhaps a few lines in README.md would suffice)

Martin Devlin avatar
Martin Devlin

@mumoshu Thanks for the reply. “sounds like it is generating a temporary, short life token that should be regenerated each time you run helm/helmfile”. The az acr helm repo add command adds a JWT token to ~/.helm/repository/repositories.yaml

Martin Devlin avatar
Martin Devlin

This works fine with helm commands, but not helmfile, which is a little frustrating as it’s a neat way to avoid managing secrets. As I say, --skip-deps makes the problem go away so we’re using that for now.
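
For reference, the resulting workflow might look like this (a sketch; the registry name is hypothetical, and az acr helm repo add is the command from the Microsoft docs linked above):

az acr helm repo add --name myregistry   # writes a short-lived JWT into ~/.helm/repository/repositories.yaml
helmfile --skip-deps apply               # skips helmfile's own repo management, so helm's repo config (and its token) is used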

Martin Devlin avatar
Martin Devlin

I can make a note of that in README.md as you requested

mbilliet avatar
mbilliet

Anyone know how I can configure helmfile to install the latest pre-release versions of my charts? Tried adding --devel to args and omitting version from the releases, which does what I want when I helmfile diff, but when I helmfile apply it will try to install the latest release version.

starets avatar
starets

theoretically, your approach should work. might you try running helmfile --log-level debug apply just to identify the root cause?

mbilliet avatar
mbilliet

Thanks for the tip.

Seems like the helmfile apply diff step ignores the args in my helmfile:

exec: helm diff upgrade --reset-values --allow-unreleased account-service xxx/account-service --tiller-namespace acc --kube-context acc --values /var/folders/9h/1pr987354xj3n7y3tbcy5f2d606v15/T/values267183981 --detailed-exitcode

Whereas helmfile diff does not:

exec: helm diff upgrade --reset-values --allow-unreleased account-service xxx/account-service --tiller-namespace acc --kube-context acc --values /var/folders/9h/1pr987354xj3n7y3tbcy5f2d606v15/T/values801725413 --devel

Using helmfile sync instead of helmfile apply ended up solving my problem.

starets avatar
starets

seems to be a bug. Worth creating an issue on github, imo.

mumoshu avatar
mumoshu

@mbilliet @starets Hey! Have you tried setting devel: true in your releases in helmfile.yaml like this?

releases:
- name: myapp
  devel: true
mumoshu avatar
mumoshu

--args is very hard to reason about and use - i’d recommend declaring everything in your helmfile.yaml

Bart M. avatar
Bart M.

anyone know the exact syntax of the --state-values-set flag? I can’t seem to get it to work

starets avatar
starets

haven’t used it, but judging by https://github.com/roboll/helmfile/blob/master/main.go#L64 and https://github.com/roboll/helmfile/blob/master/main.go#L479-L489

it’s exactly what’s stated in Usage:

(can specify multiple or separate values with commas: key1=val1,key2=val2)
roboll/helmfile

Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.


Bart M. avatar
Bart M.

well - but the “key=value” doesn’t seem to work

Bart M. avatar
Bart M.

I have an ‘image.tag’ in my helm chart, and when I define “image.tag=foo” - it still outputs the wrong image tag in my deployment if I try this out with helmfile template

Bart M. avatar
Bart M.

I tried prefixing with the chartname, release name, Values, bare, …

Bart M. avatar
Bart M.

also, if I try with --log-level debug, I nowhere see the value I try to override

Bart M. avatar
Bart M.

hmm ok, these only seem to be available in the helmfile files themselves, if I want to propagate them to the charts I have to add them to the values loaded by helmfile…

Bart M. avatar
Bart M.

there’s also not much documentation for this

Bart M. avatar
Bart M.

we use helmfile to deploy all our envs, but we have envs per dev team (about 12), all pretty much with the same config except for some small uniform changes that I would expect to be able to influence with that flag, but I can’t seem to get it to work

mumoshu avatar
mumoshu

--state-values-set should be used like --state-values-set mykey=myvalue so that it becomes available in your helmfile.yaml like:

releases:
- name: myapp
  values:
  - foo: {{ .Values.mykey }}
mumoshu avatar
mumoshu

Or

helmfiles:
- path: path/to/subhelmfile.yaml
  values:
  - foo: {{ .Values.mykey }}
mumoshu avatar
mumoshu

The important point is that any state values are not implicitly propagated to any releases or any sub-helmfiles.

Bart M. avatar
Bart M.

yeah I figured that out by now

Bart M. avatar
Bart M.

thanks anyway!


2019-10-03

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Any tips & tricks to share regarding running Helmfile from Atlantis? I’m looking into cross-account auth with EKS and it gets dicey.

mumoshu avatar
mumoshu

@Vlad Ionescu (he/him) Hey! I’ve tried calling helmfile from atlantis with my own “custom stages” feature a year ago

https://github.com/cloudposse/atlantis/pull/20

feat/wip: Custom stages by mumoshu · Pull Request #20 · cloudposse/atlantis

This is currently an alpha-level work of what the subject states. I have not tried to think throughout all the edge-cases, but it should work in normal cases. I want to run arbitrary helmfile comma…

mumoshu avatar
mumoshu

Does atlantis officially have such feature today?

2019-10-09

2019-10-10

Benn Sundsrud avatar
Benn Sundsrud

I’m trying to manage our environments via helmfile and I’m running into issues. I’m trying to do common helmfiles (with release definitions) in helmfiles/ and have config in config/<env>/<proj>/*.values.yaml. There’s a base.yaml which defines repos, helm defaults, and environments (prod + stage for now). My helmfile.yaml includes the base via bases: and then has a helmfiles: directive for helmfiles/*.helmfile.yaml. The individual app helmfiles also include the base. Any values I set on environments aren’t getting through to the app helmfiles. I’ve tried overriding them in the helmfiles: section but that just ends up failing to render because it can’t find the environment value there either. If I remove the bases: from helmfile.yaml, though, and just paste the contents inline, it works. There seems to be some weird interaction with bases that I’m not understanding

2019-10-11

Gourav avatar

I am trying out helmfile. In my base helmfile.yaml I have multiple releases, and I am working on only one, say cert-manager. While applying the helmfile, all the applications get deployed.

helmfiles:
  - "releases/prometheus-operator.yaml"
  - "releases/cluster-autoscaler.yaml"
  - "releases/cert-manager.yaml"

I am using the below command

helmfile --file helmfile-preprod.yaml -e preprod apply

Is there any way to pass an argument which only deploys cert-manager.yaml? Kindly suggest

Gourav avatar

Using

--selector value, -l value

we can run a particular release

Alex Siegman avatar
Alex Siegman

You can also run the release helmfile directly with the --file argument

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gourav or you can use the --selector argument
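
Either form might look like this for Gourav’s layout (a sketch; it assumes the release defined in releases/cert-manager.yaml is named cert-manager, and uses helmfile’s built-in name selector label):

# run just the one sub-helmfile
helmfile --file releases/cert-manager.yaml -e preprod apply

# or filter releases from the top-level helmfile
helmfile --file helmfile-preprod.yaml -e preprod --selector name=cert-manager apply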

2019-10-13

Gourav avatar

@Erik Osterman (Cloud Posse) Thanks Erik

2019-10-14

Gourav avatar

Need some suggestions: in the section below there’s an env variable KIAM_HOST_CERT_PATH; if I am not passing any value to it, it will pick up the default one.

extraHostPathMounts:
          - name: "ssl-certs"
            mountPath: "/etc/ssl/certs"
            hostPath: '{{ env "KIAM_HOST_CERT_PATH" | default "/etc/ssl/certs" }}'
            readOnly: true

I wanted to understand where I need to define this variable KIAM_HOST_CERT_PATH and pass its value, as I do not want to change the default values but want to use a passed value. Is there any example of how to achieve this?

zeid.derhally avatar
zeid.derhally

that’s an environment variable

zeid.derhally avatar
zeid.derhally

you can define it before you run helmfile or on the same line

KIAM_HOST_CERT_PATH="/path/to/certs" helmfile
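
Equivalently, the variable can be exported for the whole shell session; the {{ env "KIAM_HOST_CERT_PATH" | default "/etc/ssl/certs" }} expression above then resolves to the exported value instead of the default (the apply subcommand here is illustrative):

export KIAM_HOST_CERT_PATH="/path/to/certs"
helmfile apply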

Gourav avatar

@zeid.derhally Thanks.. i will try.

Ben avatar

Hi all, I’m attempting to override release values so that devs can quickly change image.tag on their local env for testing. The current pattern is like so;

bases:
  - ../../env/helmfile-environments.yaml

releases:
  - name: myrel
    namespace: dataproduct
    chart: ../../../../charts/data-product-service-chart
    force: true
    atomic: true
    values:
      - image:
          repository: myrepo.com/rel/myrel
          tag: master-1.1.123
      - ../../env/{{ .Environment.Name }}.yaml

and we want to run helmfile like this;

helmfile -e minikube -f helmfile-myrel.yaml --state-values-set image.tag=dev apply

However, the image.tag value remains as master-1.1.123

mumoshu avatar
mumoshu

Would you make it possible to set a default image.tag per environment, too?

Assuming so, I’d guess it should be

releases:
  - name: myrel-{{ env "BRANCH_NAME" | default "master" }}
    namespace: dataproduct-{{ env "BRANCH_NAME" | default "master" }}
    chart: ../../../../charts/data-product-service-chart
    force: true
    atomic: true
    values:
      - image:
          repository: myrepo.com/rel/myrel
          tag: major-1.1.123
      - ../../env/{{ .Environment.Name }}.yaml
      - {{ tag := get "image.tag" "" .Values }}{{ if $tag }}{"image": {"tag": {{ $tag | quote }} } } {{ end }}
mumoshu avatar
mumoshu

The important point here is that state values are not automatically propagated to releases as state values and chart values are completely different things

Ben avatar

We don’t really want the image.tag to vary between environments so setting a default for each environment introduces redundant environment files (essentially boilerplate). Regardless, I tried adding image.tag to env/minikube.yaml and adding your suggested code above but I got the following error;

template: stringTemplate:17: function "tag" not defined
mumoshu avatar
mumoshu

ah, perhaps it should be {{ $tag := get not {{ tag := get

mumoshu avatar
mumoshu

and as you aren’t going to define environmental defaults, values can just be

    values:
      - image:
          repository: myrepo.com/rel/myrel
          tag: {{ .Values | get "image.tag" "major-1.1.123" }}
      - ../../env/{{ .Environment.Name }}.yaml
Ben avatar

So should the value of image.tag there come from --state-values-set image.tag=mytag?

Ben avatar

Doesn’t seem to work for me

mumoshu avatar
mumoshu

Yes

mumoshu avatar
mumoshu

Hmm, what would you see if you run helmfile build with log-level=debug?

helmfile --log-level=debug -f helmfile.yaml --state-values-set image.tag=foo build
mumoshu avatar
mumoshu
$ cat helmfile.yaml
releases:
  - name: myapp
    chart: stable/nginx
    values:
    - image:
        tag: {{ .Values | get "image.tag" "default_tag" }}
mumoshu avatar
mumoshu

when no --state-values-set is provided, it does use default_tag

$ helmfile --log-level=debug -f helmfile.yaml build
processing file "helmfile.yaml" in directory "."
first-pass rendering starting for "helmfile.yaml.part.0": inherited=&{default map[] map[]}, overrode=<nil>
first-pass uses: &{default map[] map[]}
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
vals:
map[]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
 0: releases:
 1:   - name: myapp
 2:     chart: stable/nginx
 3:     values:
 4:     - image:
 5:         tag: default_tag
 6:

merged environment: &{default map[] map[]}
---
#  Source: helmfile.yaml

filepath: helmfile.yaml
releases:
- chart: stable/nginx
  name: myapp
  values:
  - image:
      tag: default_tag
templates: {}
Ben avatar

That is pretty much exactly what I’m doing but it always uses the default

mumoshu avatar
mumoshu

and with --state-values-set

$ helmfile --log-level=debug -f helmfile.yaml --state-values-set image.tag=foo build
processing file "helmfile.yaml" in directory "."
first-pass rendering starting for "helmfile.yaml.part.0": inherited=<nil>, overrode=&{default map[image:map[tag:foo]] map[]}
first-pass uses: &{default map[image:map[tag:foo]] map[]}
first-pass produced: &{default map[image:map[tag:foo]] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[image:map[tag:foo]] map[]}
vals:
map[image:map[tag:foo]]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
 0: releases:
 1:   - name: myapp
 2:     chart: stable/nginx
 3:     values:
 4:     - image:
 5:         tag: foo
 6:

merged environment: &{default map[image:map[tag:foo]] map[]}
---
#  Source: helmfile.yaml

filepath: helmfile.yaml
releases:
- chart: stable/nginx
  name: myapp
  values:
  - image:
      tag: foo
templates: {}
mumoshu avatar
mumoshu

@Ben would you mind sharing your logs with --log-level=debug? (but please beware not to leak creds/secrets in it!)

Ben avatar

Seems to be related to environments. This is my exact helmfile

bases:
  - ../../env/helmfile-environments.yaml

releases:
  - name: rdc-{{ env "BRANCH_NAME" | default "master" }}
    namespace: dataproduct-{{ env "BRANCH_NAME" | default "master" }}
    chart: ../../../../charts/data-product-service-chart
    force: true
    atomic: true
    values:
      - image:
          repository: registry.encompasshost.com/encompass/cdp/rdc-cdp
          tag: {{ .Values | get "image.tag" "major-humpback-1.1.5888" }}
      - service:
          internalPort: 8081
      - ../../env/{{ .Environment.Name }}.yaml
Ben avatar

if I comment out the bases section and the last line then image.tag is overridden as expected

Ben avatar

Does the inclusion of environments section clear the state passed from CLI maybe?

mumoshu avatar
mumoshu

Sounds like so - and it might be a bug! I’ll take a deeper look soon

Ben avatar

I should also say that image.tag is only defined in the Chart’s values file and not in the referenced env/minikube.yaml file

2019-10-15

Gourav avatar

Hi.. I am working on the kiam helmfile where I wanted to move the rbac section from the file below to a file named “values/kiam.yaml.gotmpl”. So I have included the file as shown below under the values: section, but I am getting the error below. Anyone got some tips for me?

helmfile --file helmfile-dev-dev.yaml -e dev-dev -l chart=kiam diff

could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read kiam.yaml.part.1: reading document at index 1: yaml: line 148: did not find expected '-' indicator
in ./helmfile-dt-ue2.yaml: in .helmfiles[0]: in releases/kiam.yaml: failed to read kiam.yaml: reading document at index 1: yaml: line 148: did not find expected '-' indicator

- name: "kiam"
  namespace: "kube-system"
  labels:
    chart: "kiam"
    repo: "stable"
    component: "iam"
    namespace: "kube-system"
    vendor: "uswitch"
    default: "true"
  chart: "stable/kiam"
  version: "2.5.2"
  wait: true
  recreatePods: false
  installed: {{ env "KIAM_INSTALLED" | default "true" }}
  hooks:
    # This hook adds the annotation that allows pods in the kube-system namespace to assume any annotated role
    - events: ["presync"]
      command: "/bin/sh"
      args: ["-c", "kubectl annotate --overwrite namespace kube-system 'iam.amazonaws.com/permitted=.*'"]
    # This hook adds the annotation that instructs stakater/reloader to watch the DaemonSet's secrets and configmaps
    # and reload the DaemonSet when they change.
    - events: ["postsync"]
      command: "/bin/sh"
      args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet --selector=app=kiam reloader.stakater.com/auto=true"]
  values:
    - fullnameOverride: kiam
    - values/kiam.yaml.gotmpl
      #rbac:
        ### Optional: RBAC_ENABLED;
        #create: {{ env "RBAC_ENABLED" | default "false" }}
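
No fix is posted in the thread, but for reference, the moved rbac section inside values/kiam.yaml.gotmpl would typically look something like this (a sketch reconstructed from the commented-out lines above, not a confirmed resolution of the YAML error):

rbac:
  ### Optional: RBAC_ENABLED
  create: {{ env "RBAC_ENABLED" | default "false" }}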

2019-10-16

Gourav avatar

Hi… I am preparing the helmfile for kiam. In the kiam spec there is a serviceAccount section; when I try to override the serviceAccountName for agent and server it is not happening. Below is the snippet of the manifest where I am trying to override:

serviceAccount:
  agent:
    create: true
    name: dev-ops-kiam-agent
  server:
    create: true
    name: dev-ops-kiam-server

While doing the helmfile diff I am getting the ServiceAccount manifests for agent and server, but the name is not coming out as dev-ops-kiam-agent and dev-ops-kiam-server; instead it comes out as kiam-agent and kiam-server.

+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+   labels:
+     app: kiam
+     chart: kiam-2.5.2
+     component: "agent"
+     heritage: Tiller
+     release: kiam
+   name: kiam-agent

Does someone have some pointers for me to resolve this issue with serviceAccount ?

mumoshu avatar
mumoshu
helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.


mumoshu avatar
mumoshu

it seems like you should use serviceAccounts, not serviceAccount

mumoshu avatar
mumoshu

try

serviceAccounts:
  agent:
    create: true
    name: dev-ops-kiam-agent
  server:
    create: true
    name: dev-ops-kiam-server
Gourav avatar

ok… I will try.. thank you @mumoshu

Gourav avatar

Thanks.. it worked…


2019-10-17

Bart M. avatar
Bart M.

I have a rather extensive helmfile setup, a few dozen helmfiles (all loaded from a central one) with multiple releases per helmfile. What would be the best way to set a specific set of values for the charts in every single release across a deploy? More specifically, we allow the hostAliases in deployments to be set in all our charts, but want this to be managed centrally… I now have to include a central file with these hostAliases in every single release, but I would like to be able to add that from a single central place (preferably from our base helmfiles, which are already included in all our helmfiles)

mumoshu avatar
mumoshu

so probably you want a central place to affect every aspect of every release in every (sub-)helmfile, right?

mumoshu avatar
mumoshu

maybe it isn’t possible today -

if i read correctly, what you might need is

releases:
# render myapp-us-west-1
{{ include "the-central-place/release-template.tpl" (dict "Values" .Values "name" "myapp" "region" "us-west-1" "opts" (dict "key1" "val1")) }}
# render myapp-us-west-2
{{ include "the-central-place/release-template.tpl" (dict "Values" .Values "name" "myapp" "region" "us-west-2" "opts" (dict "key1" "val1")) }}
# ....

and in release-template.tpl, you write a nice go template that generates a release as you like
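
A sketch of what such a release-template.tpl might contain under the proposed scheme (hypothetical, since include did not exist in helmfile at the time; the chart and values paths are made up):

- name: {{ .name }}-{{ .region }}
  chart: mycharts/{{ .name }}
  values:
  - /central/path/to/hostaliases.yaml   # the centrally managed values
  - region: {{ .region }}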

mumoshu avatar
mumoshu

where include includes the given tpl from the file system (as we can do in helm)

mumoshu avatar
mumoshu

if this sounds good to you, would you mind opening an issue to add include to helmfile?

Bart M. avatar
Bart M.

hmm that might be a solution, not sure though

Bart M. avatar
Bart M.

I tried with a template in our base role, but values arrays aren’t merged, so then I can’t set release-specific values anymore

mumoshu avatar
mumoshu

still trying to understand. could you provide me a small example to reproduce what you’ve tried?

Bart M. avatar
Bart M.

well - I have a

templates:
  default: &default
    values:
      - /central/path/to/values.yaml

releases:
  - name: release-v1
    <<: *default
    values:
    - somevalue: "to override"
Bart M. avatar
Bart M.

if I do this, the values from the template are ignored

Bart M. avatar
Bart M.

if I could somehow merge them, I could declare the template in a central base that’s included everywhere

Bart M. avatar
Bart M.

and add additional custom values per release

Bart M. avatar
Bart M.

but that’s all due to ugly YAML trickery I’m afraid

Bart M. avatar
Bart M.

not sure if helmfile could do that in some way

pjbecotte avatar
pjbecotte

Hi, I have what seems like a silly question, didn’t want to open an issue on GitHub. I’m using git for a remote helmfile. But there doesn’t seem to be any way to have the git repo get updated with git pull other than going into the .helmfile directory and doing it myself?

mumoshu avatar
mumoshu

@pjbecotte Hey! Assuming you’re talking about remote helmfiles stored under .helmfile/cache/something - yeah you might need to manually run git-pull there if you’ve specified a changing git branch

helmfile doesn’t try to re-fetch already cached git branches and tags.

my expected usage for the remote helmfile feature was that you’d tag your git repo with semantic versions. that is, you change the git tag included in the remote helmfile url (go-getter-style url) from v1.0.0 to v2.0.0 when you need any update.

does it make sense?
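
For reference, a pinned go-getter-style remote helmfile URL might look like this (repository and path are hypothetical):

helmfiles:
- path: git::https://github.com/example/helmfiles.git@releases/myapp.yaml?ref=v1.0.0

Bumping ref from v1.0.0 to v2.0.0 is then what triggers a fresh fetch.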

pjbecotte avatar
pjbecotte

Sure, that is a workflow that will work. Of course, it was a pain while I was iterating trying to get things working. Was just thinking I probably missed something.

mumoshu avatar
mumoshu

@pjbecotte i hear you.

perhaps it would be nicer if helmfile git-pulled every cached remote helmfile repo by default?

mumoshu avatar
mumoshu

and ability to optionally skip git-pulling cached remote helmfiles?

mumoshu avatar
mumoshu
Ability to DO git-pulls on already cached remote helmfiles? · Issue #901 · roboll/helmfile

Extracted from https://sweetops.slack.com/archives/CE5NGCB9Q/p1571331457038700 Currently helmfile skips git-pulling any remote helmfiles that are already cached. This isn't actually a bug. The …

pjbecotte avatar
pjbecotte

Oh, neat, thanks!

Naseem avatar

Anyone using helmfile for multiple clusters in different regions? What’s your approach if every cluster has a location-specific variable?

mumoshu avatar
mumoshu

@Naseem Hey! Replied to you in the k8s slack also but anyway

mumoshu avatar
mumoshu

It depends, but I guess you should use a sub-helmfile per region.

That is, there could be a production environment in a “global” helmfile; the global helmfile would delegate per-region deployment to the respective sub-helmfiles

Naseem avatar

Thanks @mumoshu ! I will try this approach!

Naseem avatar

if you have releases that are almost identical across all regions, with maybe 1 value thats different per region, is this still the approach you would suggest?

mumoshu avatar
mumoshu

Yep.

Try injecting state values for regions like:

helmfiles:
- path: regional.yaml
  values:
  - region: us-west-1
- path: regional.yaml
  values:
  - region: us-west-2
Naseem avatar

Regarding globally distributed clusters again:

Currently in a non-globally-distributed setup, I know if a release should be installed based on which environment it is: installed: {{ eq .Environment.Name "staging" }} <— only installs in staging.

How does one achieve the flexibility of: “installed if env is staging AND region is us-west-1” OR “installed if env is staging AND cluster name is Bob” … either of these would be great

Naseem avatar

Can we somehow extract the cluster name from kube context or something?

mumoshu avatar
mumoshu

re the gotmpl expression, it would look like {{ and (eq .Environment.Name "staging") (eq $clusterName "cluster1") }}

mumoshu avatar
mumoshu


Can we somehow extract the cluster name from kube context or something?
I think you should think conversely. You’d provide the cluster name via values, and select appropriate kubeconfig based on that

mumoshu avatar
mumoshu

you should split your kubeconfig per context beforehand with kubectl config view --minify
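
Putting the two suggestions together, a sketch (cluster names and file layout are assumed):

# global helmfile: inject region and cluster per sub-helmfile
helmfiles:
- path: regional.yaml
  values:
  - region: us-west-1
    clusterName: cluster1

# regional.yaml: gate a release on environment and cluster
releases:
- name: myapp
  chart: stable/nginx
  installed: {{ and (eq .Environment.Name "staging") (eq .Values.clusterName "cluster1") }}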

Naseem avatar

Thanks @mumoshu!

Andrew Nazarov avatar
Andrew Nazarov

Just got an issue that has never happened before. After the latest helmfile apply some of the releases became FAILED, and the corresponding workloads disappeared. In the logs I can see something like

exec: helm tiller run gitlab-managed-apps -- helm upgrade --install --reset-values frontend-e2e-alpha chartmuseum/frontend --version 0.9.0 --timeout 300 --force --namespace e2e-alpha --values /tmp/values674824445 --kube-context=gke_XXXX_europe-west1-b_XXXX: Creating tiller namespace (if missing): gitlab-managed-apps
UPGRADE FAILED
ROLLING BACK
Error: Failed to recreate resource: the server was unable to return a response in the time allotted, but may still be processing the request (post deployments.apps)

After that the deployment disappeared.

frontend-e2e-alpha      	16      	Thu Oct 17 05:23:17 2019	FAILED  	frontend-0.9.0           	1.0                         	e2e-alpha 

That’s weird. And some deployments now have pods from different revisions. Things got messed up.

mumoshu avatar
mumoshu

hm… maybe helm-tiller timed out/failed in the middle of the installation process and left the cluster in a half-baked state?

mumoshu avatar
mumoshu

i suspect it can potentially be a fundamental issue in helm-tiller then (helm3 would help)

Andrew Nazarov avatar
Andrew Nazarov

I bumped the K8s version in my cluster and it started working ok. Probably it was a coincidence, but everything is ok so far.

Andrew Nazarov avatar
Andrew Nazarov

I purged a bunch of releases and ran helmfile apply again. Almost all succeeded, except one.

Andrew Nazarov avatar
Andrew Nazarov

The same didn’t help for the other bunch of releases at all.

2019-10-18

Bart M. avatar
Bart M.

hmm have another issue… I use

{{`{{.Release.Name}}`}}

in a template section in the values:. If it’s in a filename, it renders properly, if it’s in a - varname: {{...}} it doesn’t render this, and I end up with a literal {{ .Release.Name }} in my templated helm chart output…

mumoshu avatar
mumoshu

Ah maybe we’re missing the implementation for rendering release template expressions within the inlined values

Bart M. avatar
Bart M.

thanks

2019-10-20

TBeijen avatar
TBeijen

Are container images built somewhere that have the helm3 binary and the latest work-in-progress helm-diff (including the needed fixes)?

mumoshu avatar
mumoshu

Not yet. but it would be great to have one!

TBeijen avatar
TBeijen

Ok, I might take a shot at that in the coming days.

TBeijen avatar
TBeijen

(Currently using helmfile as ‘templating engine’ and seems not efficient at this point to invest in Helm2 and all the tiller shenanigans)

Bart M. avatar
Bart M.

it’s ok if you use the tillerless plugin

Bart M. avatar
Bart M.

(which is what we do)

TBeijen avatar
TBeijen

Considering that as well. But afaik there is no clear upgrade path yet to migrate helm2 to helm3 deployments. So you’d have to remove and reinstall which is not ideal. So that’s my main reason for looking directly at helm3.

TBeijen avatar
TBeijen

Do you use a single namespace for all helm2 release data (kube-system and apps)?

mumoshu avatar
mumoshu

That sounds like a good strategy!

mumoshu avatar
mumoshu

Anyways, we will be able to use https://github.com/helm/helm-2to3 for upgrading without reinstalling

helm/helm-2to3

This is a Helm v3 plugin which migrates and cleans up Helm v2 configuration and releases in-place to Helm v3 - helm/helm-2to3
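
The plugin’s basic flow, for reference (commands from the helm-2to3 README; a sketch assuming the v3 binary is installed as helm3):

helm3 plugin install https://github.com/helm/helm-2to3
helm3 2to3 move config            # migrate helm v2 config (repos, plugins) to v3
helm3 2to3 convert RELEASE_NAME   # convert one v2 release in place to v3
helm3 2to3 cleanup                # remove v2 config and release data when done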

TBeijen avatar
TBeijen

Great to see helm3 images are built now! Work (other work) got in the way, so hadn’t found time for that (and need to familiarize myself a bit with the ins/outs of Helmfile ci setup).

TBeijen avatar
TBeijen

Ran into some CRD-related issues on a first attempt using Helm3 on prometheus-operator. Then again: Not the simplest chart. And stuff moves so fast, might have been fixed already. (Also: Not a helmfile issue).

2019-10-21

Andrew Nazarov avatar
Andrew Nazarov

Actually, I’ve got a kinda related question. Do we have any helm2 -> helm3 transition best practices for those using helmfile? Or are they pretty much the same as for Helm? Is helmfile ready to be used with Helm 3 right now? Are there any missing things? Even though I’ve been using helmfile for quite some time, I’d never tried it with Helm 3.

2019-10-22

Marcus Johansson avatar
Marcus Johansson

Hello! Just started looking at helmfile and I think I like it. Our current setup is that we have a Jenkins pipeline for each microservice that creates a docker image and a helm chart, which get pushed/published to our registries. The pipelines also deploy to our environments based on which branch it is, but I want to move all that to a helmfile in a separate git repo, which can be updated via PRs for values changes. I still want the built helm chart deployed automatically, though, and thus want the Jenkins pipeline to do git commits to the repo where the helmfile resides… Anyone got a good way of doing this?

Tiago Meireles avatar
Tiago Meireles

Are there any up to date helmfile examples?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use these everyday

Tiago Meireles avatar
Tiago Meireles

Taking https://github.com/costimuraru/helmfile-examples/tree/master/templatization as an example, how can i move cluster-autoscaler.yaml into a sub-directory called releases? I think i keep running into path issues.

mumoshu avatar
mumoshu

@Tiago Meireles hey! you’d also need to update this in the yaml:

bases:
  - envs/environments.yaml

to

bases:
- ../envs/environments.yaml
mumoshu avatar
mumoshu

every ref from a helmfile.yaml should be relative to itself

Tiago Meireles avatar
Tiago Meireles

That is what I expected. It didn’t work when i tried it.

Tiago Meireles avatar
Tiago Meireles
diff --git a/templatization/helmfile.yaml b/templatization/helmfile.yaml
index 5fdf61a..a205404 100644
--- a/templatization/helmfile.yaml
+++ b/templatization/helmfile.yaml
@@ -7,4 +7,4 @@ bases:
 helmfiles:
 #  - "nginx-ingress.yaml"
 #  - "velero/velero.yaml"
-  - "cluster-autoscaler.yaml"
+  - "releases/cluster-autoscaler.yaml"
diff --git a/templatization/cluster-autoscaler.yaml b/templatization/releases/cluster-autoscaler.yaml
similarity index 96%
rename from templatization/cluster-autoscaler.yaml
rename to templatization/releases/cluster-autoscaler.yaml
index 890db9e..d0f17a7 100644
--- a/templatization/cluster-autoscaler.yaml
+++ b/templatization/releases/cluster-autoscaler.yaml
@@ -1,6 +1,6 @@
 ---
 bases:
-  - envs/environments.yaml
+  - ../envs/environments.yaml
 ---
 releases:
 - name: "cluster-autoscaler"
@@ -32,4 +32,4 @@ releases:
       value: {{ .Environment.Values.helm.autoscaler.azure.clientId }}
     - name: azureClientSecret
       value: {{ .Environment.Values.helm.autoscaler.azure.clientSecret }}
-{{ end }}
\ No newline at end of file
+{{ end }}
Tiago Meireles avatar
Tiago Meireles
 ✗ helmfile -e aws template
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read ../envs/environments.yaml.part.0: environment values file matching "envs/aws-env.yaml" does not exist in "."
in ./helmfile.yaml: in .helmfiles[0]: in releases/cluster-autoscaler.yaml: failed to read ../envs/environments.yaml: environment values file matching "envs/aws-env.yaml" does not exist in "."
mumoshu avatar
mumoshu

thx! seems like you also need to fix environments.yaml as well.
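
Judging from the error, the values paths inside envs/environments.yaml are resolved relative to the including helmfile’s directory, so for the sub-helmfile under releases/ a plausible (untested) fix would be:

# envs/environments.yaml, as seen from releases/cluster-autoscaler.yaml
environments:
  aws:
    values:
    - ../envs/aws-env.yaml   # hypothetical; was envs/aws-env.yaml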

2019-10-23

yuri avatar

does anyone know why i’m getting this warning on a specific helmfile?

could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced

i have the same structure for the rest of the helmfiles and they do not return this warning

mumoshu avatar
mumoshu

hey! could you share your configs (or maybe smaller versions of them for reproduction)?

all i can say from the sole example is that your config is missing the default anchor in your specific case.

yuri avatar

@mumoshu, sorry for the late response: sub-helmfile:

---
environments:
  {{ .Environment.Name }}:
    values:
    - ../envs/{{ .Environment.Name }}/defaults.yaml
    - ../envs/{{ .Environment.Name }}/charts.yaml
---
{{ readFile "../templates.yaml" }}

releases:
- name: x-creds-{{ .Environment.Name }}
  chart: x-stg/docker-registry-creds-chart
  version: {{ .Environment.Values.charts | getOrNil "x.chartVersion" | default "0.0.1" }}
  installed: {{ .Environment.Values.charts | getOrNil "x.enabled" | default false }}
  <<: *default
  values:
  - imageCredentials:
      username: {{ requiredEnv "X_USER" }}
      password: {{ requiredEnv "X_PASS" }}
  - fullnameOverride: x-creds-{{ .Environment.Name }}

templates.yaml:

---
bases:
- ../repos.yaml
- ../helmdefaults.yaml
---
templates:
  default: &default
    namespace: "{{ .Environment.Values.namespace }}"
    wait: false
    missingFileHandler: Error
mumoshu avatar
mumoshu

thx! ah, so it won’t work at all.

*default needs &default defined in the same yaml document. and neither --- nor bases results in concatenating files as text. they all read and render files independently.

mumoshu avatar
mumoshu

maybe this works?

{{ readFile "../repos.yaml" }}
{{ readFile "../helmdefaults.yaml" }}

templates:
  default: &default
    namespace: "{{ .Environment.Values.namespace }}"
    wait: false
    missingFileHandler: Error
yuri avatar

@mumoshu actually my original config is working, it just prints this error before going forward

mumoshu avatar
mumoshu

ah gotcha! just curious, but you see that warning only with --log-level=debug?

yuri avatar

nope, both with and w/o debugging

mumoshu avatar
mumoshu

interesting.

mumoshu avatar
mumoshu

ah okay you already have this in your original helmfile

{{ readFile "../templates.yaml" }}
mumoshu avatar
mumoshu

maybe helmfile should turn off that warning by default

mumoshu avatar
mumoshu

but anyway… it’s there due to how helmfile’s feature called double rendering works

yuri avatar

when changing my templates yaml to :

{{ readFile "../repos.yaml" }}
{{ readFile "../helmdefaults.yaml" }}

instead of using bases im getting this:

could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read x-creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced
in ./x-creds.yaml: failed to read x-creds.yaml: reading document at index 1: yaml: unmarshal errors:
  line 2: cannot unmarshal !!map into string
mumoshu avatar
mumoshu

there’s a chicken-and-egg problem while loading a helmfile.yaml. that is, you need environment values loaded before rendering any go template in helmfile.yaml. but to load env values it must be a plain yaml without go template.

yuri avatar

you mean this part:

---
environments:
  {{ .Environment.Name }}:
    values:
    - ../envs/{{ .Environment.Name }}/defaults.yaml
    - ../envs/{{ .Environment.Name }}/charts.yaml

in subhelmfile?

mumoshu avatar
mumoshu

to workaround that, helmfile renders your sub-helmfile twice

mumoshu avatar
mumoshu

partially yes. double rendering occurs on the whole file

mumoshu avatar
mumoshu

that is, helmfile firstly renders

{{ readFile "../templates.yaml" }}

releases:
- name: x-creds-{{ .Environment.Name }}
  chart: x-stg/docker-registry-creds-chart
  version: {{ .Environment.Values.charts | getOrNil "x.chartVersion" | default "0.0.1" }}
  installed: {{ .Environment.Values.charts | getOrNil "x.enabled" | default false }}
  <<: *default
  values:
  - imageCredentials:
      username: {{ requiredEnv "X_USER" }}
      password: {{ requiredEnv "X_PASS" }}
  - fullnameOverride: x-creds-{{ .Environment.Name }}

with readFile replaced with a noop func, and with an empty values set

mumoshu avatar
mumoshu

which results in

releases:
- name: x-creds-default
  chart: x-stg/docker-registry-creds-chart
  version: "0.0.1"
  installed: false
  <<: *default
  values:
  - imageCredentials:
      username: VALUE_OF_X_USER
      password: VALUE_OF_X_PASS
  - fullnameOverride: x-creds-default

which indeed misses &default and results in the error on *default

mumoshu avatar
mumoshu

helmfile uses this to load env values used to render your helmfile.yaml. this time readFile is not a noop func and as your helmfile.yaml is correct, it works…

mumoshu avatar
mumoshu

helmfile has no way to know whether your helmfile.yaml contains environments before rendering it

that’s why this double rendering always happens regardless of whether there’s an environments section defined in your helmfile.yaml.

yuri avatar

i get it, but with a simple change in the release i don’t see this warning/error, that’s what surprises me

yuri avatar

using the same templates.yaml but changing the release to this:

---
environments:
  {{ .Environment.Name }}:
    values:
    - ../envs/{{ .Environment.Name }}/defaults.yaml
    - ../envs/{{ .Environment.Name }}/charts.yaml

---
{{ readFile "../templates.yaml" }}

releases:
- name: my-app
  chart: x-stg/my-app-v2-chart
  version: {{ .Values.charts.myApp.chartVersion }}
  installed: {{ .Environment.Values.charts | getOrNil "myApp.enabled" | default false }}
  <<: *default
  values:
   - ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
   - ../envs/common/ms-affinity-rule.yaml
   - fullnameOverride: my-app-{{ .Environment.Name }}
yuri avatar

the only noticeable change here is the version

mumoshu avatar
mumoshu

wow, really?!

yuri avatar

yes

yuri avatar

i can share privately debug logs if you wish

mumoshu avatar
mumoshu

what do you see as the “first rendering result” when you add --log-level=debug like helmfile --log-level=debug build ?

yuri avatar

im on 0.87 btw

yuri avatar
helmfile --log-level debug -e qa -f myapp.yaml diff                                                                                          [67e4cfa]
processing file "myapp.yaml" in directory "."
first-pass rendering starting for "myapp.yaml.part.0": inherited=&{qa map[] map[]}, overrode=<nil>
first-pass uses: &{qa map[] map[]}
yuri avatar

wait will share the result

yuri avatar

sorry it takes time, some sensitive names that i need to remove

yuri avatar

if you wish i can share a debug log after i change the version to use getOrNil and then i get the deduce error

mumoshu avatar
mumoshu

yes, please!

mumoshu avatar
mumoshu

at a glance i see the expected thing here:

first-pass rendering input of "my-app.yaml.part.1":
 0: {{ readFile "../templates.yaml" }}
 1:
 2: releases:
 3: - name: myproject-my-app
 4:   chart: bams-stg/myproject-my-app-v2-chart
 5:   version: {{ .Values.charts.myprojectLatiTagger.chartVersion }}
 6:   installed: {{ .Environment.Values.charts | getOrNil "myprojectLatiTagger.enabled" | default false }}
 7:   <<: *default
 8:   values:
 9:    - ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
10:    - ../envs/common/ms-affinity-rule.yaml
11:    - fullnameOverride: myproject-my-app-{{ .Environment.Name }}

i thought this would emit the warning (which it didn’t)

yuri avatar

i’m wondering if it’s because in one case i’m using .Values.charts.somevalue and in the other case: .Environment.Values.charts.somevalue

mumoshu avatar
mumoshu

I read the log but am not yet sure what’s going on here. There’s indeed a few switches that relates to whether you use .Environment.Values or .Values. I’ll take a deeper look in coming days!

mumoshu avatar
mumoshu

Thanks for your cooperation

yuri avatar

Thank you for the support!

yuri avatar

@mumoshu hi, did you have a chance to look at this? should i try to use env bases to avoid this warning? i still can’t figure out what causes this behavior

mumoshu avatar
mumoshu

@yuri Yes I have some progress. The main problem was that it was missing some error log that occurred in the first-pass render.

Improving it, I get this:

first-pass rendering input of "helmfile.3.yaml.part.0":
 0: releases:
 1: - name: myproject-my-app
 2:   chart: bams-stg/myproject-my-app-v2-chart
 3:   version: {{ .Values.charts.myprojectLatiTagger.chartVersion }}
 4:   installed: {{ .Environment.Values.charts | getOrNil "myprojectLatiTagger.enabled" | default false }}
 5:   <<: *default
 6:   values:
 7:    - ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
 8:    - ../envs/common/ms-affinity-rule.yaml
 9:    - fullnameOverride: myproject-my-app-{{ .Environment.Name }}
10:

template syntax error: template: stringTemplate:4:21: executing "stringTemplate" at <.Values.charts.myprojectLatiTagger.chartVersion>: nil pointer evaluating interface {}.myprojectLatiTagger
first-pass rendering output of "helmfile.3.yaml.part.0":
 0: releases:
 1: - name: myproject-my-app
 2:   chart: bams-stg/myproject-my-app-v2-chart
 3:   version:
mumoshu avatar
mumoshu

pls see template syntax error: template: stringTemplate:4:21: executing "stringTemplate" at <.Values.charts.myprojectLatiTagger.chartVersion>: nil pointer evaluating interface

yuri avatar

hmmm

mumoshu avatar
mumoshu

this means that the first-pass render “stops” at any nil pointer access (it has no way to tolerate it…), which results in the incomplete yaml not containing anything after the installed: ... line

mumoshu avatar
mumoshu

that’s why it doesn’t emit the reading document at index 1: yaml: unknown anchor 'default' referenced warning

mumoshu avatar
mumoshu

anyway, regardless of whether the warning is emitted or not, the second pass should produce the same result

yuri avatar

ah ok so if i understand since i have 2 “values” of getOrNil it throws this warning

mumoshu avatar
mumoshu

so probably you don’t need to worry about the warning at all..? (i agree it’s a red herring though)

yuri avatar

yes the sync works as expected, it just drives me crazy in the ci process to see some extra messages that i dont wish to see

mumoshu avatar
mumoshu

yeah. maybe we should enhance the debug logs for the first-pass render?

yuri avatar

maybe my “pattern” of templating this is incorrect. the idea was to hold some yaml file with

chartName:
  installed: true/false
  version: x.y.z
mumoshu avatar
mumoshu

generally speaking any error in the first-pass is tolerable

mumoshu avatar
mumoshu

hmm maybe. your idea makes sense

mumoshu avatar
mumoshu

what works today would be to use getOrNil whenever you dig Values or Environment.Values
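
e.g., based on the version line from the snippets above:

version: {{ .Values | getOrNil "charts.myApp.chartVersion" | default "0.0.1" }}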

yuri avatar

we have the same application that we want to install with different versions… for example qa/ppe/…

yuri avatar

hmm but i do use getOrNil

mumoshu avatar
mumoshu

ah sry i’m a bit confused

mumoshu avatar
mumoshu

from helmfile’s perspective, emitting reading document at index 1: yaml: unknown anchor 'default' referenced is rather the correct behaviour

yuri avatar

ok so it’s a matter of log verbosity and maybe it should appear in debug?

mumoshu avatar
mumoshu

so avoiding the warning by exploiting the fact that the first-pass render “stops” at the first template error seems wrong..

mumoshu avatar
mumoshu

yes

mumoshu avatar
mumoshu

i think so

yuri avatar

ok, i just want to make sure i don’t break any logic/patterns in helmfile that can hurt me later

mumoshu avatar
mumoshu

ah and back to your original problem

mumoshu avatar
mumoshu

you should avoid using anchors in your pattern

mumoshu avatar
mumoshu

if

yuri avatar

hmm so no templates with our current release definition?

mumoshu avatar
mumoshu

if you see errors like this one https://sweetops.slack.com/archives/CE5NGCB9Q/p1571818484076100 anywhere other than the first-render

does anyone know why i’m getting this warning on a specific helmfile?

could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced

i have the same structure for the rest of the helmfiles and they do not return this warning

mumoshu avatar
mumoshu

getting reading document at index 1: yaml: unknown anchor 'default' referenced in the first-pass is ok

yuri avatar

ah ok got you

yuri avatar

for now it seems like only the first render

mumoshu avatar
mumoshu

ok great!

mumoshu avatar
mumoshu

then there’s nothing wrong on your side

yuri avatar

thank u again for the support!

mumoshu avatar
mumoshu

the only remaining todo would be - i’d prefix warnings from the first-pass render nicely so that they won’t confuse you anymore

yuri avatar

thanks!

mumoshu avatar
mumoshu

my pleasure. thanks for your support and using helmfile!

Gourav avatar

@Erik Osterman (Cloud Posse) Is there a helmfile for Open Policy Agent? I have checked in helmfiles/releases and there is none for OPA.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haven’t used it

Gourav avatar

Even if a helmfile is not there… are we allowed to create our own helmfile for charts that exist in stable?

mumoshu avatar
mumoshu

absolutely. just write a helmfile like this:

releases:
- name: opa
  chart: stable/opa
  values:
  - values.yaml

assuming you use https://github.com/helm/charts/tree/master/stable/opa

helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.

Gourav avatar

@mumoshu Thank you

2019-10-24

pjbecotte avatar
pjbecotte

Got a question. Anyone have thoughts on a workflow for modifying existing helmcharts without forking them? We have so many forks to do silly stuff like add tolerations or ssl root certs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is not really an answer to what you are asking, but we have the same problem all the time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use our monochart very frequently to get around the perceived shortcomings of a lot of charts out there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles

Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically the monochart makes implementing most services extremely easy, so we use that combined with Helmfile as our escape hatch

pjbecotte avatar
pjbecotte

Yeah, that is basically how we deploy our services :)

mumoshu avatar
mumoshu

@pjbecotte Hey! I haven’t tested it extensively but helmfile has a secret feature that allows you to jsonpatch/strategicmergepatch manifests before installing a chart:

https://github.com/roboll/helmfile/pull/673

Would it make it unnecessary to fork charts if you use helmfile template to generate the patches dynamically?

feat: experimental integration with helm-x by mumoshu · Pull Request #673 · roboll/helmfile

This enhances helmfile so that it can: Treat K8s manifests directories and Kustomize projects as charts Add adhoc chart dependencies on sync/diff/template without forking or modifying chart(s) (#6…
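
A sketch of what that can look like in a release, based on the PR description (the feature was experimental, so the exact schema may have changed; the tolerations example mirrors pjbecotte’s use case and the chart name is hypothetical):

releases:
- name: myapp
  chart: stable/myapp
  strategicMergePatches:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      template:
        spec:
          tolerations:
          - key: dedicated
            operator: Exists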

yuri avatar

@pjbecotte what is the reason for the forks? do u change the templates and functionality that the original chart does not provide? or just the values?

pjbecotte avatar
pjbecotte

Changing templates. Like the public chart doesn’t have ‘tolerations’ as a field on a deployment, and we needed to add it. (And many similar examples).

yuri avatar

one option is just to open a PR and suggest a change; tolerations are a common use case imo. the second option i can think of is Replicated Ship, never used it myself but it seems to fit here

pjbecotte avatar
pjbecotte

Yeah, PRs of course, but waiting weeks for a public project to accept and release isn’t usually in the cards

2019-10-25

2019-10-26

2019-10-30

Shikhar Goel avatar
Shikhar Goel

Hi

Shikhar Goel avatar
Shikhar Goel

When we do helmfile apply it prints diff output which contains sensitive info; how can we disable that?

mumoshu avatar
mumoshu

in k8s secrets

mumoshu avatar
mumoshu

?

mumoshu avatar
mumoshu

try adding --suppress-secrets like helmfile apply --suppress-secrets

Shikhar Goel avatar
Shikhar Goel

like i have multiple helm charts in the helmfile and i want only their status, which deployments have been deployed etc., but not the complete deployment

mumoshu avatar
mumoshu

providing the above flag stops printing sensitive info

mumoshu avatar
mumoshu


i want only there status which deployments have been deployed etc but not the complete deployment

this sounds like a different issue than protecting sensitive info! do you actually need it? (and why?)

Shikhar Goel avatar
Shikhar Goel

i only need helm info like

Shikhar Goel avatar
Shikhar Goel

RESOURCES:
==> v1/Deployment
NAME  READY  UP-TO-DATE  AVAILABLE  AGE
help  0/1    1           0          33d

==> v1/Pod(related)
NAME                   READY  STATUS             RESTARTS  AGE
help-699b97d548-jd9zg  0/1    Terminating        0         2m58s
help-74688d5f45-jm6sx  0/1    ContainerCreating  0         76s

==> v1/Service
NAME  TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
help  ClusterIP  172.30.115.158               8080/TCP  33d

==> v1beta1/Ingress
NAME  HOSTS                             ADDRESS  PORTS  AGE
help  help.dev.onprem.dmsuitecloud.com           80     33d

NOTES: Helm Chart installed : help in namespace dmp-system Your release is named : help.

Shikhar Goel avatar
Shikhar Goel

not the full

# Source: help/templates/help-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    com.fico.dmp/instanceId: help
    com.fico.dmp/name: help
    reloader.stakater.com/auto: "true"
  labels:
    com.fico.dmp/instanceId: help
    com.fico.dmp/name: help
  name: help

mumoshu avatar
mumoshu

got it. that’s not possible today. why would you like that?

Shikhar Goel avatar
Shikhar Goel

Actually i made a docker image for installation using helmfile. I want only the helm chart info to be printed in the logs, not the complete deployments

mumoshu avatar
mumoshu

btw i’m asking because i’ve seen that everyone has different opinions and requirements for which output they want

mumoshu avatar
mumoshu

i see. how would you debug the installation when it failed?

Shikhar Goel avatar
Shikhar Goel

We have implemented the rollback feature for the client; we won’t be giving him the complete info about deployments. If the deployment fails then our team will look into it locally

mumoshu avatar
mumoshu

but if you like the default output from that helmfile container to include only k8s resources installed/upgraded by helm(https://sweetops.slack.com/archives/CE5NGCB9Q/p1572505407002900?thread_ts=1572505148.001300&cid=CE5NGCB9Q)

how would your team debug it?


Shikhar Goel avatar
Shikhar Goel

i.e. if rollback happens the logs will notify the user and the user will contact us.. so that the client doesn’t have to do anything and we will fix the issue, make a new image and push it

mumoshu avatar
mumoshu

so perhaps you want an ability to configure multiple log output channels? like stdout contains k8s resources affected by helm only, and something like debug.log contains all the logs?

Shikhar Goel avatar
Shikhar Goel

actually for this we will run the same locally on our cluster without disabling output and try.. the cluster with us is an exact copy of the client’s cluster and with the help of helm diff we can see the changes and try to work on them…

Shikhar Goel avatar
Shikhar Goel

mostly there will only be an image change in the helm chart so we can rectify that easily

mumoshu avatar
mumoshu

ah, it would work out ok!

Shikhar Goel avatar
Shikhar Goel

so is it possible to disable diff output?

mumoshu avatar
mumoshu

no, as i said above

mumoshu avatar
mumoshu

but i’m eager to add a feature to configure what’s included in the helmfile log. does that sound good to you?

Shikhar Goel avatar
Shikhar Goel

yup…i think it will be great if we can add a feature to add logs incrementally, like whether we want to add diff logs or not, etc.

mumoshu avatar
mumoshu

maybe something like helmfile --log-filter helm,exec,helmfile,...

Shikhar Goel avatar
Shikhar Goel

Yup like this only…so can we disable diff output with this currently?

mumoshu avatar
mumoshu

or even helmfile --info-log-filter helm,exec,helmfile,...

mumoshu avatar
mumoshu

nope

mumoshu avatar
mumoshu

all you can do today would be pipe it to tee

mumoshu avatar
mumoshu

and grep only what you want

mumoshu avatar
mumoshu

so try that way if you want something that works today

Shikhar Goel avatar
Shikhar Goel

Yup thanks….but the logs are cluttered; we don’t know how long the notes and other things will be, so it won’t work fine…but thanks for your help..

mumoshu avatar
mumoshu

yeah, but i think helmfile apply | grep -v '^\(+\|-\)' would mostly work

mumoshu avatar
mumoshu

as the diff output mostly begins with any of +, -

Shikhar Goel avatar
Shikhar Goel

Yup thanks…i will check it..thanks a lot for your help!..

mumoshu avatar
mumoshu

or even better helmfile apply | grep -v '^\(+\|-\|Comparing \|Release was not present in Helm\|\*\)'

mumoshu avatar
mumoshu

my pleasure! good luck

mumoshu avatar
mumoshu

and thanks for using helmfile


2019-10-31

Gourav avatar

Hi again… Need some input on an issue I’m facing with the Kiam helmfile, where I am trying to set annotations at the object level, but somehow the annotations are not coming up.. below are snippets of what I am getting and what I need.

While running the helmfile.. I am not getting the annotations at object level

+ # Source: kiam/templates/agent-daemonset.yaml
+ apiVersion: apps/v1beta2
+ kind: DaemonSet
+ metadata:
+   labels:
+     app: kiam
+     chart: kiam-2.5.2
+     component: "agent"
+     heritage: Tiller
+     release: kiam
+   name: kiam-agent
+ spec:
+   selector:
+     matchLabels:
+       app: kiam
+       component: "agent"
+       release: kiam
+   template:
+     metadata:
+       annotations:
+         secret.reloader.stakater.com/reload: kiam-agent-certificate-secret,kiam-ca-cert

Expected output should be something like

+ # Source: kiam/templates/agent-daemonset.yaml
+ apiVersion: apps/v1beta2
+ kind: DaemonSet
+ metadata:
+  annotations:
+    secret.reloader.stakater.com/reload: kiam-agent-certificate-secret,kiam-ca-cert
+   labels:
+     app: kiam
+     chart: kiam-2.5.2
+     component: "agent"
+     heritage: Tiller
+     release: kiam
+   name: kiam-agent
+ spec:
+   selector:
+     matchLabels:
+       app: kiam
+       component: "agent"
+       release: kiam
+   template:
+     metadata:
mumoshu avatar
mumoshu

hey!

unfortunately the kiam chart doesn’t seem to support annotations at the daemonset level

mumoshu avatar
mumoshu
helm/charts

Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.

mumoshu avatar
mumoshu

you’ll see there’s no templates set up for the daemonset annotations

mumoshu avatar
mumoshu

maybe worth a feature request to the kiam chart

Gourav avatar

Thanks for your input @mumoshu.

Gourav avatar

I was working on adding the annotations in helmfile. I think we can do something like this in the helmfile for kiam:

    hooks:
      # after each sync, annotate the agent daemonset so Reloader
      # restarts it when its mounted TLS secrets change
      - events: ["postsync"]
        command: "/bin/sh"
        args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet/{{`{{ .Release.Name }}`}}-agent secret.reloader.stakater.com/reload=kiam-agent-certificate-secret,kiam-ca-cert"]
      # do the same for the server daemonset and its secrets
      - events: ["postsync"]
        command: "/bin/sh"
        args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet/{{`{{ .Release.Name }}`}}-server secret.reloader.stakater.com/reload=kiam-server-certificate-secret,kiam-ca-cert"]

mumoshu avatar
mumoshu

yeah maybe..

mumoshu avatar
mumoshu

did it work?

Gourav avatar

yes.. it worked

mumoshu avatar
mumoshu

awesome!

mumoshu avatar
mumoshu

ideally helmfile should provide a way to patch some resources included in the release, without forking the original chart

Alex Siegman avatar
Alex Siegman

Opened up a PR for cloudposse’s nginx-ingress helmfile to add PROXY support, tested and works great in my staging cluster: https://github.com/cloudposse/helmfiles/pull/199

[nginx-ingress] Add support for PROXY protocol by asiegman · Pull Request #199 · cloudposse/helmfiles

what: [nginx-ingress] This adds configurability (via env NGINX_INGRESS_USE_PROXY_PROTOCOL) to use the PROXY protocol headers on requests. why: Allow better communication of things like the actual c…

dustinvb avatar
dustinvb

Is there a helmfile icon or logo somewhere? Preferably SVG.

mumoshu avatar
mumoshu

i’d love to but we don’t have one today

Cameron Boulton avatar
Cameron Boulton

Is there a way to treat an entire helmfile as an atomic release? I.e. roll back ALL releases in the helmfile if ANY fails?

mumoshu avatar
mumoshu

hey! currently, no, helmfile doesn’t have such a feature. probably worth a feature request?

mumoshu avatar
mumoshu

but why do you need that?

i think your releases are usually backward-compatible, and therefore you don’t need to roll back successful releases rolled out before the failed release

mumoshu avatar
mumoshu

or perhaps you want helmfile to roll back only the failed release?

Cameron Boulton avatar
Cameron Boulton

Ideally each release would be backwards compatible, yes, but we’re still maturing to that point as we break apart a single runtime/release into multiple ones

Cameron Boulton avatar
Cameron Boulton

And as it stands currently, we’d like to ensure the same version of the code is deployed at any given time, with a rollback if it is not.

Cameron Boulton avatar
Cameron Boulton

I’ll open a request. Appreciate the response @mumoshu

mumoshu avatar
mumoshu

that makes sense. thx for clarifying! im looking forward to the feature request

mumoshu avatar
mumoshu

just to be sure, what you want helmfile to do is basically running helm rollback $RELEASE_NAME $(helm history --output json $RELEASE_NAME | jq -r '.[].revision' | tail -n 2 | head -n 1) for all the affected releases in a failed Helmfile run?
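
Spelled out as a shell sketch (the release names are placeholders, and this assumes helm history prints a JSON array of revision objects as above):

#!/bin/sh
# roll each affected release back to its second-latest revision,
# assuming the latest revision was created by the failed helmfile run
for RELEASE_NAME in account-service kiam; do  # placeholder release names
  PREV=$(helm history --output json "$RELEASE_NAME" | jq -r '.[].revision' | tail -n 2 | head -n 1)
  helm rollback "$RELEASE_NAME" "$PREV"
done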

mumoshu avatar
mumoshu

@Cameron Boulton

Cameron Boulton avatar
Cameron Boulton

I think so? Ideally, the logic that’s already used when a release with atomic: true fails

Cameron Boulton avatar
Cameron Boulton

But instead for ALL releases if ANY release in the helmfile release array/list fails

Cameron Boulton avatar
Cameron Boulton

If that makes sense?

mumoshu avatar
mumoshu

jq -r '.[].revision' | tail -n 2 | head -n 1 is for obtaining the second-latest revision of the release, assuming the latest revision is the one created by the failed helmfile run

mumoshu avatar
mumoshu


But instead for ALL releases if ANY release in the helmfile release array/list fails

i’m still trying to understand. how is it different from “for all the affected releases in a failed Helmfile run”?

Cameron Boulton avatar
Cameron Boulton

It is not different from “for all the affected releases in a failed Helmfile run”

Cameron Boulton avatar
Cameron Boulton

But “for all the affected releases in a failed Helmfile run” is different from helmfile’s current behavior today, correct?

mumoshu avatar
mumoshu


Ideally the logic that’s already used when atomic:true for a given release fails

yeah that makes sense. implementation-wise, we can’t use the exact logic used by --atomic as it’s just a flag provided by helm itself

Cameron Boulton avatar
Cameron Boulton

Ah

mumoshu avatar
mumoshu


But for all the affected releases in a failed Helmfile run is different from current helmfile behavior today correct?

ah, so you’re talking about the current behavior of helmfile when a release has atomic: true set?

mumoshu avatar
mumoshu

if so, yes, it rolls back the failed release only

Cameron Boulton avatar
Cameron Boulton

Yes, we’re using that now

mumoshu avatar
mumoshu

gotcha

mumoshu avatar
mumoshu

then what we need might be

mumoshu avatar
mumoshu

a new flag like helmfile apply --rollback-on-failure instructs helmfile to (1) roll back the failed release if the release didn’t have atomic: true set and (2) roll back all the successful releases rolled out before the failed one with https://sweetops.slack.com/archives/CE5NGCB9Q/p1572565299026800?thread_ts=1572563989.024200&cid=CE5NGCB9Q

just to be sure, what you want helmfile to do is basically running helm rollback $RELEASE_NAME $(helm history --output json $RELEASE_NAME | jq -r '.[].revision' | tail -n 2 | head -n 1) for all the affected releases in a failed Helmfile run?

mumoshu avatar
mumoshu

i.e. helmfile doesn’t need to explicitly roll back the failed release if the release had atomic: true set

mumoshu avatar
mumoshu

as atomic: true results in helm upgrade --atomic, which would roll back the failed release automatically for you
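
For reference, a minimal sketch of the per-release setting being discussed (the release name and chart are placeholders):

releases:
  - name: redis            # placeholder release
    chart: stable/redis    # placeholder chart
    atomic: true           # passed through as helm upgrade --atomic, so helm
                           # itself rolls back this release if its upgrade fails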

mumoshu avatar
mumoshu

does this make sense?

Cameron Boulton avatar
Cameron Boulton

Yes, the actual failed release(s) themselves would already be rolled back if using atomic: true

Cameron Boulton avatar
Cameron Boulton

So helmfile really only needs to roll back any successful releases if any one failed

mumoshu avatar
mumoshu

absolutely!

mumoshu avatar
mumoshu

i thought we’d better name it helmfile apply --atomic but …

Cameron Boulton avatar
Cameron Boulton

How do you decide between a helmfile command-line flag and an option in the helmfile YAML?

Cameron Boulton avatar
Cameron Boulton

Are the latter for helm options only?

mumoshu avatar
mumoshu

generally any helmfile-run-wide operational option is available via flags only

Cameron Boulton avatar
Cameron Boulton

Gotcha

Cameron Boulton avatar
Cameron Boulton

helmfile apply --atomic seems great to me

mumoshu avatar
mumoshu

yeah but i think i have a few questions if we do so

mumoshu avatar
mumoshu

like

mumoshu avatar
mumoshu

should it imply atomic: true in all the releases?

Cameron Boulton avatar
Cameron Boulton

That seems more problematic to me, but still thinking

mumoshu avatar
mumoshu

or maybe we can just deprecate atomic: true in favor of helmfile apply --atomic?

Cameron Boulton avatar
Cameron Boulton

Mmm, but I think there are cases, such as today’s behavior, where people want PER-release atomicity, but not across ALL releases (the whole helmfile)

Cameron Boulton avatar
Cameron Boulton

Does that make sense?

mumoshu avatar
mumoshu

or evangelize using atomic: true rather than helmfile apply --atomic when you can ensure all the releases are backward compatible and you do need an automated rollback?

Cameron Boulton avatar
Cameron Boulton

Exactly

Cameron Boulton avatar
Cameron Boulton

Backwards compatible or maybe entirely unrelated

Cameron Boulton avatar
Cameron Boulton

Such as kafka and redis

Cameron Boulton avatar
Cameron Boulton

One might not care if kafka failed but redis succeeded

mumoshu avatar
mumoshu

makes sense

Cameron Boulton avatar
Cameron Boulton

Which is the behavior we have today that probably should not change

Cameron Boulton avatar
Cameron Boulton

apply --atomic would be a superset

mumoshu avatar
mumoshu

and helmfile apply --atomic makes atomic: true irrelevant, right?

Cameron Boulton avatar
Cameron Boulton

Seems like it does logically

Cameron Boulton avatar
Cameron Boulton

As in, if you used --atomic but omitted atomic: true, and > 0 releases failed, any successful ones would be rolled back

mumoshu avatar
mumoshu

as helmfile would roll back the failed release regardless of whether it had atomic: true or not anyway

Cameron Boulton avatar
Cameron Boulton

Right

Cameron Boulton avatar
Cameron Boulton

But I guess what it lets you do:

mumoshu avatar
mumoshu

exactly

Cameron Boulton avatar
Cameron Boulton

Have atomicity PER release (like today)

Cameron Boulton avatar
Cameron Boulton

And then optionally you COULD use --atomic on demand if/when it was needed

Cameron Boulton avatar
Cameron Boulton

And then not use --atomic but still keep the per-release atomic: true behavior in that case

Cameron Boulton avatar
Cameron Boulton

Does that make sense?

mumoshu avatar
mumoshu

yes i believe so

mumoshu avatar
mumoshu

so you basically use helmfile apply --atomic only when one of your new releases has known backward-incompatibility
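
Under that proposal, the day-to-day split might look like this (note that --atomic here is the flag proposed in this thread, not an existing helmfile flag):

# normal rollout: per-release atomicity only, via atomic: true in helmfile.yaml
helmfile apply

# rollout with a known backward-incompatible release: roll back ALL releases if ANY fails
helmfile apply --atomic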

Cameron Boulton avatar
Cameron Boulton

Exactly

Cameron Boulton avatar
Cameron Boulton

Which for some users might be always

Cameron Boulton avatar
Cameron Boulton

But at least it doesn’t force a change from today’s behavior on everyone else

mumoshu avatar
mumoshu

that’s great

Cameron Boulton avatar
Cameron Boulton

Really appreciate the conversation @mumoshu

Cameron Boulton avatar
Cameron Boulton

Would you still like me to open a GitHub issue/feature request @mumoshu?

mumoshu avatar
mumoshu

so am i! it was very inspiring. i’m awaiting your feature request (probably including a link to this slack thread would help to provide context to other users)

mumoshu avatar
mumoshu

yes, i’d appreciate it if you could do so!

mumoshu avatar
mumoshu

it’s a bit of an annoying task given we’ve already had a great conversation here and settled on something

Cameron Boulton avatar
Cameron Boulton

No problem. I understand the benefits of the formality for tracking, updates and visibility to other users of the project.

mumoshu avatar
mumoshu

i just want to ensure that helmfile looks like it’s being developed openly

Cameron Boulton avatar
Cameron Boulton

Exactly

mumoshu avatar
mumoshu

exactly!

mumoshu avatar
mumoshu

thx for the conversation and your understanding!

Cameron Boulton avatar
Cameron Boulton

Welcome and thank you
