#helmfile (2020-07)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-07-01
2020-07-02
hi all,
i am pretty new to helmfile and currently trying to migrate a huge helm2 umbrella chart to helmfile…
one of my problems is that for some reason the needs:
concept doesn’t work the way it’s described in the docs (or at least i got it wrong):
when i do a sync with this:
- name: loggingMaster
  chart: elastic/elasticsearch
  version: 7.3.0
  condition: loggingMaster.enabled
  <<: *default
- name: loggingData
  chart: elastic/elasticsearch
  version: 7.3.0
  condition: loggingData.enabled
  <<: *default
- name: misc-es
  chart: ./charts/misc-es
  version: 0.1.0
  verify: false
  condition: loggingMaster.enabled
  needs:
    - loggingMaster
i get the following error:
in ./helmfile.yaml: "clusterinfra/misc-es" depends on nonexistent release "loggingMaster"
any idea why?
I’m the last person to give advice, but I noticed you have a condition there:
condition: loggingMaster.enabled
is it enabled?
yep, it is
what’s the output of
helmfile -l name=loggingMaster list
?
NAME NAMESPACE ENABLED LABELS
loggingMaster clusterinfra true name:loggingMaster,namespace:clusterinfra,chart:elasticsearch
it’s indeed enabled.
i wouldn’t lie
I see release order is reversed in the example
it starts with releases with deps and it ends with releases w/o deps
I’m not sure, but maybe order matters in case of dependencies
o_0 i would never have thought about this, but let me try to change the order…
nope, didn’t help… any more ideas?
the weird thing is that helmfile apply
renders the templates of all the releases correctly, but still throws the error message…
I don’t have issues with similar helmfile
releases:
  - name: loggingMaster
    chart: elastic/elasticsearch
    version: 7.3.0
  - name: loggingData
    chart: elastic/elasticsearch
    version: 7.3.0
  - name: misc-es
    chart: stable/nginx-ingress
    verify: false
    needs:
      - loggingMaster
but if you change
chart: stable/nginx-ingress
to something that is on the same filesystem?
nothing changes.
that’s odd…
Is there a way I can prevent creation of new config maps every time I deploy through helm?
sure let me try that
2020-07-03
2020-07-05
Hi guys, i’m having trouble getting readFile working with requiredEnv templating.
I currently have dev.yaml
environments:
  dev:
    values:
      - HelmRepo: custom-staging

{{ readFile "./helmfile-cs-base.yaml" }}
and helmfile-cs-base.yaml
<snip>
repositories:
  - name: custom
    url: https://chartmuseum.internal.ourdomain.com/master
    certFile: {{ requiredEnv CHARTMUSEUM_CERT_FILE }}
    keyFile: {{ requiredEnv CHARTMUSEUM_KEY_FILE }}
  - name: custom-staging
    url: https://chartmuseum.internal.ourdomain.com/staging
    certFile: {{ requiredEnv CHARTMUSEUM_CERT_FILE }}
    keyFile: {{ requiredEnv CHARTMUSEUM_KEY_FILE }}
<snip>
And when running it i get
err: failed to read helmfile-cs.yaml: reading document at index 1: yaml: unmarshal errors:
line 32: cannot unmarshal !!map into string
line 33: cannot unmarshal !!map into string
line 36: cannot unmarshal !!map into string
line 37: cannot unmarshal !!map into string
in ./helmfile-cs.yaml: failed to read helmfile-cs.yaml: reading document at index 1: yaml: unmarshal errors:
line 32: cannot unmarshal !!map into string
line 33: cannot unmarshal !!map into string
line 36: cannot unmarshal !!map into string
line 37: cannot unmarshal !!map into string
which corresponds to the repositories chunk.
i’m running helmfile version v0.119.1
Has anyone got any ideas?
Oh i figured it out. For anyone else stuck: using bases:
instead of readFile
makes it work. so my dev.yaml looks like
environments:
  dev:
    values:
      - HelmRepo: custom-staging
      - Namespace: default
      - ImageTag: latest-staging
      - CustomerTag: dev

bases:
  - ./helmfile-cs-base.yaml
I had to fix up requiredEnv to have quotes around the env var, but otherwise it works as expected.
Using bases is the better solution, but the fix to the previous error is probably to rename the file dev.yaml
to dev.yaml.gotmpl
so it’s interpreted as a go template.
ah ok good to know. thanks
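For reference, the quoting fix mentioned above would make the repository entries in helmfile-cs-base.yaml look roughly like this (URLs as in the earlier snippet):
repositories:
  - name: custom
    url: https://chartmuseum.internal.ourdomain.com/master
    # the env var name has to be passed to requiredEnv as a quoted string
    certFile: {{ requiredEnv "CHARTMUSEUM_CERT_FILE" }}
    keyFile: {{ requiredEnv "CHARTMUSEUM_KEY_FILE" }}
  - name: custom-staging
    url: https://chartmuseum.internal.ourdomain.com/staging
    certFile: {{ requiredEnv "CHARTMUSEUM_CERT_FILE" }}
    keyFile: {{ requiredEnv "CHARTMUSEUM_KEY_FILE" }}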
2020-07-06
2020-07-08
Hey All, I’m having a really confusing problem that someone better than me can maybe easily pinpoint. I have a straightforward helmfile. I pull in a values file for the environment, try to use a top-level variable, and I immediately get an error. helmfile:
environments:
  preprod:
    values:
      - vars/helmfile/realms/preprod.yaml

releases:
{{- $realm := .Values.realm -}}
...
values-file:
$ cat vars/helmfile/realms/preprod.yaml
realm: preprod
Error:
$ helmfile -e preprod -f helmfile-node-local-dns.yaml lint
in ./helmfile-node-local-dns.yaml: error during helmfile-node-local-dns.yaml.part.0 parsing: template: stringTemplate:18:21: executing "stringTemplate" at <.Values.realm>: map has no entry for key "realm"
Filename extensions matter. Can you update your examples?
Using bases is the better solution, but the fix to the previous error is probably to rename the file dev.yaml
to dev.yaml.gotmpl
so it’s interpreted as a go template.
@Erik Osterman (Cloud Posse) Thank you for the response! I’m a bit confused. The only go templating is happening inside the helmfile, which I didn’t think required .gotmpl
extension?
The environment values file include is just straight hard-coded text yaml
Am I simply mistaken? Do I need the values include to be interpreted as a go template regardless?
Looking at https://github.com/roboll/helmfile#templates , it seems like I’m following directions?
Aha yes you’re right based on your examples
https://github.com/roboll/helmfile/issues/1204 seems to have worked, although i’m still entirely confused. I also have other helmfile setups in this repo that work with the way I was doing it…
this has now presented more challenges
Can anyone see what I’m doing wrong?
Hi guys. I’m trying to figure out how to override the environment values in a helmfile from the CLI and i’m not sure how to go about it, since I think the CLI-provided values --state-values-set ImageTag=latest-master
are being overridden by the environment values specified in my helmfile.
i’m running helmfile -f dev.yaml --debug -e flaky-smalt-cat-aujzyfqd0cc --interactive --state-values-set ImageTag=latest-master apply --context=3
Is there any way to override what ImageTag is in dev.yaml from the CLI? dev.yaml
environments:
  flaky-smalt-cat-aujzyfqd0cc:
    values:
      - HelmRepo: company-staging
      - Namespace: flaky-smalt-cat-aujzyfqd0cc
      - ImageTag: latest-staging
      - CustomerTag: flaky-smalt-cat-aujzyfqd0cc
      - Monolith: true
      - Domain: company.dev

bases:
  - ./namespace.yaml
namespace.yaml
Ah.. i figured it out.
by adding ---
to dev.yaml
before bases it somehow works now. following https://github.com/roboll/helmfile/issues/1204
This looks like a bug to me. Given this hemlfile.yaml: environments: default: values: - defaults.yaml repositories: # Kubernetes incubator repo of helm charts - name: "kubernetes-incubator&quo…
environments:
  default:
    values:
      - HelmRepo: company
      - Namespace: default
      - ImageTag: latest
      - CustomerTag: default-customer-tag

templates:
  chartnamespace: &chartnamespace
    namespace: "{{`{{ .Environment.Values.Namespace }}`}}"
    # missingFileHandler: Warn
  set: &set
    setTemplate:
      - name: image_tag
        value: "{{`{{ .Environment.Values.ImageTag }}`}}"
      - name: customerTag
        value: "{{`{{ .Environment.Values.CustomerTag }}`}}"
      - name: namespace
        value: "{{`{{ .Environment.Values.Namespace }}`}}"
      - name: baseHost
        value: "{{`.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
  cs-ui-default: &cs-ui-default
    <<: *chartnamespace
    setTemplate:
      - name: image_tag
        value: "{{`{{ .Environment.Values.ImageTag }}`}}"
      - name: customerTag
        value: "{{`{{ .Environment.Values.CustomerTag }}`}}"
      - name: namespace
        value: "{{`{{ .Environment.Values.Namespace }}`}}"
      - name: config.server_name
        value: "{{`{{ .Environment.Values.CustomerTag }}.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
      - name: baseHost
        value: "{{`.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
  default: &default
    <<: *set
    <<: *chartnamespace
  cs-engine: &cs-engine
    name: cs-engine{{`{{ if ne .Environment.Values.Namespace "default" }}-{{ .Environment.Values.Namespace }}{{ end }}`}}
    chart: "{{`{{ .Environment.Values.HelmRepo }}`}}/cs-engine"
    <<: *chartnamespace
    setTemplate:
      - name: image_tag
        value: "{{`{{ .Environment.Values.ImageTag }}`}}"
      - name: customerTag
        value: "{{`{{ .Environment.Values.CustomerTag }}`}}"
      - name: namespace
        value: "{{`{{ .Environment.Values.Namespace }}`}}"
      - name: baseHost
        value: "{{`{{ .Environment.Values.CustomerTag }}.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
      - name: config.server_name
        value: "{{`{{ .Environment.Values.CustomerTag }}.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
      - name: monolith.enabled
        value: "{{`{{ .Environment.Values | get \"Monolith\" \"false\" }}`}}"
      - name: baseHost
        value: "{{`.{{ .Environment.Values | get \"Domain\" \"companycloud.com\" }}`}}"
  cs-ui: &cs-ui
    name: cs-ui{{`{{ if ne .Environment.Values.Namespace "default" }}-{{ .Environment.Values.Namespace }}{{ end }}`}}
    chart: "{{`{{ .Environment.Values.HelmRepo }}`}}/cs-ui"
    <<: *cs-ui-default
  cs-api: &cs-api
    name: cs-api{{`{{ if ne .Environment.Values.Namespace "default" }}-{{ .Environment.Values.Namespace }}{{ end }}`}}
    chart: "{{`{{ .Environment.Values.HelmRepo }}`}}/cs-api"
    <<: *default
  cs-database: &cs-database
    name: cs-database{{`{{ if ne .Environment.Values.Namespace "default" }}-{{ .Environment.Values.Namespace }}{{ end }}`}}
    chart: "{{`{{ .Environment.Values.HelmRepo }}`}}/cs-database"
    <<: *default

repositories:
  - name: company
    url: https://chartmuseum.companycloud.com/master
    certFile: {{ requiredEnv "CHARTMUSEUM_CERT_FILE" }}
    keyFile: {{ requiredEnv "CHARTMUSEUM_KEY_FILE" }}
  - name: company-staging
    url: https://chartmuseum.companycloud.com/staging
    certFile: {{ requiredEnv "CHARTMUSEUM_CERT_FILE" }}
    keyFile: {{ requiredEnv "CHARTMUSEUM_KEY_FILE" }}

helmDefaults:
  timeout: 600
  recreatePods: true
  atomic: true
  force: true

releases:
  - <<: *cs-engine
  - <<: *cs-ui
  - <<: *cs-api
  - <<: *cs-database
2020-07-11
guys? how are you installing cert-manager? this https://github.com/cloudposse/helmfiles/blob/master/releases/cert-manager.yaml will not remove CRDs in the helmfile delete phase.
it will probably not work because of the diff of CRDs; these are not present in the initial install..
i took @Vadim Bauer’s example:
repositories:
  - name: jetstack
    url: https://charts.jetstack.io
  - name: incubator
    url: https://kubernetes-charts-incubator.storage.googleapis.com

releases:
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
    version: v0.15.2
    atomic: true
    cleanupOnFail: true
    hooks:
      - events: ["PostSync"]
        command: "/bin/sh"
        args: ["-c", "kubectl label namespace \"{{`{{ .Release.Namespace }}`}}\" certmanager.k8s.io/disable-validation=true --overwrite"]
    values:
      - installCRDs: true
  - name: letsencrypt-cert-issuer
    namespace: cert-manager
    chart: incubator/raw
    atomic: true
    cleanupOnFail: true
    force: true
    needs:
      - cert-manager/cert-manager
    values:
      - resources:
          - apiVersion: cert-manager.io/v1alpha3
            kind: ClusterIssuer
            metadata:
              name: letsencrypt-staging
            spec:
              acme:
                server: https://acme-staging-v02.api.letsencrypt.org/directory
                email: [email protected]
                privateKeySecretRef:
                  name: letsencrypt-staging
, but still getting:
STDERR:
Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha3"
Error: plugin "diff" exited with error
• k3s version v1.18.4
• helmfile version v0.119.0
• helm version v3.2.4
• helm diff plugin version 3.1.1
Perhaps try setting helmfile concurrency to 1. Alternatively, you can make the cert-issuer creation depend on the cert-manager install. An example of how I do so can be found here: https://github.com/zloeber/KubeStitch/blob/master/helmfiles/helmfile.certmanager.yaml
this is a problem https://github.com/databus23/helm-diff/issues/183
helm-diff version: 3.1.0 helm version: 3.1.0 helmfile version: 0.94.1 or 0.100.0 Hi, we have problems installing prometheus-operator with helmfile. We get these error messages: failed processing re…
That diff issue may exist but the error you are showing clearly indicates something being applied out of order (the ClusterIssuer attempting to be created before the CRDs exist).
I just took a look at our latest stuff (not yet available publicly) and I see that @Jeremy G (Cloud Posse) has documented this:
We have a target like this in our Makefile:
NAMESPACE = cert-manager
RELEASE_VERSION = $(shell yq -j r defaults.yaml | jq -r .chart_version)

%.crds: %.kubecfg
	kubectl get namespace $(NAMESPACE) >/dev/null 2>&1 || kubectl create namespace $(NAMESPACE)
	kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/$(RELEASE_VERSION)/cert-manager.crds.yaml

%.delete-crds: %.kubecfg
	kubectl delete crd certificaterequests.cert-manager.io
	kubectl delete crd certificates.cert-manager.io
	kubectl delete crd challenges.acme.cert-manager.io
	kubectl delete crd clusterissuers.cert-manager.io
	kubectl delete crd issuers.cert-manager.io
	kubectl delete crd orders.acme.cert-manager.io
Thanks. I just wrote a small proposal https://github.com/roboll/helmfile/issues/802#issuecomment-659938624 , not sure if it’s possible tho
@Erik Osterman (Cloud Posse) Are you using ArgoCD for that deployment? Or is this just for manual bootstrapping?
This particular one runs under a Jenkins build.
Btw, see this? https://github.com/jetstack/cert-manager/pull/2775/files
What this PR does / why we need it: In order to support eventually building a 'helm-operator' based OLM plugin (for deploying cert-manager on OpenShift clusters), this PR adds an optional i…
Haven’t explored it further.
It came up in this issue.
Look at:
• https://github.com/roboll/helmfile/pull/1375
• https://github.com/roboll/helmfile/pull/1374
• https://github.com/roboll/helmfile/pull/1373 seems it’s solved .. I will test it tomorrow
@johncblandii @Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) check out #1375 - could come in handy
Oh man!! and now native support for fetching helm charts from git
#1374
There are also some fixes in the helmfile terraform provider… The whole ecosystem seems to be getting closer and closer to no. 1. The last missing piece is a tool for version bumping of helmfiles..
• https://github.com/variantdev/mod does not edit previously created PRs/MRs
• https://docs.renovatebot.com/modules/manager/helmfile/ does not handle go templating = version must be hardcoded ( .. and version constraints may not work )
• https://docs.renovatebot.com/modules/manager/regex/ may work, if values.yaml lives alongside the helmfiles
2020-07-12
2020-07-13
I have a repo for which I need to deploy multiple releases: one is the main app, the other is a worker. I want them as separate releases as the worker should be able to scale independently of the main app. I have this part working great with helmfile. The worker listens to an SQS queue and processes the messages as they come in.
In stage I create an environment every time a new branch is deployed, and tear it down when that branch is merged. I’m trying to figure out the best way to specify when the worker should be deployed. Ideally we’d control that via an aws ssm parameter as that’s how we manage our env vars. I was hoping I could do something like:
releases:
  - name: worker
    installed: secretref+awsssm://V1/{{ .Values.repo }}/{{ .Environment.Name }}/DEPLOY_WORKER?region=us-west-1
But that doesn’t work. Any ideas on how I could handle this? I’d like to avoid having a separate SQS queue per branch to keep down the number of things that have to be created/torn down.
Did you try installedTemplate
instead of installed
?
it looks like you need to use ref+awsssm:
instead of secretref+awsssm:
Ooh, I didn’t try installedTemplate
, I’ll give that a shot.
Thanks
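For anyone landing here later, a minimal sketch of the installedTemplate suggestion (the chart name is a placeholder, and whether a vals ref+awsssm URI resolves inside installedTemplate is untested):
releases:
  - name: worker
    chart: company/worker          # placeholder chart
    # installedTemplate is rendered as a template, unlike installed,
    # so the flag can be computed per environment or from an env var
    installedTemplate: '{{ env "DEPLOY_WORKER" | default "false" }}'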
2020-07-14
2020-07-15
Been trying to access secrets from GCP Secret manager the following way in my values file:
password: ref+gcpsecrets://project/secret-name?version=1
And I’m using a service account with which I can access the secret manually. Despite this, I get an error about accessing the secret:
failed to render values files "config/values.yaml": expand gcpsecrets://project/secret-name?version=1: failed to get secret: rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
It seems the service account permissions are bad, but they work when accessing manually:
bash-4.3# gcloud auth activate-service-account --key-file=keyfile.json
Activated service account credentials for: [<SERVICE_ACCOUNT>]
bash-4.3# gcloud secrets versions access 1 --secret="secret-name"
Does anyone have any pointers?
I think you need GCP_PROJECT
pointed to a specific gcp project
https://github.com/variantdev/vals/pull/20#issuecomment-613781471
Hi - I've got a working implementation for GCP's Secret Manager here. Testing is not complete yet, however. I'm having a lot of trouble understanding the interface to Load(). Is ther…
I solved it by adding the GOOGLE_APPLICATION_CREDENTIALS
environment variable when running. I was running it through a docker container, and simply had to pass it through using the -e
flag
Great! Thanks for sharing your insight
2020-07-16
How can I override environment values in helmfile? For example:
environments:
  default:
    missingFileHandler: Warn
    values:
      - external_dns:
          identity:
            tenant_id: {{ requiredEnv "EXTERNAL_DNS_IDENTITY_TENANT_ID" }}
            subscription_id: {{ requiredEnv "EXTERNAL_DNS_IDENTITY_SUBSCRIPTION_ID" }}
            resource_group: {{ requiredEnv "EXTERNAL_DNS_IDENTITY_RESOURCE_GROUP" }}
            name: {{ requiredEnv "EXTERNAL_DNS_IDENTITY_NAME" }}
            id: {{ requiredEnv "EXTERNAL_DNS_IDENTITY_ID" }}
      - values.external-dns.yaml
It should fail if values.external-dns.yaml is not present/populated && env vars are not supplied.
I’m not sure I’m following what you are asking. Does values.external-dns.yaml have those same parameters in it, and you want the lines above it to be optional overrides?
values.external-dns.yaml should be optional, but parameters there should take precedence over the parameters the helmfile pulls from env vars.
Priority:
- values file (optional)
- env vars from helmfile
- fail
@roth.andy I need to load environment values via a file (static deployment) or env vars (fallback, for terraform)
Can you reverse it? Standard convention is for environment variables to override file configuration.
If you can’t, I would change the values file to a .env file, and use this
That way, it is an environment variable either way, but you get to control which value for it gets set last
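One possible sketch of the “file wins when present, env vars otherwise” ordering, relying on later entries in the environment values list overriding earlier ones (only one field shown; missingFileHandler: Warn keeps a missing file from failing):
environments:
  default:
    missingFileHandler: Warn
    values:
      # baseline from env vars (env instead of requiredEnv so a present values
      # file alone is enough; add your own validation for the "neither supplied" case)
      - external_dns:
          identity:
            tenant_id: {{ env "EXTERNAL_DNS_IDENTITY_TENANT_ID" }}
      # the optional file, listed last, overrides the env-derived values when present
      - values.external-dns.yaml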
We are looking at storing our helm charts in an Azure ACR. Per https://docs.microsoft.com/en-us/azure/container-registry/container-registry-helm-repos the recommendation is to use helm’s experimental OCI support to publish the charts to the registry. Does anyone know if helmfile supports pulling charts from an OCI repository via helm’s experimental support?
Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry
I have started almost exclusively using the helm-git plugin with helmfile to just point at a git repo. The plugin packages it up for you
Works great
are you pointing at specific commits?
you can point at any ref, including branches, tags, or specific commits
i’ll take a look at it
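For reference, a sketch of what the helm-git approach can look like in a helmfile (repo URL, chart path and ref are placeholders; the git+https URL format is described in the plugin’s README):
repositories:
  # helm-git serves the chart straight out of a git repo at the given path and ref
  - name: my-charts
    url: git+https://github.com/example/charts@charts/my-app?ref=v1.2.3

releases:
  - name: my-app
    chart: my-charts/my-app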
Hi guys, I have a short question; any suggestions would be really helpful.
---
bases:
  - helmfile-first.yaml
  - helmfile-second.yaml
I would like to conditionally execute helmfile-second.yaml
without using if/else. is there an easy way I can do it?
Another thing that could work for me as well: how do I skip specific releases in an install? The installed
flag didn’t help. I am sure I can get very nice ideas here; there must be a way without if/else
What’s wrong with ifs besides that readability suffers? What kind of condition do you have?
nothing is wrong with if, but I have like 15 environments that are going to be skipped from this, so it feels too bad and I am looking for something like an enable/disable option for the entire helmfile
so I have a values.yaml.gotmpl for my app; in that values file I have a variable with a binary output, and I want to use that variable in one of the helmfiles to decide whether or not to import helmfile-second
my end goal is to avoid deploying any of the releases mentioned in the second helmfile.
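One hedged sketch of the enable/disable idea, using per-release condition keys driven by environment values instead of if/else blocks (environment names, chart and flag are made up):
environments:
  default:
    values:
      - second:
          enabled: false
  prod:
    values:
      - second:
          enabled: true

releases:
  - name: something-from-the-second-helmfile
    chart: repo/chart        # placeholder
    # the release is skipped in every environment where second.enabled is false
    condition: second.enabled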
Guys? How should I version dynamic variables used in k8s deployments created with terraform? Should I commit these back into git and then use helmfile with these values? (these are not secrets, just dynamic vars, not predictable)
helmfile can use remote variables, e.g.
Superb, but still… I’d need to commit these variables to a git repository from terraform in yaml format
it looks like helmfile may pull values from almost anywhere, so the question is what’s best to use on the terraform side to export the generated values
Hi everyone! I have a question: in the case of nested helmfiles, do child helmfiles inherit environment values from the parent helmfile? I’m not able to get the values in the child helmfile. Below is my master helmfile; I’m trying to access values from imported-values.yaml
but it’s getting values only from folder/values.yaml
environments:
  default:
    values:
      - "imported-values.yaml"

helmfiles:
  - path: folder/helmfile.yaml
    values:
      - "folder/values.yaml"
No, subhelmfiles don’t inherit anything. Environments: should be defined in every subhelmfile.
@Andrew Nazarov I can’t add these environment values in the base of the subhelmfile, as those can be installed separately or through the parent one. Any idea on how to proceed?
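One common pattern (a sketch, not from this thread; file names are hypothetical) is to keep the shared environments block in its own file and pull it into each sub-helmfile via bases, so the sub-helmfile works both standalone and when called from the parent:
# folder/helmfile.yaml
bases:
  - ../environments.yaml   # contains only the shared environments: block
---
releases:
  - name: my-app
    chart: repo/my-app     # placeholder
    values:
      - values.yaml.gotmpl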
2020-07-17
I’m kinda new to helmfile
and haven’t yet figured everything out.
I’d like to use an environment variable inside the default
function
tags: '{{ env "KUBE_PROMETHEUS_ALERT_MANAGER_OPSGENIE_TAGS" | default ` {{ range .CommonLabels.SortedPairs }}{{ .Name }}:{{ .Value }},{{ end }}` {{ env "STAGE" | default "N/A" }} }}'
For such syntax :point_up: I get:
template: stringTemplate:306: unexpected "{" in operand
The tricky part is that I want to concatenate the string {{ range .CommonLabels.SortedPairs }}{{ .Name }}:{{ .Value }},{{ end }}
to another tag which is in an environment variable. How do I do that?
Got it. Just added it after first interpolation:
tags: '{{ env "KUBE_PROMETHEUS_ALERT_MANAGER_OPSGENIE_TAGS" | default ` {{ range .CommonLabels.SortedPairs }}{{ .Name }}:{{ .Value }},{{ end }}`}}{{ env "STAGE" | default "N/A" }}'
Hi, I get the following error:
STDERR:
Error: no cached repo found. (try 'helm repo update'). error converting YAML to JSON: json: unsupported value: +Inf
Error: plugin "diff" exited with error
COMBINED OUTPUT:
[debug] Created tunnel using local port: '44995'
[debug] SERVER: "127.0.0.1:44995"
Error: no cached repo found. (try 'helm repo update'). error converting YAML to JSON: json: unsupported value: +Inf
Error: plugin "diff" exited with error
if I do a “normal” helm diff upgrade it works without complaining. helmfile template also works fine, but I can’t get helmfile diff to work
Has anyone stumbled into a similar problem? I found this link, but it doesn’t give that much information: https://github.com/helm/helm/issues/2909
helm install –dry-run –debug nicely dumps the templated YAML when there's a parse error (see #1546). However, helm upgrade –install –dry-run –debug does not do this, so I just see this fai…
Are you using helm-tiller?
Would there be any chance that you’re using helm2 when you run helm diff upgrade
but helm3 for helmfile, or vice-versa?
2020-07-18
2020-07-20
@Erik Osterman (Cloud Posse) (and others here) how are y’all doing Helmfile deployments via CI/CD, promoting per environment with per-environment config? AFAIR CP bundled a version of their helmfiles image with a per-env container and slurped config values out of SSM with chamber, but that doesn’t look to be the case anymore?
So we currently use a combination of remote helmfiles (for common dependencies like database), with local helmfiles (that define how the app should be deployed), with environments (not environment variables).
We typically have one environment called preview
that is used for “preview environments” (or what we’ve called unlimited staging)
And then another environment for each specific stage.
For secrets, we’re still largely using SSM, but we’re using the built-in support for SSM in helmfile rather than chamber.
(these patterns are not available in cloudposse/helmfiles
, but in the current solutions we’re delivering to customers and haven’t yet open sourced)
So how do you manage promoting a version bump or config change through each stage?
Image tags are passed by environment variable. Example release pipeline here: https://github.com/cloudposse/example-app/blob/master/codefresh/release.yaml
Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app
Releases are triggered in github by cutting a release.
preview environments always deploy based on commit sha (when deploy
label present)
master environment always updated on merge to master. master != production.
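For illustration, a minimal sketch of the “image tags are passed by environment variable” part (chart name and value path are placeholders, not taken from the linked pipeline):
releases:
  - name: example-app
    chart: company/example-app     # placeholder
    values:
      - image:
          # the pipeline exports IMAGE_TAG (e.g. the commit sha) before running helmfile
          tag: '{{ env "IMAGE_TAG" | default "latest" }}'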
Hi folks, one more question: it seems that I am facing issues using {{ .Environment.Values.<> }} in my releases:
inside helmfile.yaml.
my structure is like this
to give more context, we do helmfile apply -f helmfile.yaml
and inside helmfile.yaml we are calling helmfile-common.yaml
; this common file has all the environments and their values files defined.
Now I have some variables defined inside those environments that I need to use inside the helmfile, but it seems {{ .Environment.Values.<> }}
can’t be used in helmfile releases
. Any options anyone can suggest?
2020-07-21
More love for delete hooks support https://github.com/roboll/helmfile/issues/802#issuecomment-659938624 ? ( crd post delete )
I don’t see the hooks being called when I destroy a chart. I need it to call them though…can it be enabled?
Is anyone here using terraform with helmfile ?
I can’t figure out how to send environment variables to existing helmfile files
Hey. I just realized that you’ve moved this discussion to #terraform
For anyone interested, see https://sweetops.slack.com/archives/CB6GHNLG0/p1595351319054400
Hi, has anyone here used terraform with helmfile together?
helmfile sync --concurrency <x>
seems to only run one at a time no matter what concurrency i set. is this a known issue?
i’m wondering if it’s due to the fact that i have an umbrella helmfile of many nested helmfiles, which we use labels to deploy selectively.
I think you’ve found this one already, but for anyone interested, see: https://github.com/roboll/helmfile/issues/591
Helmfile should support installing multiple releases at the same time rather than doing them in serial. In addition, there should be a way to set an order of dependencies, in case one install is de…
2020-07-22
Ohoy! I’ve just started using helmfile, and I’m a bit lost about secrets management.
Is it mandatory to have the helm-secrets
plugin installed, even if my secrets are not encrypted? The secret I’m trying to include is not very secret, so I have no reason to add the complexity that comes with SOPS…
If your secret is not a secret you can just treat it as a value and not worry about protecting it. SOPS / helm-secrets is all about protecting the secret so no one sees the plaintext
Alright, cool. How do I go about applying it?
applying what exactly?
My Secrets
object definition. I’ve got a file my-secret.yaml
and would normally run kubectl apply -f my-secret.yaml
since it’s a secret that lives outside the chart.
To be specific: I want to apply a loadDefinition
to the bitnami/rabbitmq
chart. That chart will read the definition from a secret, so I have to create the secret before the chart is deployed.
ah - there’s a few options I think:
1. do what you have there in a helmfile presync/postsync hook
2. define a release using the raw chart (https://github.com/helm/charts/tree/master/incubator/raw) that applies your manifest
3. leverage the helm-x support (https://github.com/roboll/helmfile/pull/673) to define a dependency using the raw chart
I would say #3 is cleanest because it keeps all your stuff bundled as a single release, however it is sorta the most complex
#1 is dirty and I would likely avoid it, as your manifest is not managed by helm in this case and so you are on the hook (no pun intended) for lifecycle management (ie deleting your secret at some point)
#2 is alright - it does end up creating an extra release just to manage your extra object
This enhances helmfile so that it can: Treat K8s manifests directories and Kustomize projects as charts Add adhoc chart dependencies on sync/diff/template without forking or modifying chart(s) (#6…
Awesome, I’ll look into this. Thanks a lot!
no problem
Hm… Could maybe a library chart be a viable alternative too?
a library chart?
https://helm.sh/docs/topics/library_charts/
It was introduced in Helm 3. It lets you deploy resources that are shared among many charts.
Explains library charts and examples of usage
Specifically, it does not create a release.
At first I thought that was what you meant with 2, but then I realized it wasn’t it.
that may work - i’m not too familiar with them nor how they will interact with helmfile
If I try it and it is a good solution, I’ll report back!
sounds good!
Nah, I think I’m in the wrong here. Library charts are only for providing template definitions that are used in charts, not setting up resources that other releases may use.
I feel like I’m really missing something here. I decided to try out helm-secrets
after all, but still didn’t get any secrets installed…
I’ve got the file in environments/default/load-definitions.yaml
and my helmfile.yaml looks like this (using nginx just because it’s a bit smaller and all I care about in this MWE is to see my secret in kubectl get secrets
):
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: www
    namespace: default
    chart: bitnami/nginx

environments:
  default:
    secrets:
      - environments/default/load-definitions.yaml
The output of helmfile apply
begins with:
Decrypting secret /home/andreas/tmp/rmq/hf/environments/default/load-definitions.yaml
Decrypting /home/andreas/tmp/rmq/hf/environments/default/load-definitions.yaml
Not encrypted: /home/andreas/tmp/rmq/hf/environments/default/load-definitions.yaml
which sounds promising, but then… nothing.
helm-secrets
isn’t about creating kubernetes secret objects - its for encrypting sensitive values for storage in source control
And what’s environment.default.secrets
in the helmfile for?
those are references to yaml files that helm-secrets
will process - they are expected to be encrypted, and then helmfile will invoke helm-secrets to decrypt them at deployment time
But they have nothing to do with Kubernetes secrets?
correct
Ooooooh.
more recent versions also support variantdev/vals
for secret management, actually pulling sensitive values from a vault at deployment time: https://github.com/roboll/helmfile/blob/master/docs/remote-secrets.md
this basically accomplishes the same thing as helm-secrets
except you don’t have to encrypt and check in your secret values; you store them in a supported vault
@voron It can be something different than “supported”. It’s now integrated into Helmfile so you can leverage helm-x features without the additional helm plugin
oh, got it, thanks.
Should someone update readme of helm-x to specify it ?
Probably? (Is anyone using it without helmfile..?
1 of 3 last helm-x bugs doesn’t mention helmfile, just my 2¢
thx! will take a look once i have some time
thank you for your time
Here’s a pattern we’ve been using to create k8s secrets from SSM parameters:
- kind: Secret
  apiVersion: v1
  metadata:
    name: common-secrets
  stringData:
    DB_PASSWORD: ref+awsssm://myapp/database/password?region={{ $.Values.region }}
Use this with something like the raw
chart previously suggested.
Instead of ref+awsssm://
, you can use any backend supported by vals
Helm-like configuration values loader with support for various sources - variantdev/vals
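Put together, that snippet wired into a raw-chart release might look roughly like this (release name and namespace are placeholders):
releases:
  - name: common-secrets
    namespace: default
    chart: incubator/raw
    values:
      - resources:
          - kind: Secret
            apiVersion: v1
            metadata:
              name: common-secrets
            stringData:
              DB_PASSWORD: ref+awsssm://myapp/database/password?region={{ $.Values.region }}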
Hello! Is it possible to skip the diff
stage as part of the apply
command? Use case: overriding changes which were made manually (helm-diff doesn’t show any diff in this case). So i’m looking for a way to force-execute helm upgrade --install
. Thanks
helmfile sync
it is a bit different (sync syncs all resources from the state file: repos, releases and chart deps)
ok, thanks. I will try
Hey! I was wondering if there was a way to preview the template rendering of helmfile before actually applying? We often have templating issues that are hard to troubleshoot if we have no way to see what manifests were actually rendered by helmfile.
helmfile template
does that
Excellent, thanks a lot @bradym!
@Adrian Todorov Just FYI!
Also check out helmfile build
- it’s like helmfile template
but for your helmfile instead of your templates.
Oh OK, that’s probably more what we need indeed!
Thanks @bradym, helmfile template
is doing exactly what we needed (be able to preview rendered templates). Thanks a lot for your quick response!
Awesome, glad it’s working for you
@bradym @Mathieu Frenette @Adrian Todorov I just read the usage of helmfile template
per above. But I am sort of unsure how the output chart gets applied after the rendering is successful, i.e. once you’ve got the directory with the rendered files, how do you do helmfile apply
on that directory? what we have had so far was helmfile lint
--> helmfile diff
--> helmfile apply
, this worked well, but we recently started to generalize our env and wanted to first render all charts and then apply. May I know what your workflow/lifecycle is in this case?
I guess it depends on what you’re trying to do. I only use helmfile template
when I’m developing/debugging a chart.
I’m not sure what you mean by generalizing your env, what do you see as the benefit of running helmfile template
before helmfile apply
?
so we have a structure like this where we were running the workflow I specified above: lint --> diff --> apply
. Now we kind of squeezed the whole bunch of these env files into 3 very generic (*.gotmpl) files and we are rendering them based on inputs. To render them I used helmfile template
and that spit out the charts perfectly into the directory. But now I want to use those rendered ones to apply to my cluster.
but it seems helmfile apply or sync
can’t be applied with the rendered chart files
does that make sense or make it more clear @bradym?
Sorta? helmfile template
shouldn’t need to be used as part of a deploy pipeline.
When you run helmfile apply / helmfie sync
the charts are rendered and then applied to your cluster as needed. The rendered charts just don’t stick around.
I did temporarily have a setup where I would do helmfile render | kubectl apply -f -
because I was running into issues with the version of helm I was using (pre 3.2.0) - but now I just run helmfile apply
hmm, yes I also thought the same because at the end of the day even if I reduce all those env files and folders into 3 files (which is what I am doing), those 3 files in the end are still gotmpl
files, so the use of helmfile apply
should still work the same way as before?
Yep
on our side, we only use helmfile template
and helmfile build
as an optional standalone step (that does not participate in the rest of the pipeline) just to preview what will be rendered during helmfile apply
for troubleshooting purposes. A word of warning though: yesterday we realized that with this step all our secrets were also output in cleartext into our build logs. We were assuming that Codefresh was obfuscating those secrets automatically, but that feature is only available in the Enterprise tier. So we’ll need to figure out an alternative for troubleshooting helmfile rendering.
2020-07-23
Can I include multiple values.yaml files in the values:
section and they will be sequentially applied?
Yes!
Is the needs:
directive broken? It complains to me that “default/y” depends on nonexistent release “x” while I definitely have a release x in my helmfile…
I’ve got a MWE here.
Alright, it seems namespace is mandatory (i.e., default/foo
). I thought it was optional, since it’s in brackets in the documentation.
It seems it’s mandatory because I had defined namespace in the release. If I remove the namespace from the release, I don’t have to define namespace in the needs
either.
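For reference, a sketch of the working form implied by this thread (namespace name borrowed from the earlier list output; other fields trimmed):
releases:
  - name: loggingMaster
    namespace: clusterinfra
    chart: elastic/elasticsearch
  - name: misc-es
    namespace: clusterinfra
    chart: ./charts/misc-es
    needs:
      # with a namespace set on the releases, the dependency must be namespace-qualified
      - clusterinfra/loggingMaster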
Did anyone figure out a way to run helmfile template
only on specific charts/releases?
just specify labels, helmfile -e prod -l app=my-app template
. It will fetch all the charts, but output matched template[s] only.
@voron thanks, i’ve tried using labels before, but since it fetches all charts I thought it wasn’t working. Do you know of a way to skip fetching all charts?
nope, IDK how to skip fetch
pls file a bug/PR to fix it. --skip-deps
doesn’t affect fetching. I have helmfile with ~60 releases of the same chart and it fetches the same chart for every release
@voron exact same issue here
@mumoshu are you aware of this issue?
I’m not sure why charts need to be fetched at all instead of being templated directly
When running helmfile –environment –selector chart=<MY_CHART> template, all of the charts are fetched beforehand and not just the charts in the selector. Running the same command but with l…
Yeah I’m aware of it. Could anyone submit a PR for that?
The context is https://github.com/roboll/helmfile/issues/338. We opted to fetch all charts to fix 338 without implementing logic to correlate releases
entries to repositories
entries by chart names and other gotchas to update only a subset of repos in helm
With the below helmfile a helmfile repos will result in err: no releases found that matches specified selector() and environment(default), in any helmfile repositories: - name: stable url: https://…
@mumoshu I don’t really understand the issue in 338, also why would the fix influence helmfile template
and not helmfile lint
? Can you point me to the lines within the code which makes these two commands behave differently in the fetch phase?
it looks like both issues were fixed by @mumoshu in v0.125.3
- template
honors labels, and template
doesn’t fetch charts explicitly, with helm3 only, though.
2020-07-26
2020-07-27
2020-07-29
could not find a good answer: can I override the name of the chart when syncing from helmfile.yaml?
something like helmfile -f charts/helmfile.yaml --name=notTheOneInHelmfile.yaml
?
reading Cloud Posse docs, I did this in the helmfile:
- name: {{ env "CUSTOM_NAME" | default "theogname" }}
CUSTOM_NAME=notTheOneInHelmfile helmfile -f helmfile.yaml sync
Ya, I think an environment variable will be your best bet.
That’s assuming the requirement is to pass it on the command line vs using an environment
file - in which case you can reference settings from that.
What’s your business use-case that you want to solve?
there was a chart using {{ .Release.Name }} for some template fields that had to be unique between different deployments - so it felt like the easiest thing was to change it at the source, rather than having to chase all the places it showed up
Ya, makes sense
2020-07-30
Is it possible to have helmfile
label namespaces for me, or can it only put labels on Helm releases?
helmfile can use hooks that call kubectl
and label namespaces. So I would use this if you’re already using helmfile in a big way, but if you want to just use helmfile for this use-case, then it’s a bit of a roundabout way of doing it.
additionally, you can have helmfile create the namespaces using the raw
chart with the appropriate labels, but that assumes you want to use helmfile for that.
How would one go about creating hierarchical environments
in helmfile? Lots of use of yaml anchors?
maybe leverage go template’s {{ define }}
and {{ template }}
, or use {{ readFile "my.env.tmpl" | tpl (dict "some" "param") }}
to dynamically generate your env?
2020-07-31
Has anyone else had issues with repositories not being updated? helmfile seems to be completely ignoring the repositories block in my yaml?
which version of helmfile are you using?
it has been skipping repositories update when there were no “releases” defined in the same helmfile.yaml
and i fixed it in a very recent version of helmfile
I’m using the latest, 0.125, but regardless of what I do or if i use subfiles I can’t get any of it working. I was originally using just one big helmfile with 1 environment listed.
so are you basically saying that helmfile ignores repositories in sub-helmfiles only?
Hey All, I’m having some really weird issues with context deadline exceeded
from helmfile reaching my clusters. The issue does not happen if i manually query the cluster or use helm directly. Has anyone else ever seen this?
the helm timeout on the CLI that helmfile generates for us is set to 600s, but it’s timing out WAY before that.
STDERR:
W0731 11:48:08.996622 14631 transport.go:260] Unable to cancel request for *exec.roundTripper
Error: Kubernetes cluster unreachable: Get <https://x.x.x.x:6443/version?timeout=32s>: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helm.go:84: [debug] Get <https://138.1.19.169:6443/version?timeout=32s>: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Kubernetes cluster unreachable
but again, this only happens when executing via helmfile
@jason800 This is interesting. I thought Helmfile doesn’t do many fancy things that could result in such problems
Do you see any suspicious flags passed to helm if you run helmfile with --debug
, like helmfile --debug lint
?
If you have huge amounts of releases and helmfiles managed, could it be due to a resource shortage on the machine helmfile is running on?
if so, i’m wondering if setting concurrency to a lower value (e.g. 1) makes the timing-out issue disappear, at least
from which point did you start to see this issue? since e.g. upgrading helmfile/helm to a specific version, changing the kubeconfig path, kubecontext name, etc
does the issue always occur, or only sometimes?
Hey! You were exactly right! We ended up figuring out we were DoSing ourselves. Set the concurrency to 5 and everything worked great
Woot!
Another thing is we just went back and did a huge refactor across our repo to make massive use of templating, since we deploy to many clusters across many regions and such. The performance of linting has degraded to the point of not being able to run it in any meaningful way
helmfile lint
is running helm lint
sequentially today.
do you think enhancing helmfile to allow configuring concurrency would help?
honestly i haven’t done any “scale” testing for helmfile as i don’t have such a large dataset for testing
but i can definitely try to optimize helmfile in any way if you could help
So first, thank you so much for the prompt response. The majority of the delay ended up being on our end with exec and hooks
We were able to optimize and significantly reduce lint and template time.
Great!
However one thing that did come out of the refactor was that when templating, kubecontext is not being taken into account when considering a release’s uniqueness (in terms of whether it errors for duplicate release names)
Wrt the recent patch that enabled kubecontext in that regard
ah
yeah i think we never included kubecontext into the consideration when calculating the unique id of a helmfile release
so you had to tweak release names a bit depending on kubecontext?
basically it’s only tillerNamespace, namespace, and name of the release today
did the error you’d seen look like duplicate release "RELEASE" found in "NS": there were NUM releases named "RELEASE" matching specified selector
?
Yes, we are iterating over cloud regions and then multiple k8s clusters within those regions. We were using a ton of environment/region-specific files with hard-coded values to do this, but refactored into heavy use of go templates
So we use the region and cluster info in the release name to make it unique
Thanks! That makes sense. You seem to have already managed it, but for future use I’ve made this fix:
Helmfile has been incorrectly showing releases with the same name but in different kubeContexts as duplicates. This fixes that.
I’ll merge it if it makes sense to you too
basically it’s only tillerNamespace, namespace, and name of the release today
. Wasn’t this changed in a recent patch? To include kubecontext?
Or you’re just saying it’s not working
AFAIK, no. Did you see any specific issue/pr related to it?
It would appear that I confused https://github.com/roboll/helmfile/pull/1312
This PR allows per-release kubeContext to override kubeContext from helmDefaults by moving helmDefaults's context to the first helm arg and lowering it's priority ( last –kube-context wins…
Gotcha. This was to fix the issue about wrong kubeContext being used in certain cases
Yup I see that now. Thank you. I reviewed the pr you linked and approved. Looks great to me
Thanks for reviewing! Merged.
@jason800 Have you ever encountered this while using kubeContext w/ recent versions of Helmfile?
I upgraded from Helmfile v0.118.5 to v0.119.1, and helmfile is no longer picking up on my kubeContext. It says that is doesn't exist but it works fine in 0.11.8.5 and works fine with Helm, and …
No. We use kubecontext on every release and we template it. Never had any issues
Got it. Thanks for your support!
does anyone have an objection to renaming --state-values-file
to --prepend-values
and adding --append-values
?
https://github.com/roboll/helmfile/issues/1348
When trying to enable/disable releases via the environment clause in helmfile I came about the issue that overriding those values via CLI seems to be impossible. The apparent merge behaviour seems …