#helmfile (2019-10)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2019-10-02
Hi everyone! We’re using Azure Container Registries as our helm repo, as per https://docs.microsoft.com/en-us/azure/container-registry/container-registry-helm-repos
Learn how to use a Helm repository with Azure Container Registry to store charts for your applications
The integration with helm works fine by adding credentials to ~/.helm/repository/repositories.yaml
but it seems helmfile ignores this file so we have to add credentials to helmfile.yaml
thus:
# Advanced configuration: You can setup basic or tls auth
- name: roboll
url: http://roboll.io/charts
certFile: optional_client_cert
keyFile: optional_client_key
username: optional_username
password: optional_password
Is there any way to get helmfile to use ~/.helm/repository/repositories.yaml
or any plans to do so?
Adding --skip-deps seems to resolve this
@Martin Devlin Hey!
You seem to have found the answer, but yeah, I'd recommend --skip-deps or just not managing the repo from helmfile (omit it from repositories:).
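For reference, a minimal sketch of that workaround (the repo and chart names are illustrative, not from the thread):
# helmfile.yaml — leave the ACR entry out of repositories: and rely on the
# credentials that `az acr helm repo add` wrote to ~/.helm/repository/repositories.yaml
repositories: []

releases:
  - name: myapp
    chart: myregistry/myapp   # repo name as registered with the helm client
    version: 1.2.3

# skip helmfile's own repo/dependency handling so it doesn't try to re-add the repo:
#   helmfile apply --skip-deps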
Now add your Azure Container Registry Helm chart repository to your Helm client using the az acr helm repo add command. This command gets an authentication token for your Azure container registry that is used by the Helm client
After reading this I think there’s no static “password” that can be set in helmfile.yaml’s repositories[].password
in this scenario.
This command gets an authentication token for your Azure container registry
sounds like it is generating a temporary, short life token that should be regenerated each time you run helm/helmfile
@Martin Devlin If you've got some time, I'd greatly appreciate it if you could submit a PR to add some guidance on using ACR as a helm repo in the context of helmfile (perhaps a few lines in README.md would suffice).
@mumoshu Thanks for the reply. “sounds like it is generating a temporary, short life token that should be regenerated each time you run helm/helmfile”. The az acr helm repo add
command adds a JWT token to ~/.helm/repository/repositories.yaml
This works fine with helm
commands, but not helmfile
, which is a little frustrating as it’s a neat way to avoid managing secrets. As I say, --skip-deps
makes the problem go away so we’re using that for now.
I can make a note of that in README.md as you requested
Anyone know how I can configure helmfile to install the latest pre-release versions of my charts? Tried adding --devel
to args and omitting version from the releases, which does what I want when I helmfile diff
, but when I helmfile apply
it will try to install the latest release version.
theoretically, your approach should work.
might you try running helmfile --log-level debug apply
just to identify the root cause?
Thanks for the tip.
Seems like the helmfile apply
diff step ignores the args
in my helmfile:
exec: helm diff upgrade --reset-values --allow-unreleased account-service xxx/account-service --tiller-namespace acc --kube-context acc --values /var/folders/9h/1pr987354xj3n7y3tbcy5f2d606v15/T/values267183981 --detailed-exitcode
Whereas helmfile diff
does not:
exec: helm diff upgrade --reset-values --allow-unreleased account-service xxx/account-service --tiller-namespace acc --kube-context acc --values /var/folders/9h/1pr987354xj3n7y3tbcy5f2d606v15/T/values801725413 --devel
Using helmfile sync
instead of helmfile apply
ended up solving my problem.
seems to be a bug. Worth creating an issue on github, imo.
@mbilliet @starets Hey! Have you tried setting devel: true
in your releases in helmfile.yaml like this?
releases:
- name: myapp
devel: true
--args
is very hard to reason about and use - i’d recommend declaring everything in your helmfile.yaml
anyone know the exact syntax of the --state-values-set
flag? I can’t seem to get it to work
haven’t used it, but judging by https://github.com/roboll/helmfile/blob/master/main.go#L64 and https://github.com/roboll/helmfile/blob/master/main.go#L479-L489
it’s exactly what’s stated in Usage:
(can specify multiple or separate values with commas: key1=val1,key2=val2)
well - but the “key=value” doesn’t seem to work
I have an ‘image.tag’ in my helm chart, and when I define “image.tag=foo” - it still outputs the wrong image tag in my deployment if I try this out with helmfile template
I tried prefixing with the chartname, release name, Values, bare, …
also, if I try with --log-level debug
I don't see the value I'm trying to override anywhere
hmm ok, these only seem to be available in the helmfile files themselves, if I want to propagate them to the charts I have to add them to the values loaded by helmfile…
there's also not much documentation for this
we use helmfile to deploy all our envs, but we have envs per dev team (about 12), all pretty much with the same config except for some small uniform changes that I'd expect to be able to influence with that flag, but I can't seem to get it to work
--state-values-set
should be used like --state-values-set mykey=myvalue
so that it becomes available in your helmfile.yaml like:
releases:
- name: myapp
values:
- foo: {{ .Values.mykey }}
Or
helmfiles:
- path: path/to/subhelmfile.yaml
values:
- foo: {{ .Values.mykey }}
The important point is that any state values are not implicitly propagated to any releases or any sub-helmfiles.
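Putting the two pieces together, a minimal sketch of the command side (the key and value are illustrative):
# state values only reach helmfile.yaml templates; the values: entries above are
# what forward them on to releases or sub-helmfiles
helmfile --state-values-set mykey=myvalue apply

# multiple key/value pairs can be comma-separated
helmfile --state-values-set key1=val1,key2=val2 diff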
yeah I figured that out by now
2019-10-03
Any tips & tricks to share regarding running Helmfile from Atlantis? I’m looking into cross-account auth with EKS and it gets dicey.
@Vlad Ionescu (he/him) Hey! I’ve tried calling helmfile from atlantis with my own “custom stages” feature a year ago
This is currently an alpha-level work of what the subject states. I have not tried to think throughout all the edge-cases, but it should work in normal cases. I want to run arbitrary helmfile comma…
Does atlantis officially have such feature today?
2019-10-09
2019-10-10
I’m trying to manage our environments via helmfile and I’m running into issues. I’m trying to do common helmfiles (with release definitions) in helmfiles/
and have config in config/<env>/<proj>/*.values.yaml
. There's a base.yaml
which defines repos, helm defaults, and environments (prod + stage for now). My helmfile.yaml
includes the base via bases:
and then has a helmfiles:
directive for helmfiles/*.helmfile.yaml
. The individual app helmfiles also include the base. Any values I set on environments aren’t getting through to the app helmfiles. I’ve tried overriding them in the helmfiles:
section but that just ends up failing to render because it can’t find the environment value there either. If I remove the bases:
from helmfile.yaml
, though, and just paste the contents inline, it works. there seems to be some weird interaction with bases
that i’m not understanding
2019-10-11
I am trying out helmfile. In the base helmfile.yaml I have multiple releases, and I am working on only one, say cert-manager. While applying the helmfile, all the applications get deployed.
helmfiles:
- "releases/prometheus-operator.yaml"
- "releases/cluster-autoscaler.yaml"
- "releases/cert-manager.yaml"
I am using the below command
helmfile --file helmfile-preprod.yaml -e preprod apply
Is there any way to pass an argument which only deploys cert-manager.yaml? Kindly suggest.
Using
--selector value, -l value
we can run a particular release
@Gourav or you can use the --selector
argument
2019-10-13
@Erik Osterman (Cloud Posse) Thanks Erik
2019-10-14
Need some suggestions: in the section below there's an env variable KIAM_HOST_CERT_PATH; if I am not passing any value to it, it will pick up the default one.
extraHostPathMounts:
- name: "ssl-certs"
mountPath: "/etc/ssl/certs"
hostPath: '{{ env "KIAM_HOST_CERT_PATH" | default "/etc/ssl/certs" }}'
readOnly: true
I wanted to understand where I need to define this variable KIAM_HOST_CERT_PATH and pass its value, as I do not want to change the default values but want to use a passed value. Is there any example of how to achieve this?
that’s an environment variable
you can define it before you run helmfile or on the same line
KIAM_HOST_CERT_PATH="/path/to/certs" helmfile
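Equivalently (the path is illustrative), the variable can be exported beforehand:
export KIAM_HOST_CERT_PATH="/path/to/certs"
helmfile apply
# if KIAM_HOST_CERT_PATH is unset, the template falls back to the default /etc/ssl/certs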
@zeid.derhally Thanks.. i will try.
Hi all, I’m attempting to override release values so that devs can quickly change image.tag on their local env for testing. The current pattern is like so;
bases:
- ../../env/helmfile-environments.yaml
releases:
- name: myrel
namespace: dataproduct
chart: ../../../../charts/data-product-service-chart
force: true
atomic: true
values:
- image:
repository: myrepo.com/rel/myrel
tag: master-1.1.123
- ../../env/{{ .Environment.Name }}.yaml
and we want to run helmfile like this;
helmfile -e minikube -f helmfile-myrel.yaml --state-values-set image.tag=dev apply
However, the image.tag
value remains as master-1.1.123
Do you want to make it possible to set a default image.tag per environment, too?
Assuming so, I’d guess it should be
releases:
- name: myrel-{{ env "BRANCH_NAME" | default "master" }}
namespace: dataproduct-{{ env "BRANCH_NAME" | default "master" }}
chart: ../../../../charts/data-product-service-chart
force: true
atomic: true
values:
- image:
repository: myrepo.com/rel/myrel
tag: major-1.1.123
- ../../env/{{ .Environment.Name }}.yaml
- {{ tag := get "image.tag" "" .Values }}{{ if $tag }}{"image": {"tag": {{ $tag | quote }} } } {{ end }}
The important point here is that state values are not automatically propagated to releases as state values and chart values are completely different things
We don’t really want the image.tag
to vary between environments so setting a default for each environment introduces redundant environment files (essentially boilerplate).
Regardless, I tried adding image.tag to env/minikube.yaml and adding your suggested code above but I got the following error;
template: stringTemplate:17: function "tag" not defined
ah, perhaps it should be {{ $tag := get
not {{ tag := get
and as you aren’t going to define environmental defaults, values can just be
values:
- image:
repository: myrepo.com/rel/myrel
tag: {{ .Values | get "image.tag" "major-1.1.123" }}
- ../../env/{{ .Environment.Name }}.yaml
So should the value of image.tag
there come from --state-values-set image.tag=mytag
?
Doesn’t seem to work for me
Yes
Hmm, what would you see if you run helmfile build
with log-level=debug
?
helmfile --log-level=debug -f helmfile.yaml --state-values-set image.tag=foo build
$ cat helmfile.yaml
releases:
- name: myapp
chart: stable/nginx
values:
- image:
tag: {{ .Values | get "image.tag" "default_tag" }}
when no --state-values-set
is provided, it does use default_tag
$ helmfile --log-level=debug -f helmfile.yaml build
processing file "helmfile.yaml" in directory "."
first-pass rendering starting for "helmfile.yaml.part.0": inherited=&{default map[] map[]}, overrode=<nil>
first-pass uses: &{default map[] map[]}
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
vals:
map[]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
0: releases:
1: - name: myapp
2: chart: stable/nginx
3: values:
4: - image:
5: tag: default_tag
6:
merged environment: &{default map[] map[]}
---
# Source: helmfile.yaml
filepath: helmfile.yaml
releases:
- chart: stable/nginx
name: myapp
values:
- image:
tag: default_tag
templates: {}
That is pretty much exactly what I’m doing but it always uses the default
and with --state-values-set
$ helmfile --log-level=debug -f helmfile.yaml --state-values-set image.tag=foo build
processing file "helmfile.yaml" in directory "."
first-pass rendering starting for "helmfile.yaml.part.0": inherited=<nil>, overrode=&{default map[image:map[tag:foo]] map[]}
first-pass uses: &{default map[image:map[tag:foo]] map[]}
first-pass produced: &{default map[image:map[tag:foo]] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[image:map[tag:foo]] map[]}
vals:
map[image:map[tag:foo]]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
0: releases:
1: - name: myapp
2: chart: stable/nginx
3: values:
4: - image:
5: tag: foo
6:
merged environment: &{default map[image:map[tag:foo]] map[]}
---
# Source: helmfile.yaml
filepath: helmfile.yaml
releases:
- chart: stable/nginx
name: myapp
values:
- image:
tag: foo
templates: {}
@Ben would you mind sharing your logs with --log-level=debug
? (but please beware not to leak creds/secrets in it!)
Seems to be related to environments. This is my exact helmfile
bases:
- ../../env/helmfile-environments.yaml
releases:
- name: rdc-{{ env "BRANCH_NAME" | default "master" }}
namespace: dataproduct-{{ env "BRANCH_NAME" | default "master" }}
chart: ../../../../charts/data-product-service-chart
force: true
atomic: true
values:
- image:
repository: registry.encompasshost.com/encompass/cdp/rdc-cdp
tag: {{ .Values | get "image.tag" "major-humpback-1.1.5888" }}
- service:
internalPort: 8081
- ../../env/{{ .Environment.Name }}.yaml
if I comment out the bases
section and the last line, then image.tag is overridden as expected
Does the inclusion of environments
section clear the state passed from CLI maybe?
Sounds like so - and it might be a bug! I’ll take a deeper look soon
I should also say that image.tag
is only defined in the Chart’s values file and not in the referenced env/minikube.yaml
file
2019-10-15
Hi.. I am working on a kiam helmfile where I want to move the rbac section from the file below into a file named "values/kiam.yaml.gotmpl". So I have included that file under the values: section as shown below, but I am getting the message below. Anyone got some tips for me?
helmfile --file helmfile-dev-dev.yaml -e dev-dev -l chart=kiam diff
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read kiam.yaml.part.1: reading document at index 1: yaml: line 148: did not find expected '-' indicator
in ./helmfile-dt-ue2.yaml: in .helmfiles[0]: in releases/kiam.yaml: failed to read kiam.yaml: reading document at index 1: yaml: line 148: did not find expected '-' indicator
- name: "kiam"
namespace: "kube-system"
labels:
chart: "kiam"
repo: "stable"
component: "iam"
namespace: "kube-system"
vendor: "uswitch"
default: "true"
chart: "stable/kiam"
version: "2.5.2"
wait: true
recreatePods: false
installed: {{ env "KIAM_INSTALLED" | default "true" }}
hooks:
# This hook adds the annotation that allows pods in the kube-system namespace to assume any annotated role
- events: ["presync"]
command: "/bin/sh"
args: ["-c", "kubectl annotate --overwrite namespace kube-system 'iam.amazonaws.com/permitted=.*'"]
# This hook adds the annotation that instructs stakater/reloader to watch the DaemonSet's secrets and configmaps
# and reload the DaemonSet when they change.
- events: ["postsync"]
command: "/bin/sh"
args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet --selector=app=kiam reloader.stakater.com/auto=true"]
values:
- fullnameOverride: kiam
- values/kiam.yaml.gotmpl
#rbac:
### Optional: RBAC_ENABLED;
#create: {{ env "RBAC_ENABLED" | default "false" }}
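For reference, the commented-out block would presumably move into values/kiam.yaml.gotmpl roughly like this (a sketch; the yaml error above suggests the indentation in that file is what needs checking):
# values/kiam.yaml.gotmpl
rbac:
  ### Optional: RBAC_ENABLED
  create: {{ env "RBAC_ENABLED" | default "false" }}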
2019-10-16
Hi… I am preparing the helmfile for kiam. In the kiam specification there is a serviceAccount section; when I try to override the serviceAccountName for agent and server, it does not take effect. Below is the snippet where I am trying to override it:
serviceAccount:
agent:
create: true
name: dev-ops-kiam-agent
server:
create: true
name: dev-ops-kiam-server
While doing the helmfile diff
I am getting the serviceAccount manifests for agent and server, but the name is not coming out as dev-ops-kiam-agent and dev-ops-kiam-server; instead it comes out as kiam-agent and kiam-server.
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ labels:
+ app: kiam
+ chart: kiam-2.5.2
+ component: "agent"
+ heritage: Tiller
+ release: kiam
+ name: kiam-agent
Does someone have some pointers for me to resolve this issue with serviceAccount ?
@Gourav Hey! You're talking about how you'd use the stable/kiam chart, right?
https://github.com/helm/charts/blob/master/stable/kiam/templates/agent-serviceaccount.yaml#L12
it seems like you should use serviceAccounts
, not serviceAccount
try
serviceAccounts:
agent:
create: true
name: dev-ops-kiam-agent
server:
create: true
name: dev-ops-kiam-server
ok… I will try.. thank you @mumoshu
2019-10-17
I have a rather extensive helmfile setup: a few dozen helmfiles (all loaded from a central one) with multiple releases per helmfile. What would be the best way to set a specific set of values for the charts in every single release across a deploy? More specifically, we allow the hostAliases in deployments to be set in all our charts, but want this to be managed centrally… I now have to include a central file with these hostAliases in every single release, but I would like to add that from a single central place (preferably from our base helmfiles, which are already included in all our helmfiles).
so probably you want a central place to affect every aspect of every release in every (sub-)helmfile, right?
maybe it isn’t possible today -
if i read correctly, what you might need is
releases:
# render myapp-us-west-1
{{ include "the-central-place/release-template.tpl" (dict "Values" .Values "name" "myapp" "region" "us-west-1" "opts" (dict "key1" "val1")) }}
# render myapp-us-west-2
{{ include "the-central-place/release-template.tpl" (dict "Values" .Values "name" "myapp" "region" "us-west-2" "opts" (dict "key1" "val1")) }}
# ....
and in release-template.tpl
, you write a nice go template that generates a release
as you like
where include
includes the given tpl from the filesystem (as we can do in helm)
if this sounds good to you, would you mind opening an issue to add include
to helmfile?
hmm that might be a solution, not sure though
I tried with a template in our base role, but values arrays aren’t merged, so then I can’t set release-specific values anymore
still trying to understand. could you provide me a small example to reproduce what you’ve tried?
well - I have a
templates:
default: &default
values:
- /central/path/to/values.yaml
releases:
- name: release-v1
<<: *default
values:
- somevalue: "to override"
if I do this, the values from the template are ignored
if I could somehow merge them, I could declare the template in a central base that’s included everywhere
and add additional custom values per release
but that’s all due to ugly YAML trickery I’m afraid
not sure if helmfile could do that in some way
Hi, I have what seems like a silly question, didn't want to open an issue on GitHub. I'm using git for a remote helmfile, but there doesn't seem to be any way to have the git repo get updated with git pull other than going into the .helmfile directory and doing it myself?
@pjbecotte Hey! Assuming you're talking about remote helmfiles stored under .helmfile/cache/something
- yeah, you might need to manually run git-pull there if you've specified a changing git branch
helmfile doesn’t try to re-fetch already cached git branches and tags.
my expected usage for the remote helmfile feature was that you'd tag your git repo with semantic versions. That is, you change the git tag included in the remote helmfile url (go-getter-style url) from v1.0.0 to v2.0.0 when you need any update.
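A sketch of that versioned, go-getter-style reference (the repo, path and tag are illustrative):
helmfiles:
  # pinned to a tag; bump ref from v1.0.0 to v2.0.0 when you want to pick up changes
  - path: git::https://github.com/example-org/helmfiles.git@releases/myapp.yaml?ref=v1.0.0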
does it make sense?
Sure, that is a workflow that will work. Of course, it was a pain while I was iterating trying to get things working. Was just thinking I probably missed something.
@pjbecotte i hear you.
perhaps it would be nicer if helmfile git-pulled every cached remote helmfile repo by default?
and ability to optionally skip git-pulling cached remote helmfiles?
JFYI: Issue opened https://github.com/roboll/helmfile/issues/901
Extracted from https://sweetops.slack.com/archives/CE5NGCB9Q/p1571331457038700 Currently helmfile skips git-pulling any remote helmfiles that are already cached. This isn't actually a bug. The …
Oh, neat, thanks!
Anyone using helmfile for multiple clusters in different regions? Whats your approach if every cluster has a location specific variable?
@Naseem Hey! Replied to you in the k8s slack also but anyway
It depends, but I guess you should use sub-helmfile per region.
That is, there could be a production
environment in a “global” helmfile. The global helmfile would delegate per-region deployment to the respective sub-helmfiles
Thanks @mumoshu ! I will try this approach!
if you have releases that are almost identical across all regions, with maybe 1 value that's different per region, is this still the approach you would suggest?
Yep.
Try injecting state values for regions like:
helmfiles:
- path: regional.yaml
values:
- region: us-west-1
- path: regional.yaml
values:
- region: us-west-2
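On the other side, regional.yaml can then read the injected value as a state value (a sketch; the chart and key are illustrative):
# regional.yaml
releases:
  - name: myapp-{{ .Values.region }}
    chart: stable/nginx
    values:
      - region: {{ .Values.region }}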
Regarding globally distributed clusters again:
Currently in a non-globally-distributed setup, I know if a release should be installed based on which environment it is:
installed: {{ eq .Environment.Name "staging" }}
<— only installs in staging.
How does one achieve the flexibility of: “installed if env is staging AND region is us-west-1” OR “installed if env is staging AND cluster name is Bob” … either of these would be great
Can we somehow extract the cluster name from kube context or something?
re the gotmpl expression, it would look like {{ and (eq .Environment.Name "staging") (eq $clusterName "cluster1") }}
Can we somehow extract the cluster name from kube context or something?
I think you should think conversely. You’d provide the cluster name via values, and select appropriate kubeconfig based on that
you should split your kubeconfig per context beforehand with kubectl config view --minify
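A sketch combining the two suggestions (the cluster name, file names and chart are illustrative):
# split a kubeconfig out per context beforehand, e.g.:
#   kubectl config view --minify --flatten --context=bob > kubeconfig-bob.yaml
# then pass the cluster name in as a state value:
#   KUBECONFIG=kubeconfig-bob.yaml helmfile --state-values-set clusterName=bob -e staging apply

releases:
  - name: myapp
    chart: stable/nginx
    installed: {{ and (eq .Environment.Name "staging") (eq (.Values | get "clusterName" "") "bob") }}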
Thanks @mumoshu!
Just got an issue that has never happened before. After the latest helmfile apply
some of the releases became FAILED, and the corresponding workloads disappeared. In the logs I can see something like
exec: helm tiller run gitlab-managed-apps -- helm upgrade --install --reset-values frontend-e2e-alpha chartmuseum/frontend --version 0.9.0 --timeout 300 --force --namespace e2e-alpha --values /tmp/values674824445 --kube-context=gke_XXXX_europe-west1-b_XXXX: Creating tiller namespace (if missing): gitlab-managed-apps
UPGRADE FAILED
ROLLING BACK
Error: Failed to recreate resource: the server was unable to return a response in the time allotted, but may still be processing the request (post deployments.apps)
After that the deployment disappeared.
frontend-e2e-alpha 16 Thu Oct 17 05:23:17 2019 FAILED frontend-0.9.0 1.0 e2e-alpha
That’s weird. And some deployments now have pods from different revisions. Things got messed up.
hm… maybe helm-tiller timed out/failed in the middle of the installation process and left the cluster in a half-baked state?
I suspect it could potentially be a fundamental issue in helm-tiller then (helm3 would help)
I bumped the K8s version in my cluster and it started working ok. Probably it was a coincidence, but everything is ok so far.
I purged a bunch of releases and ran helmfile apply
again. Almost all succeeded, except one.
The same didn’t help for the other bunch of releases at all.
2019-10-18
hmm have another issue… I use
{{`{{.Release.Name}}`}}
in a template section in the values:
. If it’s in a filename, it renders properly, if it’s in a - varname: {{...}}
it doesn’t render this, and I end up with a literal {{ .Release.Name }}
in my templated helm chart output…
Ah maybe we’re missing the implementation for rendering release template expressions within the inlined values
thanks
2019-10-20
Are there containers built somewhere that have the helm3 binary and the latest work-in-progress helm-diff (including needed fixes)?
Not yet. but it would be great to have one!
(Currently using helmfile as ‘templating engine’ and seems not efficient at this point to invest in Helm2 and all the tiller shenanigans)
it’s ok if you use the tillerless plugin
(which is what we do)
Considering that as well. But afaik there is no clear upgrade path yet to migrate helm2 to helm3 deployments. So you’d have to remove and reinstall which is not ideal. So that’s my main reason for looking directly at helm3.
Do you use a single namespace for all helm2 release data (kube-system and apps)?
That sounds like a good strategy!
Anyways, we will be able to use https://github.com/helm/helm-2to3 for upgrading without reinstalling
This is a Helm v3 plugin which migrates and cleans up Helm v2 configuration and releases in-place to Helm v3 - helm/helm-2to3
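For reference, the plugin's basic flow looks roughly like this (hedged; check the plugin docs for the exact commands in your version, and RELEASE_NAME is a placeholder):
helm3 plugin install https://github.com/helm/helm-2to3
helm3 2to3 move config             # migrate helm v2 client config, repos and plugins
helm3 2to3 convert RELEASE_NAME    # convert a v2 release's stored data to v3 in place
helm3 2to3 cleanup                 # remove leftover v2 data (and tiller) once done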
Great to see helm3 images are built now! Work (other work) got in the way, so hadn’t found time for that (and need to familiarize myself a bit with the ins/outs of Helmfile ci setup).
Ran into some CRD-related issues on a first attempt using Helm3 on prometheus-operator. Then again: Not the simplest chart. And stuff moves so fast, might have been fixed already. (Also: Not a helmfile issue).
2019-10-21
Actually, I’ve got kinda related question. Do we have any helm2 -> helm3 transition best practices for those using helmfile? Or they are pretty much the same as for Helm? Is helmfile ready to be used with Helm 3 right now? Is there any missing things? Even though I’ve been using helmfile for quite some time, I’d never tried it with Helm 3.
2019-10-22
Hello! Just started looking at helmfile and I think I like it. Our current setup is that we have a Jenkins pipeline for each micro service that creates a docker image and a helm chart which gets pushed/published to our registries. The pipelines also deploys to our environments based on which branch it is, but I want to move all that to a helmfile in a separate git repo, which can be updated using PRs for values changes, but I want the built helm chart deployed automatically and thus want the Jenkins pipeline to do git commits to the repo where the helmfile resides… Anyone got any good way of doing this?
Are there any up to date helmfile examples?
Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
Taking https://github.com/costimuraru/helmfile-examples/tree/master/templatization as an example, how can i move cluster-autoscaler.yaml
into a sub-directory called releases
? I think i keep running into path issues.
@Tiago Meireles hey! you’d also need to update this in the yaml:
bases:
- envs/environments.yaml
to
bases:
- ../envs/environments.yaml
every refs from a helmfile.yaml should be relative to itself
That is what I expected. It didn’t work when i tried it.
diff --git a/templatization/helmfile.yaml b/templatization/helmfile.yaml
index 5fdf61a..a205404 100644
--- a/templatization/helmfile.yaml
+++ b/templatization/helmfile.yaml
@@ -7,4 +7,4 @@ bases:
helmfiles:
# - "nginx-ingress.yaml"
# - "velero/velero.yaml"
- - "cluster-autoscaler.yaml"
+ - "releases/cluster-autoscaler.yaml"
diff --git a/templatization/cluster-autoscaler.yaml b/templatization/releases/cluster-autoscaler.yaml
similarity index 96%
rename from templatization/cluster-autoscaler.yaml
rename to templatization/releases/cluster-autoscaler.yaml
index 890db9e..d0f17a7 100644
--- a/templatization/cluster-autoscaler.yaml
+++ b/templatization/releases/cluster-autoscaler.yaml
@@ -1,6 +1,6 @@
---
bases:
- - envs/environments.yaml
+ - ../envs/environments.yaml
---
releases:
- name: "cluster-autoscaler"
@@ -32,4 +32,4 @@ releases:
value: {{ .Environment.Values.helm.autoscaler.azure.clientId }}
- name: azureClientSecret
value: {{ .Environment.Values.helm.autoscaler.azure.clientSecret }}
-{{ end }}
\ No newline at end of file
+{{ end }}
✗ helmfile -e aws template
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read ../envs/environments.yaml.part.0: environment values file matching "envs/aws-env.yaml" does not exist in "."
in ./helmfile.yaml: in .helmfiles[0]: in releases/cluster-autoscaler.yaml: failed to read ../envs/environments.yaml: environment values file matching "envs/aws-env.yaml" does not exist in "."
thx! seems like you also need to fix environments.yaml.
2019-10-23
does anyone know why i’m getting this warning on a specific helmfile?
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced
i have the same structure for the rest of the helmfiles and they do not return this warning
hey! could you share your configs (or maybe smaller versions of them for reproduction)?
all I can say from the sole example is that your config is missing the default
anchor; it's undefined in your specific case.
@mumoshu, sorry for the late response: sub-helmfile:
---
environments:
{{ .Environment.Name }}:
values:
- ../envs/{{ .Environment.Name }}/defaults.yaml
- ../envs/{{ .Environment.Name }}/charts.yaml
---
{{ readFile "../templates.yaml" }}
releases:
- name: x-creds-{{ .Environment.Name }}
chart: x-stg/docker-registry-creds-chart
version: {{ .Environment.Values.charts | getOrNil "x.chartVersion" | default "0.0.1" }}
installed: {{ .Environment.Values.charts | getOrNil "x.enabled" | default false }}
<<: *default
values:
- imageCredentials:
username: {{ requiredEnv "X_USER" }}
password: {{ requiredEnv "X_PASS" }}
- fullnameOverride: x-creds-{{ .Environment.Name }}
templates.yaml:
---
bases:
- ../repos.yaml
- ../helmdefaults.yaml
---
templates:
default: &default
namespace: "{{ .Environment.Values.namespace }}"
wait: false
missingFileHandler: Error
thx! ah, so it won't work at all.
*default
needs &default
defined in the same yaml document. And neither ---
nor bases
results in concatenating files as text; they all read and render files independently.
maybe this works?
{{ readFile "../repos.yaml" }}
{{ readFile "../helmdefaults.yaml" }}
templates:
default: &default
namespace: "{{ .Environment.Values.namespace }}"
wait: false
missingFileHandler: Error
@mumoshu actually my original config is working, it just prints this error before going forward
ah gotcha! just curious, but you see that warning only with --log-level=debug
?
nope, both with and w/o debugging
interesting.
ah okay you already have this in your original helmfile
{{ readFile "../templates.yaml" }}
maybe helmfile should turn off that warning by default
but anyway… it's there due to how helmfile's feature called double rendering
works
when changing my templates yaml to :
{{ readFile "../repos.yaml" }}
{{ readFile "../helmdefaults.yaml" }}
instead of using bases
im getting this:
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read x-creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced
in ./x-creds.yaml: failed to read x-creds.yaml: reading document at index 1: yaml: unmarshal errors:
line 2: cannot unmarshal !!map into string
there’s a chicken-and-egg problem while loading a helmfile.yaml. that is, you need environment values loaded before rendering any go template in hlemfile.yaml. but to load env values it must be a plain yaml without go template.
you mean this part:
---
environments:
{{ .Environment.Name }}:
values:
- ../envs/{{ .Environment.Name }}/defaults.yaml
- ../envs/{{ .Environment.Name }}/charts.yaml
in subhelmfile?
to work around that, helmfile renders your sub-helmfile twice
partially yes. double rendering occurs on the whole file
that is, helmfile firstly renders
{{ readFile "../templates.yaml" }}
releases:
- name: x-creds-{{ .Environment.Name }}
chart: x-stg/docker-registry-creds-chart
version: {{ .Environment.Values.charts | getOrNil "x.chartVersion" | default "0.0.1" }}
installed: {{ .Environment.Values.charts | getOrNil "x.enabled" | default false }}
<<: *default
values:
- imageCredentials:
username: {{ requiredEnv "X_USER" }}
password: {{ requiredEnv "X_PASS" }}
- fullnameOverride: x-creds-{{ .Environment.Name }}
with readFile
replaced with a noop func, and with an empty values set
which results in
releases:
- name: x-creds-default
chart: x-stg/docker-registry-creds-chart
version: "0.0.1"
installed: false
<<: *default
values:
- imageCredentials:
username: VALUE_OF_X_USER
password: VALUE_OF_X_PASS
- fullnameOverride: x-creds-default
which indeed misses &default
and results in the error on *default
helmfile uses this to load env values used to render your helmfile.yaml. this time readFile
is not a noop func and as your helmfile.yaml is correct, it works…
helmfile has no way to know whether your helmfile.yaml contains environments or not before rendering it
that's why this double rendering
thing always happens regardless of whether there's an environments
section defined in your helmfile.yaml.
I get it, but with a simple change in the release I don't see this warning/error; that's what surprises me
using the same templates.yaml but changing the release to this:
---
environments:
{{ .Environment.Name }}:
values:
- ../envs/{{ .Environment.Name }}/defaults.yaml
- ../envs/{{ .Environment.Name }}/charts.yaml
---
{{ readFile "../templates.yaml" }}
releases:
- name: my-app
chart: x-stg/my-app-v2-chart
version: {{ .Values.charts.myApp.chartVersion }}
installed: {{ .Environment.Values.charts | getOrNil "myApp.enabled" | default false }}
<<: *default
values:
- ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
- ../envs/common/ms-affinity-rule.yaml
- fullnameOverride: my-app-{{ .Environment.Name }}
the only noticeable change here is the version
wow, really?!
yes
i can share privately debug logs if you wish
what do you see as the “first rendering result” when you add --log-level=debug
like helmfile --log-level=debug build
?
im on 0.87 btw
helmfile --log-level debug -e qa -f myapp.yaml diff [67e4cfa]
processing file "myapp.yaml" in directory "."
first-pass rendering starting for "myapp.yaml.part.0": inherited=&{qa map[] map[]}, overrode=<nil>
first-pass uses: &{qa map[] map[]}
sorry it takes time, some sensitive names that i need to remove
if you wish i can share a debug log after i change the version to use getOrNil
and then i get the deduce error
yes, please!
at a glance I see the expected thing here:
first-pass rendering input of "my-app.yaml.part.1":
0: {{ readFile "../templates.yaml" }}
1:
2: releases:
3: - name: myproject-my-app
4: chart: bams-stg/myproject-my-app-v2-chart
5: version: {{ .Values.charts.myprojectLatiTagger.chartVersion }}
6: installed: {{ .Environment.Values.charts | getOrNil "myprojectLatiTagger.enabled" | default false }}
7: <<: *default
8: values:
9: - ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
10: - ../envs/common/ms-affinity-rule.yaml
11: - fullnameOverride: myproject-my-app-{{ .Environment.Name }}
I thought this would emit the warning (which it didn't)
I'm wondering if it's because in one case I'm using .Values.charts.somevalue
and in the other case:
.Environment.Values.charts.somevalue
I read the log but am not yet sure what's going on here. There are indeed a few switches that relate to whether you use .Environment.Values or .Values. I'll take a deeper look in the coming days!
Thanks for your cooperation
Thank you for the support!
@mumoshu hi, did you have a chance to look at this? Should I try to use env bases to avoid this warning? I still can't figure out what causes this behavior
@yuri Yes I have some progress. The main problem was that it was missing some error log that occurred in the first-pass render.
Improving it, I get this:
first-pass rendering input of "helmfile.3.yaml.part.0":
0: releases:
1: - name: myproject-my-app
2: chart: bams-stg/myproject-my-app-v2-chart
3: version: {{ .Values.charts.myprojectLatiTagger.chartVersion }}
4: installed: {{ .Environment.Values.charts | getOrNil "myprojectLatiTagger.enabled" | default false }}
5: <<: *default
6: values:
7: - ../envs/{{ .Environment.Name }}/apps/{{ `{{ .Release.Name }}` }}/values.yaml
8: - ../envs/common/ms-affinity-rule.yaml
9: - fullnameOverride: myproject-my-app-{{ .Environment.Name }}
10:
template syntax error: template: stringTemplate:4:21: executing "stringTemplate" at <.Values.charts.myprojectLatiTagger.chartVersion>: nil pointer evaluating interface {}.myprojectLatiTagger
first-pass rendering output of "helmfile.3.yaml.part.0":
0: releases:
1: - name: myproject-my-app
2: chart: bams-stg/myproject-my-app-v2-chart
3: version:
pls see template syntax error: template: stringTemplate:4:21: executing "stringTemplate" at <.Values.charts.myprojectLatiTagger.chartVersion>: nil pointer evaluating interface
hmmm
this means that the first-pass render “stops” at any nil pointer access (it has no way to tolerate it…), which results in the incomplete yaml not containing anything after the installed: ...
line
that’s why it doesn’t emit the reading document at index 1: yaml: unknown anchor 'default' referenced
warning
anyway, regardless of whether the warning is emitted or not, the second pass should produce the same result
ah ok so if i understand since i have 2 “values” of getOrNil
it throws this warning
so probably you don't need to worry about the warning at all? (I agree it's a red herring though)
yes the sync works as expected, it just drives me crazy in the ci process to see some extra messages that i dont wish to see
yeah. maybe we should enhance the debug logs for the first-pass render?
maybe my “pattern” of templating this is incorrect. The idea was to hold some yaml file with
chartName:
installed: true/false
version: x.y.z
generally speaking any error in the first-pass is tolerable
hmm maybe. your idea makes sense
what works today would be to use getOrNil
whenever you dig Values or Environment.Values
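i.e. something like this, so a missing key falls back to the default instead of aborting the first pass (the key names are taken from the example above; a sketch, not a drop-in fix):
version: {{ .Values | getOrNil "charts.myprojectLatiTagger.chartVersion" | default "0.0.1" }}
installed: {{ .Values | getOrNil "charts.myprojectLatiTagger.enabled" | default false }}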
we have the same application that we want to install with different versions… for example qa/ppe/…
hmm but i do use getOrNil
ah sry i’m a bit confused
from helmfile’s perspective, emitting reading document at index 1: yaml: unknown anchor 'default' referenced
is rather the correct behaviour
ok so it's a matter of log verbosity and maybe it should only appear in debug?
so avoiding the warning by exploiting the fact that the first-pass render “stops” at the first template error seems wrong..
yes
i think so
ok, I just want to make sure I don't break any logic/patterns in helmfile that can hurt me later
ah and back to your original problem
you should avoid using anchors in your pattern
if
hmm so no templates with our current release definition?
if you see errors like this one https://sweetops.slack.com/archives/CE5NGCB9Q/p1571818484076100 anywhere other than the first-render
does anyone know why i’m getting this warning on a specific helmfile?
could not deduce `environment:` block, configuring only .Environment.Name. error: failed to read creds.yaml.part.1: reading document at index 1: yaml: unknown anchor 'default' referenced
i have the same structure for the rest of the helmfiles and they do not return this warning
getting reading document at index 1: yaml: unknown anchor 'default' referenced
in the first-pass is ok
ah ok got you
for now it seems like only the first render
ok great!
then there’s nothing wrong on your side
thank u again for the support!
the only remaining todo would be that I'd prefix warnings from the first-pass render nicely so that they won't confuse you anymore
thanks!
@Erik Osterman (Cloud Posse) Is there a helmfile for Open Policy Agent? I have checked in helmfiles/releases and there is none for OPA.
Not yet
Haven’t used it
Even if a helmfile is not there… are we allowed to create our own helmfile for charts that exist in stable?
absolutely. just write a helmfile like this:
releases:
- name: opa
chart: stable/opa
values:
- values.yaml
assuming you use https://github.com/helm/charts/tree/master/stable/opa
@mumoshu Thank you
2019-10-24
Got a question. Anyone have thoughts on a workflow for modifying existing helm charts without forking them? We have so many forks just to do silly stuff like add tolerations or ssl root certs.
This is not really an answer to what you are asking, but we have the same problem all the time
We use our monochart very frequently to get around the perceived shortcomings of a lot of charts out there
See our usage here: https://github.com/cloudposse/helmfiles/tree/master/releases
Basically the monochart makes implementing most services extremely easy, so we use that combined with Helmfile as our escape hatch
Yeah, that is basically how we deploy our services :)
@pjbecotte Hey! I haven’t tested it extensively but helmfile has a secret feature that allows you to jsonpatch/strategicmergepatch manifests before installing a chart:
https://github.com/roboll/helmfile/pull/673
Would it make it unnecessary to fork charts if you use helmfile template to generate the patches dynamically?
This enhances helmfile so that it can: Treat K8s manifests directories and Kustomize projects as charts Add adhoc chart dependencies on sync/diff/template without forking or modifying chart(s) (#6…
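Based on that PR, a hedged sketch of patching tolerations into a third-party chart without forking it (key names and availability depend on the helmfile version; the chart and toleration values are illustrative):
releases:
  - name: myapp
    chart: stable/someapp
    strategicMergePatches:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp
        spec:
          template:
            spec:
              tolerations:
                - key: dedicated
                  operator: Equal
                  value: myteam
                  effect: NoSchedule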
@pjbecotte what is the reason for the forks? Do you change templates and functionality that the original chart does not provide, or just the values?
Changing templates. Like the public chart doesn’t have ‘tolerations’ as a field on a deployment, and we needed to add it. (And many similar examples).
one option is just to open a PR and suggest a change; tolerations are a common use case imo. The second option I can think of is Replicated Ship; never used it myself but it seems to fit here
Yeah, PRs of course, but waiting weeks for a public project to accept and release isn’t usually in the cards
2019-10-25
2019-10-26
2019-10-30
Hi
When we do helmfile apply it is printing diff output
which contains sensitive info; how can we disable that?
in k8s secrets
?
try adding --suppress-secrets
like helmfile apply --suppress-secrets
like I have multiple helm charts in the helmfile and I want only their status, which deployments have been deployed etc., but not the complete deployment
providing the above flag stops printing sensitive info
I want only their status, which deployments have been deployed etc., but not the complete deployment
this sounds like a different issue than protecting sensitive info! do you actually need it? (and why?)
i only need helm info like
`RESOURCES: ==> v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE help 0/1 1 0 33d
==> v1/Pod(related) NAME READY STATUS RESTARTS AGE help-699b97d548-jd9zg 0/1 Terminating 0 2m58s help-74688d5f45-jm6sx 0/1 ContainerCreating 0 76s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
help ClusterIP 172.30.115.158
==> v1beta1/Ingress NAME HOSTS ADDRESS PORTS AGE help help.dev.onprem.dmsuitecloud.com 80 33d
NOTES: Helm Chart installed : help in namespace dmp-system Your release is named : help.`
no the full ` # Source: help/templates/help-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: com.fico.dmp/instanceId: help com.fico.dmp/name: help reloader.stakater.com/auto: “true” labels: com.fico.dmp/instanceId: help com.fico.dmp/name: help name: help`
got it. that’s not possible today. why would you like that?
Actually I made a docker image for installation using helmfile. I want only the helm chart info to be printed in the logs, not the complete deployments
btw i’m asking because i’ve seen that everyone has different opinions and requirements for which output they want
I see. How would you debug the installation when it failed?
We have implemented the rollback feature for the client; we won't be giving him the complete info about deployments. If the deployment fails then our team will look into it locally
but if you'd like the default output from that helmfile container to include only the k8s resources installed/upgraded by helm (https://sweetops.slack.com/archives/CE5NGCB9Q/p1572505407002900?thread_ts=1572505148.001300&cid=CE5NGCB9Q),
how would your team debug it?
`RESOURCES: ==> v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE help 0/1 1 0 33d
==> v1/Pod(related) NAME READY STATUS RESTARTS AGE help-699b97d548-jd9zg 0/1 Terminating 0 2m58s help-74688d5f45-jm6sx 0/1 ContainerCreating 0 76s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
help ClusterIP 172.30.115.158
==> v1beta1/Ingress NAME HOSTS ADDRESS PORTS AGE help help.dev.onprem.dmsuitecloud.com 80 33d
NOTES: Helm Chart installed : help in namespace dmp-system Your release is named : help.`
i.e. if a rollback happens, the logs will notify the user and the user will contact us, so that the client doesn't have to do anything; we will fix the issue, make a new image and push it
so perhaps you want an ability to configure multiple log output channels? like stdout
contains k8s resources affected by helm only, and something like debug.log
contains all the logs?
actually for this we will run the same locally on our cluster without disabling output and try; the cluster with us is an exact copy of the client's cluster, and with the help of helm diff we can see the changes and try to work on them…
mostly there will only be an image change in the helm chart so we can rectify that easily
ah, it would work out ok!
so is it possible to disable diff output?
no, as i said above
but i’m eager to add a feature to configure what’s included in the helmfile log. does that sound good to you?
yup… I think it would be great if we could add a feature to include logs incrementally, like whether you want diff logs or not, etc.
maybe something like helmfile --log-filter helm,exec,helmfile,...
Yup like this only…so can we disable diff output with this currently?
or even helmfile --info-log-filter helm,exec,helmfile,...
…
nope
all you can do today would be pipe it to tee
and grep only what you want
so try that way if you want something that works today
Yup thanks…. but the logs are cluttered, like we don't know how long the notes and other things will be, so it won't work well… but thanks for your help..
yeah, but I think helmfile apply | grep -v '^[+-]'
would mostly work
as the diff output mostly begins with any of +
, -
Yup thanks…i will check it..thanks alot for your help!..
or even better helmfile apply | grep -Ev '^(\+|-|Comparing |Release was not present in Helm|\*)'
my pleasure! good luck
2019-10-31
Hi again… Need some input on an issue I'm facing with the kiam helmfile where I am trying to set annotations at the object level, but somehow the annotations are not coming up.. below are snippets of what I am getting and what I need.
While running the helmfile.. I am not getting the annotations at object level
+ # Source: kiam/templates/agent-daemonset.yaml
+ apiVersion: apps/v1beta2
+ kind: DaemonSet
+ metadata:
+ labels:
+ app: kiam
+ chart: kiam-2.5.2
+ component: "agent"
+ heritage: Tiller
+ release: kiam
+ name: kiam-agent
+ spec:
+ selector:
+ matchLabels:
+ app: kiam
+ component: "agent"
+ release: kiam
+ template:
+ metadata:
+ annotations:
+ secret.reloader.stakater.com/reload: kiam-agent-certificate-secret,kiam-ca-cert
The expected output should be something like
+ # Source: kiam/templates/agent-daemonset.yaml
+ apiVersion: apps/v1beta2
+ kind: DaemonSet
+ metadata:
+ annotations:
+ secret.reloader.stakater.com/reload: kiam-agent-certificate-secret,kiam-ca-cert
+ labels:
+ app: kiam
+ chart: kiam-2.5.2
+ component: "agent"
+ heritage: Tiller
+ release: kiam
+ name: kiam-agent
+ spec:
+ selector:
+ matchLabels:
+ app: kiam
+ component: "agent"
+ release: kiam
+ template:
+ metadata:
+
+
hey!
unfortunately the kiam chart doesn’t seem to support annotations at the daemonset level
you’ll see there’s no templates set up for the daemonset annotations
I was working to add the annotations in helmfile.. I think we can do something like this in helmfile for kiam
hooks:
- events: ["postsync"]
command: "/bin/sh"
args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet/{{`{{ .Release.Name }}`}}-agent secret.reloader.stakater.com/reload=kiam-agent-certificate-secret,kiam-ca-cert"]
- events: ["postsync"]
command: "/bin/sh"
args: ["-c", "kubectl annotate --overwrite --namespace={{`{{ .Release.Namespace }}`}} DaemonSet/{{`{{ .Release.Name }}`}}-server secret.reloader.stakater.com/reload=kiam-server-certificate-secret,kiam-ca-cert"]
yeah maybe..
did it work?
yes.. it worked
awesome!
ideally helmfile should provide a way to patch some resources included in the release, without forking the original chart
Opened up a PR for cloudposse’s nginx-ingress helmfile to add PROXY support, tested and works great in my staging cluster: https://github.com/cloudposse/helmfiles/pull/199
what [nginx-ingress] This adds configurability (via env NGINX_INGRESS_USE_PROXY_PROTOCOL) to use the PROXY protocol headers to requests why Allow better communication of things like the actual c…
Is there a helmfile icon or logo somewhere? Preferably SVG.
i’d love to but we don’t have one today
Is there a way to treat an entire helmfile as an atomic release? I.e. rollback ALL releases in the helmfile if ANY fail?
hey! currently, no, helmfile doesn't have such a feature. probably worth a feature request?
but why do you need that?
i think your releases are usually backward-compatible, and therefore you don't need to roll back successful releases rolled out before the failed release
or perhaps you want helmfile to roll back only the failed release?
Ideally each release would be backwards compatible yes, but we’re maturing to that point as we break apart a single runtime/release into multiples
And as it sits currently, we’d like to ensure the same version of code is deployed at a given time with rollback if it is not.
I’ll open an request. Appreciate the response @mumoshu
that makes sense. thx for clarifying! im looking forward to the feature request
just to be sure, what you want helmfile to do is basically running helm rollback $RELEASE_NAME $(helm history --output json $RELEASE_NAME | jq -r .[].revision | tail -n 2 | head -n 1)
for all the affected releases in a failed Helmfile run?
@Cameron Boulton
I think so? Ideally the logic that’s already used when atomic:true
for a given release fails
But instead for ALL releases if ANY release in the helmfile release array/list fails
If that makes sense?
jq -r .[].revision | tail -n 2 | head -n 1
is for obtaining the second latest revision of the release, assuming the latest revision is the one created by the failed helmfile run
But instead for ALL releases if ANY release in the helmfile release array/list fails
im still trying to understand. how is it different from for all the affected releases in a failed Helmfile run
?
It is not different from for all the affected releases in a failed Helmfile run
But for all the affected releases in a failed Helmfile run
is different from current helmfile behavior today correct?
Ideally the logic that’s already used when atomic:true
for a given release fails
yeah that makes sense. implementation-wise, we can’t use the exact logic used by --atomic
as it’s just a flag provided by helm
itself
Ah
But for all the affected releases in a failed Helmfile run
is different from current helmfile behavior today correct?
ah, so you’re talking about the current behavior ouf helmfile when a release has atomic: true
set?
if so, yes, it rolls back the failed release only
Yes, we’re using that now
gotcha
then what we need might be
a new flag like helmfile apply --rollback-on-failure
instructs helmfile to (1) roll back the failed release if the release didn't have atomic: true
set and (2) roll back all the successful releases rolled out before the failed one with https://sweetops.slack.com/archives/CE5NGCB9Q/p1572565299026800?thread_ts=1572563989.024200&cid=CE5NGCB9Q
just to be sure, what you want helmfile to do is basically running helm rollback $RELEASE_NAME $(helm history --output json $RELEASE_NAME | jq -r .[].revision | tail -n 2 | head -n 1)
for all the affected releases in a failed Helmfile run?
i.e. helmfile doesn’t need to explicitly rollback the failed release if the release had atomic: true
set
as atomic: true
results in helm upgrade --atomic
that would rollback the failed release automatically for you
does this make sense?
Yes, the actual failed release(s) themselves would already be rolled back if using atomic: true
So helmfile really only needs to rollback any successful releases if any one failed
absolutely!
i thought we’d better name it helmfile apply --atomic
but …
How do you decide between a helmfile
argument and an option in the helmfile YAML?
Are the latter for helm
options only?
generally any helmfile-run-wide operational option is available via flags only
Gotcha
helmfile apply --atomic
seems great to me
yeah but i think i have a few questions if we do so
like
should it imply atomic: true
in all the releases?
That seems more problematic to me, but still thinking
or maybe we can just deprecate atomic: true
in favor of helmfile apply --atomic
?
Mmm but I think there are cases such as today’s behavior where people want PER release atomicity, but not across ALL releases (whole helmfile)
Does that make sense?
or evangelize using atomic: true
and not helmfile apply --atomic
when you can ensure all the releases are backward compatible and you do need an automated rollback?
Exactly
Backwards compatible or maybe entirely unrelated
Such as kafka and redis
One might not care if kafka failed but redis succeeded
makes sense
Which is the behavior we have today that probably should not change
apply --atomic
would be a superset
and helmfile apply --atomic
makes atomic: true
irrelevant, right?
Seems like it does logically
As in, if you used --atomic
but omitted atomic: true
and > 0 release failed any successful would be rolled back
as helmfile
would roll back the failed release regardless of whether it had atomic: true
or not anyway
Right
But I guess what it lets you do:
exactly
Have atomicity PER release (like today)
And then optionally you COULD use --atomic
on demand if/when it was needed
And then not use --atomic
but still keep the per release atomic:true
behavior in that case
Does that make sense?
yes i believe so
so you basically use helmfile apply --atomic
only when one of your new releases has known backward-incompatibility
Exactly
Which for some users might be always
But at least it doesn’t force others with a changed behavior of what we have today
that’s great
Really appreciate the conversation @mumoshu
Would you still like me to open a GitHub issue/feature request @mumoshu?
so am I! it was very inspiring. I'm awaiting your feature request (probably including a link to this slack thread would help provide context to other users)
yes, i’d appreciate it if you could do so!
it’s a bit annoying task given we already have a great conversation here and settled on something
No problem. I understand the benefits of the formality for tracking, updates and visibility to other users of the project.
i just want to ensure that helmfile looks like being developed openly
Exactly
exactly!
thx for the conversation and your understanding!
Welcome and thank you