#helmfile (2021-04)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2021-04-06
any idea why it does not work like values?
skipping missing values file matching "../config/{{ .Release.Name }}/patches.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/{{ .Environment.Name }}-patches.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/merge.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/merge.yaml.gotmpl"
skipping missing values file matching "../config/{{ .Release.Name }}/{{ .Environment.Name }}-merge.yaml"
skipping missing values file matching "../config/app/values.yaml"
Successfully generated the value file at ../config/app/values.yaml.gotmpl. produced:
My templates settings are:
templates:
  default: &default
    missingFileHandler: Debug
    values:
    - ../config/{{ .Release.Name }}/values.yaml
    - ../config/{{ .Release.Name }}/values.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}.yaml
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}.yaml.gotmpl
    secrets:
    - ../config/{{ .Release.Name }}/secrets.yaml
    - ../config/{{ .Release.Name }}/secrets.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-secrets.yaml
    strategicMergePatches:
    - ../config/{{ .Release.Name }}/merge.yaml
    - ../config/{{ .Release.Name }}/merge.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-merge.yaml
    jsonPatches:
    - ../config/{{ .Release.Name }}/patches.yaml
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-patches.yaml
Did you find an example showing it should work like this?
no, I just tried it based on values, and that works. So in the same file I have the Release.Name… and a few lines later I don't?
I don't see that strategicMergePatches and jsonPatches support files with templated (rendered) names.
it may be better to use valuesTemplate in templates, btw, but it's not related to the issue you're seeing
so I cannot give a dynamic path for the patch file? I need to set it under all apps in the release?
try and see. If a hardcoded file name works, you may submit a PR to get generated names working there, or open an issue.
jsonPatches:
- ../config/app/patches.yaml
then it finds the patch and patches the resources I would like.
and what about strategicMergePatches?
same
if I hardcode the app name it finds the file… if not, it does not replace the {{ }} with the value and searches under ../config/{{ .Release.Name }}/patches.yaml,
where I don't have the file
well, sometimes a hack with quoting like
{{`{{ .Release.Labels.app }}`}}
may work
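For context, the backtick form is Go template escaping: an action containing a raw string literal simply emits that string, so a first render pass turns the escaped expression into a plain template expression that a later, release-aware pass can resolve. A minimal sketch (paths are illustrative); note that per the rest of this thread it only helps where the file list itself is rendered as a template, e.g. values:

```yaml
values:
# As written below, {{` ... `}} is a Go template raw-string action:
# the first render pass emits the literal inner text,
# i.e. "envs/{{ .Release.Name }}.yaml" ...
- envs/{{`{{ .Release.Name }}`}}.yaml
# ... which the per-release pass can then resolve, e.g. to envs/app.yaml
```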
skipping missing values file matching "../config/{{ .Release.Name }}/patches.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/{{ .Environment.Name }}-patches.yaml"
skipping missing values file matching "../config/{{{{ .Release.Name }}}}/merge.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/merge.yaml.gotmpl"
same
it looks like it's templating only values and secrets. Is that correct?
as I said above, you may submit a PR to get generated names to function there, or open an issue. It may be a trivial fix, or may not, IDK
@voron I can confirm, we’re using the same approach
dockerized: &dockerized
  namespace: {{ .Environment.Values.namespace }}
  missingFileHandler: Warn
  labels:
    group: docker
  values:
  - envs/dockerized.yaml.gotmpl
  - envs/{{`{{ .Environment.Name }}`}}/dockerized.yaml.gotmpl
  - envs/common/dockerized/{{`{{ .Release.Name }}`}}.yaml.gotmpl
  - envs/{{`{{ .Environment.Name }}`}}/{{`{{ .Release.Name }}`}}.yaml.gotmpl
there is no issue w/ values/valuesTemplate or secrets. It's specific to strategicMergePatches and jsonPatches
yes. I have the issue with patches, not with values and secrets.
@Balazs Varga did you try the hack with
{{`{{ ... }}`}}
?
yes, I tried… it did not work
file a PR or an issue on GH, that’s all I can advise.
will do. Now I am trying to find where it templates the secrets and values. I mean the path only, because I need the path to be templated; the patch itself contains hardcoded data.
well, another possible option is to fork the chart and fix it to get rid of the patches
yeah, but this looks like a more elegant way than modifying charts.
I will spend today trying to solve this; if I can't, I will modify the charts
Hey! I guess you would like valuesTemplate
@Balazs Varga cc @voron
For more info, see valuesTemplate in https://github.com/roboll/helmfile/blob/master/docs/writing-helmfile.md#release-template--conventional-directory-structure
yes, but if I try to use it here it does not work.
I thought there's no reason it doesn't work under templates.
You might be just merging-in the template afterwards using <<: *default, right?
How did your helmfile.yaml with templates and releases look when you found valuesTemplate not working?
The templates I mentioned. The releases were simple, like this:
releases:
- name: init
  namespace: default
  chart: ../chart/init
  <<: *default
so the idea was: if I put the values and secrets and patches into the selected folder, it will template and use them… it searches under those folders, but as you can see it does not template the patches lines:
skipping missing values file matching "../config/{{ .Release.Name }}/patches.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/{{ .Environment.Name }}-patches.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/merge.yaml"
skipping missing values file matching "../config/{{ .Release.Name }}/merge.yaml.gotmpl"
skipping missing values file matching "../config/{{ .Release.Name }}/{{ .Environment.Name }}-merge.yaml"
skipping missing values file matching "../config/app/values.yaml"
@Balazs Varga Could you try (literally) valuesTemplate instead of values for values, then?
You can't access the release template from within values, which is supposed to be a YAML array of plain strings.
On the other hand, each item in valuesTemplate is considered a go template with access to the release template
yeah will try… few sec
Also if you'd need to do the same templating on file paths in secrets, unfortunately it isn't supported today. But it should be a relatively easy addition to Helmfile. Please feel free to open a dedicated feature request for that. It should look like secretsTemplate
thanks. using valuesTemplate it works
that is fine to me if I cannot put merge into secret “folder”.
Oh really? Okay then! I just thought you'd want the same level of reusability for the secrets array, too
no, I just wanted to have jsonpatches and mergepatches under config folder to have a light helmfile.d file
templates:
  default: &default
    missingFileHandler: Debug
    valuesTemplate:
    - ../config/{{ .Release.Name }}/values.yaml
    - ../config/{{ .Release.Name }}/values.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}.yaml
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}.yaml.gotmpl
    - ../config/{{ .Release.Name }}/merge.yaml
    - ../config/{{ .Release.Name }}/merge.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-merge.yaml
    - ../config/{{ .Release.Name }}/patches.yaml
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-patches.yaml
    secrets:
    - ../config/{{ .Release.Name }}/secrets.yaml
    - ../config/{{ .Release.Name }}/secrets.yaml.gotmpl
    - ../config/{{ .Release.Name }}/{{ .Environment.Name }}-secrets.yaml
this worked for me… in case somebody else has the same issue.
Successfully generated the value file at ../config/test/merge.yaml.gotmpl. produced:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
spec:
...
Ah, so what I wanted to say was that I'm afraid this part isn't working as you might have expected:
secrets:
- ../config/{{ .Release.Name }}/secrets.yaml
Almost certainly this is passed through as-is, without templating: not ../config/test/secrets.yaml as you might have expected, but ../config/{{ .Release.Name }}/secrets.yaml
that still works
skipping missing values file matching "../config/prometheus/default.yaml"
skipping missing values file matching "../config/prometheus/default.yaml.gotmpl"
skipping missing values file matching "../config/prometheus/merge.yaml"
skipping missing values file matching "../config/prometheus/merge.yaml.gotmpl"
skipping missing values file matching "../config/prometheus/default-merge.yaml"
skipping missing values file matching "../config/prometheus/patches.yaml"
skipping missing values file matching "../config/prometheus/default-patches.yaml"
skipping missing secrets file matching "../config/prometheus/secrets.yaml"
skipping missing secrets file matching "../config/prometheus/secrets.yaml.gotmpl"
skipping missing secrets file matching "../config/prometheus/default-secrets.yaml"
I just don’t use it here.
oh really!!
We have used templated names with secrets for some time now, similar to:
secrets:
- ../../live/{{ .Environment.Name }}/{{`{{ base .Release.Chart }}`}}/secrets-{{`{{ .Release.Name }}`}}.yaml
2021-04-07
Hi everyone, this is probably me missing something simple but for some reason I am not able to use the new “waitForJobs” config.
I added it to the release section as follows
releases:
- name: test
  ...
  wait: true
  waitForJobs: true
  timeout: 60

helmfiles:
...
{{ if eq .Environment.Name "cluster" }}
- path: environments/cluster/test.yaml
{{ end }}

environments:
  cluster:
The helmfile apply command fails with:
...
[1] in /home/helmfile/helm-installer/resources/helmfile.yaml: in .helmfiles[1]: in environments/cluster/test.yaml: failed to read test.yaml: reading document at index 1: yaml: unmarshal errors:
[1] line 7: field waitForJobs not found in type state.ReleaseSpec
Helm and helmfile versions:
bash-5.0# helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}
bash-5.0# helmfile version
helmfile version v0.138.7
Hi, this is a recent addition, see https://github.com/roboll/helmfile/pull/1715 and related commit merged in master: https://github.com/roboll/helmfile/commit/2618cfb38b20d867a977f2b295059893d23e507a but not yet released.
Oh, I did not notice this. I thought 138.7 had it because the README mentioned the new config. Thanks for the clarification!
Yep, the README is the one from the master branch
Is there any ETA for the next version?
2021-04-08
2021-04-11
Does anyone rely on helmfile's current behavior that helmfile -l name=foo apply does NOT fail when foo has some needs on other releases?
https://github.com/roboll/helmfile/pull/1772 can be a breaking change to you so please chime-in and leave your comments if you have opinions
2021-04-13
Good evening everyone. I'm trying to learn helmfile and am struggling to figure out why I'm getting the following errors. My directory structure is:
.(helmfile.d)
├── generic
│ ├── helmfile.yaml
│ ├── 01-secrets-management
│ │ ├── dex
│ │ │ ├── helmfile.yaml
│ │ │ └── values.yaml
│ │ ├── helmfile.yaml
│ │ ├── oauth2-proxy
│ │ │ ├── arm64-values.yaml
│ │ │ ├── secrets
│ │ │ ├── values.yaml
│ │ │ └── wait_for_endpoint.sh
│ │ ├── vault-operator
│ │ │ ├── helmfile.yaml
│ │ │ ├── secrets
│ │ │ │ └── vault-cr-secret-dec.yaml
│ │ │ └── values.yaml
│ │ └── vault-secrets-webhook
│ │ └── values.yaml
│ └── common
│ ├── config.yaml
│ ├── environments.yaml
│ ├── helmdefaults.yaml
│ └── repos.yaml
└── helmfile.yaml
The helmfile.yaml in . is:
---
helmfiles:
- "*/*"
The helmfile.yaml in generic is:
helmfiles:
- "*"
The helmfile.yaml in 01-secrets-management is:
bases:
- ../common/environments.yaml
- ../common/repos.yaml
- ../common/helmdefaults.yaml
finally:
The helmfile.yaml in dex is:
bases:
- ../../common/environments.yaml
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
## ************************************
## Start of DEX installation
## ************************************
releases:
- name: dex
  namespace: {{ .Values.dex.namespace }}
  createNamespace: true
  labels:
    tier: "secrets-management"
    app: dex
  chart: repo/helm-charts
  version: {{ .Values.dex.version }}
  values:
  - values.yaml
I'm running helmfile -e default --log-level debug lint from the dex directory. I get the following output/error:
The first error means that it's not finding/using the common bases, and I'm not understanding why. The last error seems to cascade from the first.
perhaps I've misread/misunderstood the documentation on bases, but re-reading it, it seems that helmfile is trying to render helmfile.yaml BEFORE rendering the other layers. Is that correct?
so if that's the case, that means I can't use anything defined in environments because it's not yet read/rendered. Correct?
seems kinda useless to me if that’s the case.
FTR, if this is how bases works, it seems it's not very useful in this case and would be more useful in top-level helmfile.yaml files.
We can't resolve that chicken-and-egg problem automagically. I may still be missing something, but at a glance what you want seems like:
bases:
- ../../common/environments.yaml
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
---
## ************************************
## Start of DEX installation
## ************************************
releases:
- name: dex
  namespace: {{ .Values.dex.namespace }}
  createNamespace: true
  labels:
    tier: "secrets-management"
    app: dex
  chart: repo/helm-charts
  version: {{ .Values.dex.version }}
  values:
  - values.yaml
Notice the --- so that the first part is rendered as a template to produce a YAML structure that includes bases. The bases and the env values are loaded before the latter part is rendered as a template
ahh! Lemme give that a try real fast.
nice!
not outta the woods yet…but that got me past that roadblock
how much time have you got, @mumoshu? Is it late where you are? I have questions, thoughts, and possibly ideas.
and I’m still very new to helmfile so I’m struggling to get a good working set of helmfiles here.
The error has changed… but I think this is good, as it seems the bases files are getting read now. These are the new errors.
This looks like possibly overlapping values between my config.yaml or environments.yaml files.
actually, this looks like it's reading environments.yaml, which in turn references config.yaml within the directory where environments.yaml resides. This might be a referential problem with respect to… something.
yes it does. probably https://sweetops.slack.com/archives/CE5NGCB9Q/p1618385513079000?thread_ts=1618381282.078300&cid=CE5NGCB9Q clarifies that a bit?
files referenced from within a sub-helmfile is relative to the sub-helmfile, to make the sub-helmfile portable(not dependent on the parent-helmfile)
taking another look at https://github.com/roboll/helmfile#paths-overview
so
Relative paths referenced in the Helmfile manifest itself are relative to that manifest
does this mean that in sub-helmfile.yaml files the path references are relative to those files, or relative to the top-level helmfile.yaml?
I’m trying to use paths in sub-files as relative to the sub-files.
wow.
that’s the case.
@mumoshu that’s a super-confusing thing methinks.
it’s also not very portable imo.
why? (I'm not yet sure if I fully understand your use case)
so to me the current behaviour makes sub-helmfiles portable, which is nice
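To illustrate the portability point with a hypothetical two-file layout (all names here are made up): each helmfile resolves its relative paths against its own directory, so the sub-helmfile behaves the same whether invoked directly or via the parent:

```yaml
# ./helmfile.yaml (parent)
helmfiles:
- "apps/dex/helmfile.yaml"   # resolved relative to ./

# ./apps/dex/helmfile.yaml (sub-helmfile)
releases:
- name: dex
  chart: repo/dex
  values:
  - values.yaml              # resolved relative to ./apps/dex/, not ./
```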
I need to learn something: what would be the ideal behaviour, and how would you rewrite your yamls with that ideal behaviour?
well, maybe I’m doing something wrong. So lemme articulate what I’m doing first and then I’ll attempt to explain what and why I’m doing it.
first, I’m still trying to read through the docs and learn helmfile. The docs are good, but not very simple to follow referentially.
so, all the stuff I showed here is relevant
and the way I'm approaching this is to get helmfile to work for the dex release first. The ultimate goal, however, is to run helmfile from a different directory (one that references the environment I want to deploy).
so, my thinking is: get it working first in dex. Then get helmfile to work one directory up from dex (01-secrets-management), and so on, all the way up to generic.
I’m trying to keep this DRY…
I’m not sure any of this makes sense. It’s difficult to articulate.
Sounds good so far
oh good
ok, so, if I run helmfile from dex, it uses the ../../common/environments.yaml file to define environments.
the ../../common/environments.yaml file references a file called config.yaml, but I'm in dex, so the reference to config.yaml has to be relative to dex; however, ../../common/environments.yaml will need to be changed when I run helmfile from a different directory.
that's what I mean by "not very portable"
it would be better (at least in the way I'm using this) to reference files relative to the manifest that references them.
So, I'm in /var/tmp/helmfile-work/helmfile/helmfile.d/generic/01-secrets-management/dex and ../../common/environments.yaml contains:
environments:
  default:
    values:
    - ../../common/config.yaml
  production:
    values:
    - ../../common/config.yaml
ah gotcha!
fwiw, it doesn't look like what bases is supposed to help with today.
the contents of common are:
at 00:58:51 ❯ ls -ltr ../../common
total 16
-rw-rw-r-- 1 jimconn jimconn 540 Apr 13 00:02 repos.yaml
-rw-r--r-- 1 jimconn vboxsf 486 Apr 13 22:10 config.yaml
-rw-rw-r-- 1 jimconn jimconn 129 Apr 13 23:20 environments.yaml
-rw-rw-r-- 1 jimconn jimconn 37 Apr 13 23:26 helmdefaults.yaml
ah
ok
so I’m not using it properly?
I usually recommend using
{{ readFile "common.yaml.gotmpl" | tpl . $someData }}
---
# releases, repositories, etc
ok
I can give that a try in a moment
one more quick question then… something that's not really documented very well, I think
so I’m gonna switch gears on you for a min
this might be a really easy question for you to answer.
I thought there were open issue(s) about adding parameters to bases, but I can't find the exact link URLs for them now
ok
all right, here's the helmfile.yaml again for dex:
---
bases:
- ../../common/environments.yaml
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
---
## ************************************
## Start of DEX installation
## ************************************
releases:
- name: dex
  namespace: {{ .Values.dex.namespace }}
  createNamespace: true
  labels:
    tier: "secrets-management"
    app: dex
  chart: repo/dex
  version: {{ .Values.dex.version }}
  values:
  - values.yaml.gotmpl
  set:
  - name: var.aws_nlb_with_tls_termination_at_lb
    value: {{ env "AWS_NLB" | default false }}
  - name: var.arm64_support
    value: {{ env "ARM64" | default false }}
I'm trying to use a templatized values.yaml. My values.yaml.gotmpl (partial) is:
# DO NOT increase replicas to >1 during the initial install.
# There is a bug which causes the GRPC TLS certs not to be issued because of webhook race conditions.
# Bug: https://github.com/helm/charts/issues/24229
#replicas: 3
{{- if var.arm64_support }}
image: ghcr.io/dexidp/dex
imageTag: "v2.26.0"
{{- end }}
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "utility"
  effect: "NoSchedule"
I am not able to reference the set parameters in the values.yaml.gotmpl; not sure what I'm doing wrong here.
ah, well, I haven't really tried to use it that way, but after rereading https://github.com/roboll/helmfile/issues/688 this might work:
values:
- someValuePassedToBases: "foobar"
---
bases:
- common.yaml
---
#releases, repositories, etc
values.yaml.gotmpl is rendered by helmfile; you have access to helmfile-managed values only there, whereas set sets values that are passed to helm
ahhh
ok, so I need to set those in a helmfile values somewhere. Maybe the thing you just linked?
values:
- key: "something"
?
you usually use helmfile (environment) values to produce a series of set entries and values files.
The set and values entries rendered and merged by helmfile are passed to helm
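As a sketch of that flow (the value name arm64_support and the set key are illustrative): helmfile renders the set entry from its own environment values, then passes the result to helm:

```yaml
environments:
  default:
    values:
    - arm64_support: false
---
releases:
- name: dex
  chart: repo/dex
  set:
  # rendered by helmfile from the environment values above,
  # then passed to helm much like --set arm64Support=false
  - name: arm64Support
    value: {{ .Values.arm64_support }}
```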
ok, so I need to set those in a helmfile values somewhere. Maybe the thing you just linked?
absolutely!
right, so I'm trying to localize those to just dex in this case. Obviously, I'm not only going to use helmfile to deploy dex; it will be used to deploy a bunch of stuff. So trying to find a way to make this work both "locally" (for just dex) and from more of an environment-deployment perspective is where I'm getting lost.
ok! Let me think through how to do that.
wishing you good luck!
let me also note that I believe it would be worth trying https://sweetops.slack.com/archives/CE5NGCB9Q/p1618387448086400?thread_ts=1618381282.078300&cid=CE5NGCB9Q
helmfile is a rich ecosystem of stuff. Trying to wrap my head around it all for my use-case is complicated.
I appreciate your help!
sounds right- keep posting comments/questions/feedbacks here. i or other people will respond soon!
2021-04-14
2021-04-15
hey guys, I'm trying to add helmfile to a Docker image (Dockerfile) and am not sure if https://github.com/roboll/helmfile/releases/download/v0.138.7/helmfile_linux_amd64 is the actual binary?
can we set up a "safety" to avoid misdeploys? I mean, if I accidentally deploy from a wrong branch to a cluster: a warning message or interactive prompt… and not -i, because we can forget it. Maybe in a default config?
I have the following directory structure, where helmfile.d has the helmfile manifests. The helmfile directory tree has my environments, and this is from where I want to invoke environmental helmfile runs:
├── helmfile
│ ├── envs
│ │ ├── dev
│ │ │ └── cluster-a
│ │ ├── preprod
│ │ └── production
│ └── shlib
└── helmfile.d
└── generic
├── 01a-network-and-proxies
│ ├── ambassador
│ ├── external-dns
│ └── ingress-nginx
├── 01b-secrets-management
│ ├── certmanager
│ ├── dex
│ ├── oauth2-proxy
│ │ └── secrets
│ ├── vault-operator
│ │ └── secrets
│ └── vault-secrets-webhook
└── common
In cluster-a
there’s a helmfile.yaml
and config.yaml
:
❯ \cat helmfile.yaml
environments:
  default:
    values:
    - config.yaml

helmfiles:
- "../../../../helmfiles.d/*"
The concept is that any specific configuration values for cluster-a are specified in the config.yaml inside the cluster-a directory. However, when I test invoking helmfile from this path, I get the error:
envvals_loader: loaded config.yaml:map[values:map[keyname:foobar]]
no matches for path: ../../../../helmfiles.d/*
merged environment: &{default map[values:map[keyname:foobar]] map[]}
helm:XVlBz> v3.4.1+gc4e7485
0 release(s) found in helmfile.yaml
err: no releases found that matches specified selector() and environment(default), in any helmfile
I don’t understand why.
one thing to note is that there's a helmfile.yaml in each directory which specifies the helmfiles property:
.(helmfile.d)
├── generic
│ ├── 01a-network-and-proxies
│ │ └── helmfile.yaml
│ ├── 01b-secrets-management
│ │ └── helmfile.yaml
│ └── helmfile.yaml
└── helmfile.yaml
in helmfile.d/helmfile.yaml:
---
helmfiles:
- "generic/*"
in generic/helmfile.yaml:
helmfiles:
- "*/*.yaml"
won't the helmfiles: attribute cause helmfile to "walk up the chain", so to speak?
sigh, figured it out.
for posterity's sake, the helmfile.yaml in the cluster-a directory had a misspelled directory name: - "../../../../helmfiles.d/*" (note helmfiles.d vs helmfile.d)
@mumoshu https://github.com/roboll/helmfile/issues/1045#issuecomment-820870785 might be interesting to you?
sigh. I’m really struggling to understand certain points in this documentation. I keep getting blocked on aspects of how to properly use helmfile
and I either can’t find documentation to meet my needs or the documentation I think will help me either doesn’t make sense or is not as verbose as necessary to understand the full aspect of what the documentation is trying to point out.
For instance, I’m blocked on certain aspects of this project (tickets and questions are submitted) and I’m moving on to other aspects of this project where I’m not blocked. One of those aspects is the ability to logically select charts. It doesn’t work the way I’m trying to implement so I’m reading the documentation, which doesn’t make sense to me. I don’t understand the context of inheritance with respect to sub-helmfiles and the inherited properties. That point is not well spelled out. What do you mean by inherited in context? Everything? Certain properties? The selectors? I don’t understand.
The use-case I’m trying to solve is to simply use a selector to run helmfiles only identified by that selector. Everything will run exactly the same as if I didn’t use any selector except ONLY the helmfiles specified by that selector (or a negated selector) would run. The details of how that works should not be something the end-user is concerned about if it’s clear-cut as what I was hoping would be the case. I’m simply not understanding, which I think is because the documentation is a little too sparse, but there is a lot of documentation. I can’t put my finger on the problem. It might be me, which I can concede. It shouldn’t be this complicated to understand what software can do and I keep running into roadblocks here. I want to use helmfile because I believe it does what we need it to do. Some assistance would be greatly appreciated.
inherited in context?
Hey! Helmfile doesn’t inherit anything to sub-helmfiles. Inheritance of helmfile environments and values usually happen only between the parent helmfile.yaml and bases. Does that clarify it a bit?
That’s also the foundational thing under my comment on your previous question https://github.com/roboll/helmfile/issues/1045#issuecomment-821910447
I've been trying to use environments in my main helmfile and have multiple sub helmfiles in my releases folder. I wanted all the sub helmfiles to pick up whatever the value I defined in a speci…
Selectors aren't inherited. It's just that the user specifies the selector and helmfile uses it to filter releases across all the involved helmfile.yaml files.
Environment and environment values aren’t inherited. That’s why this doesn’t work:
environments:
  default:
    values:
    - config.yaml
---
helmfiles:
- "../../../../helmfile.d/"
Seeing your frustration, the documentation may be outdated, incomplete, or simply incorrect.
Did you find any specific documentation saying that environments and values are inherited down to sub-helmfiles? If so, we’d definitely need to fix it.
Does the documentation say if anything’s inherited to sub-helfmiles or bases?
I thought I didn't even explicitly say in the documentation that bases inherit the parent's environments and values, as depending on the parent helmfile.yaml always sounded like a bad idea to me.
Everything will run exactly the same as if I didn’t use any selector except ONLY the helmfiles specified by that selector (or a negated selector) would run.
--selector (-l) works that way. To do so, Helmfile requires you to make each sub-helmfile independently consumable. That's why environments aren't inherited down to sub-helmfiles, and that's the basis of the comments starting from https://sweetops.slack.com/archives/CE5NGCB9Q/p1618707461115400?thread_ts=1618551534.114100&cid=CE5NGCB9Q
Would there be anything I’ve not yet answered?
Hey @mumoshu, I also have a selector question: are there maybe reserved words helmfile doesn't allow as label selectors?
Consider a simple example with 3 labels in 2 releases:
repositories:
- name: datawire
  url: https://www.getambassador.io
- name: incubator
  url: https://charts.helm.sh/incubator

releases:
- name: ambassador
  namespace: ambassador
  labels:
    chart: ambassador
    namespace: ambassador
    foo: bar
  chart: datawire/ambassador
  version: 6.6.2
- name: raw
  namespace: ambassador
  labels:
    chart: ambassador
    namespace: ambassador
    foo: bar
  chart: incubator/raw
  version: 0.2.5
If I run helmfile -l namespace=ambassador diff, both releases are selected in the diff.
If I run helmfile -l foo=bar diff, both releases are selected in the diff.
However, if I run helmfile -l chart=ambassador diff, only the first release is selected in the diff. This is on v0.138.6.
Yes. As far as I remember, chart and name are reserved; each has the respective value of the release
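Given that, a sketch of how to avoid the reserved keys (the component label name is illustrative): since helmfile derives the built-in chart label from the release's chart, use a different key for your own grouping:

```yaml
releases:
- name: raw
  namespace: ambassador
  labels:
    # don't shadow the built-in name/chart labels;
    # a custom key selects reliably: helmfile -l component=ambassador diff
    component: ambassador
  chart: incubator/raw
  version: 0.2.5
```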
hah, thank you! it was confusing me for the longest time (why some releases weren’t being applied)
Ah, that makes sense! Probably we'd better enhance the docs to note the reserved labels, and make it a validation error when you try to override a reserved label
Would you mind opening issues?
Thank you for the response @mumoshu! Today is my Sunday and I have other obligations today but I will answer your questions here tomorrow and also your comment for my GH issue! Very much appreciated!
@mumoshu I updated my comments in the GH issue
2021-04-16
Do I have to explicitly define a reference to each of the values I include under a named environment block in a release, or is there a tidier way to bulk-include the contents of the environment block on top of the release values?
I feel daunted compared to Jim's previous question ;) but I managed to break this down to a simple example and suspect I'm just "doing helmfile wrong", though the environments block in the README implies to me that this should work?
basically, I did helm create my-chart and then wrote this helmfile.yaml for it:
environments:
  dev:
    values:
    - fullnameOverride: my-chart-dev
  live:
    values:
    - fullnameOverride: my-chart-live

releases:
- name: my-chart
  chart: ../.
  namespace: chart-test
  version: 0.1.0
  # values:
  # - fullnameOverride: {{ .Values.fullnameOverride }}
so helmfile should deploy my chart where the full name for objects is overridden by the environment specific value.
however, helmfile doesn’t inherit those values unless I uncomment the last two lines and explicitly declare each variable I want to make environmentally conditional? Are all the environment values not inherited? I’m hoping not to have to write out each variable in the releases block, otherwise I might as well go for the - ./values/{{ .Environment.Name }}.yaml
method and I wanted to avoid having two full copies of the values.yaml..?
@jedineeper Hey! Well, honestly speaking I don’t understand how the documentation can make you think that environment values are inherited by release values. Probably the documentation needs to specifically say it doesn’t?
Environment values are used to render helmfile.yaml templates and helmfile’s values gotmpl files only.
Helmfile doesn’t automatically pass those values to Helm (as release values).
Uncommenting the last two lines is the way I understand using helmfile with environments.
Note that the usage of {{ .Values.fullnameOverride }}
in the chart files and the usage of {{ .Values.fullnameOverride }}
in helmfile refer to separate things.
Inside the chart, .Values
refers to the helm chart release’s values:
metadata:
labels:
foo: {{ .Values.fullnameOverride }}
refers to this value:
releases:
- name: my-chart
chart: ../.
namespace: chart-test
version: 0.1.0
values:
- fullnameOverride: bar
So the label foo=bar.
Inside the helmfile, .Values
refers to the environment’s values:
releases:
- name: my-chart
chart: ../.
namespace: chart-test
version: 0.1.0
values:
- fullnameOverride: {{ .Values.fullnameOverride }}
refers to these values:
environments:
dev:
values:
- fullnameOverride: my-chart-dev
live:
values:
- fullnameOverride: my-chart-live
and you have to select between them with the -e
environment argument.
The last two lines bridge the two cases together.
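The two scopes above can be sketched in Python. This is a paraphrase of the explanation, not helmfile internals: environment values only reach Helm if the helmfile template explicitly copies them into a release's `values:`.

```python
environments = {
    "dev": {"fullnameOverride": "my-chart-dev"},
    "live": {"fullnameOverride": "my-chart-live"},
}

def render_release(env_name: str, bridge: bool) -> dict:
    """Return the values that would be handed to Helm for the release."""
    env_values = environments[env_name]   # helmfile's .Values scope
    release_values = {}                   # the chart's .Values scope
    if bridge:
        # Equivalent of uncommenting the last two lines:
        #   values:
        #     - fullnameOverride: {{ .Values.fullnameOverride }}
        release_values["fullnameOverride"] = env_values["fullnameOverride"]
    return release_values

print(render_release("dev", bridge=False))  # {}  (nothing is inherited)
print(render_release("dev", bridge=True))   # {'fullnameOverride': 'my-chart-dev'}
```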
@jedineeper If you want to have different “helm” values.yaml per each helmfile named environment, I’d do this:
environments:
dev: {}
live: {}
---
releases:
- name: my-chart
chart: ../.
namespace: chart-test
version: 0.1.0
values:
- environments/{{ .Environment.Name }}/values.yaml
@vicken Great explanation!
We can even merge the above examples:
---
environments:
dev:
values:
- fullnameOverride: my-chart-dev
live:
values:
- fullnameOverride: my-chart-live
---
releases:
- name: my-chart
chart: ../.
namespace: chart-test
version: 0.1.0
values:
- environments/{{ .Environment.Name }}/values.yaml
- fullnameOverride: {{ .Values.fullnameOverride }}
A common addition to this: when you’d like to make the environment-specific helm values.yaml optional, add missingFileHandler:
environments:
dev:
values:
- fullnameOverride: my-chart-dev
live:
values:
- fullnameOverride: my-chart-live
---
releases:
- name: my-chart
chart: ../.
namespace: chart-test
version: 0.1.0
missingFileHandler: Warn
values:
- environments/{{ .Environment.Name }}/values.yaml
- fullnameOverride: {{ .Values.fullnameOverride }}
Super useful, thanks both. Looks like I was on the wrong track with my understanding and that’s cleared up now :)
@mumoshu this might help me. I’d still like to read your thoughts outlined in my gh issue but I might be able to make progress with this information.
@Jim Conner In scenario 3, are you trying to feed environments defined in bluescape/ops/helmfile-project/helmfile/envs/dev/cluster-n/config.yaml
into every helmfile.yaml defined under bluescape/ops/helmfile-project/helmfile.d
?
yes
@mumoshu
This was where I was getting confused with the concept of inheritance in trying to understand what inheritance was with respect to helmfile
though you corrected me in one of your responses that helmfile
doesn’t inherit. So, let’s not use that term, but I am trying to propagate the values defined in config.yaml
in scenario three across the board of whatever I deploy using helmfile
from any respective environment directory
It’s much better to treat it as if there’s no inheritance, I think. (Where did you find helmfile does inheritance? I think that doc should be corrected if it’s confusing)
Gotcha so
It’s documentation that I read in other github issues…not in your documentation
Thx I see! Firstly I’d repeat https://github.com/roboll/helmfile/issues/1045#issuecomment-821910447 - envs defined in config.yaml don’t get automatically passed to sub-helmfiles under helmfile.d
.
I guess this should work:
helmfiles:
- path: "../../../../helmfile.d/"
values:
- {{ .Values | toYaml | nindent 6 }}
Lemme give that a try…
taking a read real fast.
I admit this isn’t elegant, but not sure if there’s any better way than this now..
ok. Lemme take a look.
Lemme ask you this…
is the method of how I want to use helmfile
odd or something?
It seems logical to me…and helmfile
is a very flexible tool, but am I doing it wrong?
Yours seem totally valid
Ok, that’s good to know!
I guess it’s just that no one has ever (loudly) tried to do it in such an elegant way
Wow. Thank you for that comment. Very nice.
I believe Helmfile should add something like the below to better support your usecase (actually this was my original plan, but no one has ever requested it or sent me any question that leads to this)
so, it seems like it could potentially be complicated to code depending on the library(ies) you might be using to mux objects…
but if you have a solid top-level then inheritance probably isn’t that tough to enable.
taking a look at this now…gimme a few please.
So from your comment in the thread, it’s not very clear to me which helmfile.yaml
that you are suggesting change to.
also, do you see merit in making a feature request out of this use-case? I’d LOVE to see this functionality.
also, in that thread, I need to define the config.yaml
which has the values defined in the top-level directory but it seems your example only defines a scriptsDir
which I suppose is the defined directory where the top-level config.yaml
file would reside; but if I point the scriptsDir
to the same path as where my top-level is then it seems that would cause a circular dependency unless I can specifically say, “use this file” — not just “use this directory”
@mumoshu is there a way to get helmfile template --selector foo=bar
to work?
if I point the scriptsDir to the same path as where my top-level is then it seems that would cause a circular dependency
This wasn’t clear to me, circular dependency between what, do you mean?
@Jim Conner Did you literally mean the way? You need to place --selector foo=bar
before template
.
ah
ok, lemme check that out.
ah perfect! Thank you
it’s not very clear to me which helmfile.yaml that you are suggesting change to.
I meant helmfile/envs/dev/cluster-a/helmfile.yaml
, assuming that’s where you’re trying to say “inherit envs defined in config.yaml to ../../../../helmfile.d/”
ah! Ok. So scriptsDir
points to the top-level directory?
Ah, no. It’s just an arbitrary value for illustration purpose.
It isn’t a reserved value in any way, so you can’t assume scriptsDir points anywhere without specifically setting it
k, so right now, the following seems to be working but I’m still attempting to test (this is my top-level):
environments:
default: {}
---
helmfiles:
- path: "../../../../helmfile.d/"
values:
- config.yaml
I don’t have all of the sub-helmfile.yaml manifests completely correct yet, except for dex but unfortunately, my selector is not working.
so I’m trying to figure that out.
0 release(s) matching tier=secrets-management,app=dex found in helmfile.yaml
have you tried helmfile --debug $SUBCOMMAND
? it would print a lot of logs to help debugging
yup
I’m reading through it all now to see if I can figure out where my problem lies
funny enough, the error I just posted is exactly after the repo updates. So, gotta figure it out.
0 release(s) matching tier=secrets-management,app=dex found in helmfile.yaml
will be printed for any sub-helmfile that contains other releases. But I thought it doesn’t result in an error?
I mean, you’ll see 0 release(s) matching tier=secrets-management,app=dex found in helmfile.yaml
in debug logs on every sub-helmfile that didn’t have releases matching the selector. But Helmfile won’t fail when any sub-helmfile had one or more releases that matched.
just a sec. still reading through the output. I found out that I had --selector
specified twice on accident on the command line so that was one problem.
the problem seems to be that helmfile
is not traversing recursively into certain paths…
could you push your project in github so that i can reproduce?
sure!
gimme a sec.
sorry for taking so long. Wife was in here talking to me for a min. Now, I’m scrubbing the repo as it’s not open source, but there’s nothing of great value in here yet. So, I’ll create in my personal repo temporarily and provide you with the link.
My shell stuffs (notjames/jimconn-shell on GitHub).
I removed secrets files, so if you run this and you get errors about that, that’s why
Thanks!
just a sec
need to do something
the problem seems to be that helmfile is not traversing recursively into certain paths…
what were certain paths
and what helmfile command did you run to see it?
do a pull real fast
and then cd <repo>/temp/helmfile/envs/dev/atreus
done
thx
now in that directory I ran: helmfile --debug --selector app=dex template
err: error during helmfile.yaml.part.0 parsing: template: stringTemplate:5:30: executing "stringTemplate" at <.Environment.Values.namespace>: map has no entry for key "namespace"
changing working directory back to "/home/mumoshu/p/jimconn-shell/temp/helmfile.d/generic/01a-network-and-proxies"
changing working directory back to "/home/mumoshu/p/jimconn-shell/temp/helmfile.d"
changing working directory back to "/home/mumoshu/p/jimconn-shell/temp/helmfile/envs/dev/atreus"
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[0]: in generic/01a-network-and-proxies/helmfile.yaml: in .helmfiles[1]: in external-dns/helmfile.yaml: error during helmfile.yaml.part.0 parsing: template: stringTemplate:5:30: executing "stringTemplate" at <.Environment.Values.namespace>: map has no entry for key "namespace"
when you run that, you’ll notice that helmfile
doesn’t back up the tree after descending into 01a-…
to traverse into 01b-…
ahh…you’re saying that the reason is because the external-dns
“chart” is broken….that makes sense.
I’ll fix real fast. I haven’t polished all of these yet so I’m not concerned about that being broken
seems so! helmfile traverses sub-helmfiles in alphabetical order and fails fast
ah!! OK, that’s good to know
lemme fix that real fast.
does helmfile
descend into dot directories?
nm. I’ll rm -rf
external-dns and ingress-nginx
haven’t tried that, but as long as you give the exact path or a glob pattern that matches the dotted directory/file, it should just work (I don’t think I’ve programmed helmfile to explicitly ignore dot dirs/files)
ok
all right, so the error I’m getting now is:
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[0]: in generic/01a-network-and-proxies/helmfile.yaml: in .helmfiles[1]: in ambassador/helmfile.yaml: [Malformed label: tier="network-and-proxies". Expected label in form k=v or k!=v]
That tier
is defined in generic/01a-network-and-proxies/helmfile.yaml
so that’s interesting
the idea was, I can specify a tier at a higher level in the directory structure…but I wasn’t sure that would work.
maybe I did something wrong in the helmfile, though
the error seems to state as much
ah, I see
seems like you tried to use "
in the selector which isn’t supported?
well, I tried using proper yaml first. That fails.
just tried it again.
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
second-pass rendering result of "helmfile.yaml.part.0":
0: helmfiles:
1: - "*/helmfile.yaml"
2: - path: */helmfile.yaml
3: selectors:
4: - tier=secrets-managment. <<== apparently not OK
5
❯ \cat helmfile.yaml
helmfiles:
- "*/helmfile.yaml"
- path: "*/helmfile.yaml"
selectors:
- tier: network-and-proxies
gives:
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[0]: in generic/01a-network-and-proxies/helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 5: cannot unmarshal !!map into string
needs to be tier=network-and-proxies
is selectors:
requiring a list perhaps?
definitely!
k, lemme fix and try
it must be a list of strings
with no "
?
in where?
as long as it’s a yaml string, it would be okay
- "foo=bar"
works
ok, lemme try
- foo="bar"
is a valid string but invalid in terms of the selector syntax
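A hypothetical mini-parser shows the rule. The regex is my approximation of the `k=v` / `k!=v` syntax, not helmfile's actual grammar; the point is that quotes are not stripped, so `foo="bar"` is valid YAML but malformed as a selector.

```python
import re

# Key and value may contain word chars, dots and hyphens; quotes are NOT
# stripped, so they make the whole entry malformed.
SELECTOR = re.compile(r'^([\w.-]+)(!?=)([\w.-]+)$')

def parse_selector(entry: str):
    m = SELECTOR.match(entry)
    if not m:
        raise ValueError(f'Malformed label: {entry}. '
                         'Expected label in form k=v or k!=v')
    return m.groups()

print(parse_selector("tier=network-and-proxies"))  # ('tier', '=', 'network-and-proxies')
print(parse_selector("foo!=bar"))                  # ('foo', '!=', 'bar')
try:
    parse_selector('tier="network-and-proxies"')   # quotes break the syntax
except ValueError as e:
    print(e)
```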
so
❯ \cat helmfile.yaml
helmfiles:
- "*/helmfile.yaml"
- path: "*/helmfile.yaml"
selectors:
- "tier=network-and-proxies"
and this is temp/helmfile.d/generic/01a-network-and-proxies/helmfile.yaml
seems good
OK cool; that error is gone. But now it still can’t find the selector.
merged environment: &{default map[ambassador:map[namespace:ambassador version:6.6.2] arm64_support:false aws_nlb_with_tls_termination_at_lb:false blscauxclusterissuer:map[namespace:certmanager version:0.1.0] certmanager:map[namespace:certmanager version:1.0.4] cluster_id:<nil> dex:map[namespace:dex version:2.15.2] domain_name:<nil> externaldns:map[namespace:externaldns version:v20210203-v0.7.6-28-g44288212-arm64v8] name:somename nginxingress:map[namespace:ingress-nginx version:3.15.2] oauth2proxy:map[namespace:auth-system version:3.2.5] postgres:map[namespace:grafana version:x.x.x] semver:0.0.1 vaultoperator:map[namespace:vault version:1.8.1] vaultsecretswebhook:map[namespace:secrets-webhook version:1.8.2]] map[]}
0 release(s) matching tier=network-and-proxies found in helmfile.yaml
changing working directory back to "/home/jimconn/projects/src/personal/jimconn-shell/temp/helmfile.d/generic/01a-network-and-proxies"
0 release(s) matching app=dex found in helmfile.yaml
err: no releases found that matches specified selector(app=dex) and environment(default), in any helmfile
hmm….
is helmfile
looking at the labels in releases:
object too or just helmfile objects?
what’s the command now? I’m pretty lost
mumoshu@m75q2a:~/p/jimconn-shell/temp/helmfile.d/generic/01a-network-and-proxies$ $HOME/p/helmfile/helmfile template
Adding repo datawire <https://www.getambassador.io>
"datawire" has been added to your repositories
in ./helmfile.yaml: in .helmfiles[0]: in ambassador/helmfile.yaml: failed to render values files "values.yaml.gotmpl": failed to render [values.yaml.gotmpl], because of template: stringTemplate:63:20: executing "stringTemplate" at <.Values.resources.limits.cpu>: map has no entry for key "resources"
is helmfile looking at the labels in releases: object too or just helmfile objects?
i don’t get it. helmfile “object” doesn’t have labels. only releases have labels to be matched by selectors
lol. I’m sorry. So the command is being run from temp/helmfile/envs/dev/atreus
and the command I’m using is: helmfile --debug --selector app=dex template
well
if you notice that I was able to provide a selector
in the helmfiles
object of a helmfile.yaml
for 01a-…
and that at least linted.
yep. that means you’re letting helmfile run helmfile -l $SELECTOR_HERE template
on the sub-helmfile
yup
that way, helmfile uses the selector to filter releases defined in the sub-helmfile
cool
so I was thinking that I could also use a selector for a specific helm release given a selector
so in other words, let’s say I’m in my cluster-a
environment
that should work
all I want to do is release one single app
so I name all my apps in releases
with a selector (which I know this is a helm selector but helmfile
seems to understand those, right)?
the point is that no helmfile.yaml under helmfile.d/generic/01a-network-and-proxies/*
has releases that have the label app: dex
oh sure! I know. It’s in 01b-security-management/dex
i see! then you need to provide that to helmfile.yaml. currently it doesn’t refer to it
$ cat helmfile.yaml
helmfiles:
- "*/helmfile.yaml"
- path: "*/helmfile.yaml"
selectors:
- tier=network-and-proxies
my dex
helmfile bundle is done so I’m just testing the “e2e” from cluster-a
(or atreus) to see if my top-level config works.
ah!
were you trying to say this?
helmfiles:
- "../*/helmfile.yaml"
- path: "*/helmfile.yaml"
selectors:
- tier=network-and-proxies
ok
well, no
ok. but you need to add e.g. 01b-security-management/dex
to helmfiles
section at least
k. just a sec. Lemme see
so in my 01b-security-management
helmfile.yaml
I have:
helmfiles:
- "*/helmfile.yaml"
- path: */helmfile.yaml
selectors:
- tier=secrets-managment
I was thinking that if I place a selector on the command line, helmfile
will read everything and simply use anything found matching the requested selector
yep you seem to be correct
and in this case, I have a bundle named dex
which only deploys dex and it’s under 01b-security-management/dex
and the helmfile.yaml
in there is:
---
bases:
- ../../common/environments.yaml
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
---
releases:
- name: dex
namespace: {{ .Values.dex.namespace }}
createNamespace: true
labels:
app: dex <<<<<=== THERE
chart: stable/dex
version: {{ .Values.dex.version }}
values:
- values.yaml.gotmpl
so helmfile
should find that right?
yep. but where are you running helmfile from?
the atreus
directory
and the content of helmfile.yaml under atreus
?
oh, lemme post it
environments:
default: {}
---
helmfiles:
- path: "../../../../helmfile.d/"
values:
- config.yaml
so helmfile should read the helmfile.yaml
in helmfile.d
which only matches all *.yaml files in every directory
that directory contains the 01a-…
and 01b-…
directories which have helmfile.yaml
files which define a tier selector for that directory of bundles.
so this is what im getting
mumoshu@m75q2a:~/p/jimconn-shell/temp/helmfile/envs/dev/atreus$ $HOME/p/helmfile/helmfile -l app=dex template
Adding repo stable <https://charts.helm.sh/stable>
"stable" has been added to your repositories
Adding repo jetstack <https://charts.jetstack.io>
"jetstack" has been added to your repositories
Adding repo bitnami <https://charts.bitnami.com/bitnami>
"bitnami" has been added to your repositories
Adding repo prometheus <https://prometheus-community.github.io/helm-charts>
"prometheus" has been added to your repositories
Adding repo banzaicloud-stable <https://kubernetes-charts.banzaicloud.com>
"banzaicloud-stable" has been added to your repositories
Adding repo cloudposse <https://charts.cloudposse.com/incubator/>
"cloudposse" has been added to your repositories
Adding repo datawire <https://www.getambassador.io>
"datawire" has been added to your repositories
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[1]: in generic/01b-secrets-management/helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: line 3: did not find expected alphabetic or numeric character
isn’t this because helmfile fails fast after it failed to parse generic/01b-secrets-management/helmfile.yaml
?
huh. weird.
seems like it. I don’t get that though. Lemme commit/push what I have here and you can run it again and see if you get the same result.
go ahead and pull
i think you’d need to fix each helmfile.yaml to work alone
oh crap
I am getting that
I did that first.
but I might have changed something during all this. Lemme check that real fast.
at a glance this is invalid yaml
$ cat ../../../../helmfile.d/generic/01b-secrets-management/helmfile.yaml
helmfiles:
- "*/helmfile.yaml"
- path: */helmfile.yaml
selectors:
- tier=secrets-managment
should be
helmfiles:
- "*/helmfile.yaml"
- path: "*/helmfile.yaml"
selectors:
- tier=secrets-managment
ok
I’ll add the quotes.
awesome!
that seemed to work
well
ish
great
err: failed to read environments.yaml: environment values file matching "../../common/versions.yaml" does not exist in "."
changing working directory back to "/home/jimconn/projects/src/personal/jimconn-shell/temp/helmfile.d"
changing working directory back to "/home/jimconn/projects/src/personal/jimconn-shell/temp/helmfile/envs/dev/atreus"
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[2]: in generic/common/environments.yaml: failed to read environments.yaml: environment values file matching "../../common/versions.yaml" does not exist in "."
yeah, so that is an interesting error that I’m still fuzzy on in terms of the cause.
this goes back to part of the conversation we had last week with respect to paths…but I still can’t articulate that very well, so I could just be wrong about its implementation
well lets start reading it
assuming we’re running helmfile-template in atreus
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[2]: in generic/common/environments.yaml: failed to read environments.yaml: environment values file matching "../../common/versions.yaml" does not exist in "."
means that
yes
atreus/helmfile.yaml
loaded ../../../../helmfile.d/helmfile.yaml
, which in turn loaded generic/common/environments.yaml
.
while helmfile’s trying to process the last file, environemnts.yaml
, it failed to find ../../common/versions.yaml
let’s see what’s in environments.yaml
---
environments:
default:
values:
- ../../common/versions.yaml
- ../../common/values.yaml.gotmpl
production:
values:
- ../../common/versions.yaml
- ../../common/values.yaml.gotmpl
$ cat ../../../../helmfile.d/generic/common/environments.yaml
---
environments:
default:
values:
- ../../common/versions.yaml
- ../../common/values.yaml.gotmpl
production:
values:
- ../../common/versions.yaml
- ../../common/values.yaml.gotmpl
yep
yes
and this was where I was super confused
because it should have been values.yaml.gotmpl
and versions.yaml
but that didn’t work
this was why I asked you the question the other day…if you will recall.
the paths
document says that helmfile will use paths relative to the yaml
file requesting the path/file
as you’re including it into some helmfile yaml under generic
, those paths should be relative to generic
but my finding is that this isn’t consistent
that applies only to helmfile.yaml
which confused the poo outta me
ahhh
ok
that makes more sense as far as what the documentation means
so bases are base configuration or some form of skeleton of your helmfile.yaml
so what about other manifests? How do paths work with respect to others?
if bases are evaluated in the directory of the base helmfile yaml, it would prevent you from reusing it in any useful way
so every path is relative to the helmfile.yaml being processed
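The path rule stated above can be sketched in a few lines. This reflects my reading of the helmfile paths doc as discussed here, and is worth verifying: relative refs inside a base like environments.yaml resolve against the directory of the helmfile.yaml currently being processed, not the base's own directory.

```python
import posixpath

def resolve(processing_helmfile: str, relative_ref: str) -> str:
    """Resolve a ref relative to the helmfile.yaml being processed."""
    base_dir = posixpath.dirname(processing_helmfile)
    return posixpath.normpath(posixpath.join(base_dir, relative_ref))

# environments.yaml lives in generic/common/ and references
# "../../common/versions.yaml", but when it's used as a base of
# helmfile.d/helmfile.yaml the ref resolves from helmfile.d/:
print(resolve("helmfile.d/helmfile.yaml", "../../common/versions.yaml"))
# -> ../common/versions.yaml  (hence "does not exist")

# From a deeper helmfile.yaml, the same ref resolves somewhere else:
print(resolve("helmfile.d/generic/01a-network-and-proxies/ambassador/helmfile.yaml",
              "../../common/environments.yaml"))
# -> helmfile.d/generic/common/environments.yaml
```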
ah. hmm
bases modifies the helmfile.yaml
so that might be problematic for how I’m trying to do this.
probably. so you want to use base
to codify some convention of where each helmfile.yaml
should locate environment values files, right?
it would work nicely if I could reference these files relative to the manifest that references them.
yes
and so…it might be a little easier than that
and you wanted all the environment values files to be shared across all the helmfile.yaml files, right?
basically yes
essentially, the way I’m doing this, everything is using the same files whether it’s running as individual releases, directory releases, or environment releases.
lastly, do you want values.yaml.gotmpl
and versions.yaml
to be under the same directory as environments.yaml
(base)?
yes as those are basically global settings for everything but I want to be able to redefine stuff set in those (specifically the stuff in values.yaml.gotmpl) with the config.yaml
in cluster-n
environments.
and you seem to be using the base environments.yaml
from helmfile.yaml files in various levels
generic/01a-network-and-proxies/ambassador
is trying to load generic/common/environments.yaml
yes, because that’s a common
file — is how I’m using it
ok
well it isn’t how base
is supposed to be used but i can see how you’d like to use it
some people want to use bases
to be evaluated in the context of the helmfile.yaml
that loaded it, so helmfile is currently designed around that
yeah…which I think I wanted to do it this way because it makes sense to me in terms of a structure. Set up a default set and then allow the ability for that default set to change based on overriding configurations
ok
can you just embed all the values directly into environments.yaml
then?
what’s an example of what that looks like?
something like
templates:
values: &values
semver: 0.0.1
name: somename
# Versions here reflect the chart version, NOT the app version.
ambassador:
version: 6.6.2
namespace: ambassador
certmanager:
version: 1.0.4
namespace: certmanager
blscauxclusterissuer:
version: 0.1.0
namespace: certmanager
externaldns:
version: v20210203-v0.7.6-28-g44288212-arm64v8
namespace: externaldns
nginxingress:
version: 3.15.2
namespace: ingress-nginx
postgres:
version: x.x.x
namespace: grafana
oauth2proxy:
version: 3.2.5
namespace: auth-system
dex:
version: 2.15.2
namespace: dex
vaultoperator:
version: 1.8.1
namespace: vault
vaultsecretswebhook:
version: 1.8.2
namespace: secrets-webhook
environments:
default:
values:
- *values
production:
values:
- *values
ahhh. I see… this might be doable.
well, I used the wrong term in the example but you get what I meant..
afk, I’ll post a complete example later
yeah, use the yaml reference capability in conjunction with a template
ok!
so i think this should work
templates:
versions: &versions
semver: 0.0.1
name: somename
# Versions here reflect the chart version, NOT the app version.
ambassador:
version: 6.6.2
namespace: ambassador
certmanager:
version: 1.0.4
namespace: certmanager
blscauxclusterissuer:
version: 0.1.0
namespace: certmanager
externaldns:
version: v20210203-v0.7.6-28-g44288212-arm64v8
namespace: externaldns
nginxingress:
version: 3.15.2
namespace: ingress-nginx
postgres:
version: x.x.x
namespace: grafana
oauth2proxy:
version: 3.2.5
namespace: auth-system
dex:
version: 2.15.2
namespace: dex
vaultoperator:
version: 1.8.1
namespace: vault
vaultsecretswebhook:
version: 1.8.2
namespace: secrets-webhook
values: *values
aws_nlb_with_tls_termination_at_lb: {{ .Values | get "aws_nlb_with_tls_termination_at_lb" false }}
arm64_support: {{ .Values | get "arm64_support" false }}
domain_name: {{ .Values | get "domain_name" (env "DOMAIN_NAME") }}
cluster_id: {{ .Values | get "cluster_id" (env "CLUSTER_ID") }}
---
environments:
default:
values:
- *versions
- *values
production:
values:
- *versions
- *values
ugh. I just tried validating the configuration of the template test run I did a while ago (I had to re-establish the values.yaml.gotmpl contents, which I removed from my private repo) and unfortunately, I’m observing that the template run shows that the settings in the config.yaml
values in atreus
are not getting asserted during the template run meaning it seems that the helmfile.yaml
isn’t asserting the config from the top-level. :(
lemme try your suggestion real fast though.
I’m observing that the template run shows that the settings in the config.yaml values in atreus are not getting asserted during the template run meaning it seems that the helmfile.yaml isn’t asserting the config from the top-level.
what do you mean by “asserting” here?
are you saying that values defined in config.yaml
in the below atreus/helmfile.yaml
is not accessible from within sub-helmfiles ../../../.../helmfile.d
?
environments:
default: {}
---
helmfiles:
- path: "../../../../helmfile.d/"
values:
- config.yaml
i have not yet fully understood your whole config, but from what i can guess from our conversation so far
yeah, seems so.
you may have missed passing values somewhere in the middle
oh! I need to pass values?
is there a generic way to do that without having to specify every variable?
Yes. As I have said elsewhere, you must always explicitly pass values to sub-helmfiles
ahhhh….
hence, no inheritance which you mentioned
yeah
so what works today is https://sweetops.slack.com/archives/CE5NGCB9Q/p1618896775125900?thread_ts=1618580632.114500&cid=CE5NGCB9Q
Thx I see! Firstly I’d repeat https://github.com/roboll/helmfile/issues/1045#issuecomment-821910447 - envs defined in config.yaml don’t get automatically passed to sub-helmfiles under helmfile.d
.
I guess this should work:
helmfiles:
- path: "../../../../helmfile.d/"
values:
- {{ .Values | toYaml | nindent 6 }}
lemme give that a quick try, too
and what we’ll likely add to helmfile in the (near) future https://sweetops.slack.com/archives/CE5NGCB9Q/p1618896996128700?thread_ts=1618580632.114500&cid=CE5NGCB9Q
helmfiles:
- path: "../../../../helmfile.d/"
inheritValues: true
got it!
one sec
beware the number you pass to nindent
function. it’s dependent on the context.
- {{ .Values | toYaml | nindent 2 }}
or
- {{ .Values | toYaml | nindent 4 }}
make sense, but
- {{ .Values | toYaml | indent 2 }}
or
- {{ .Values | toYaml | indent 4 }}
doesn’t.
this usage of toYaml with nindent is a common trick seen in writing helm charts, but I thought it worth explaining
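For reference, here is a minimal Python reimplementation of the two functions (Sprig's real indent/nindent are Go templates; this is only to illustrate the difference): nindent prepends a newline before indenting, which is what lets `- {{ ... | nindent N }}` render as a valid block under the dash, provided N matches the surrounding YAML context.

```python
def indent(n: int, text: str) -> str:
    """Prefix every line of text with n spaces."""
    pad = " " * n
    return "\n".join(pad + line for line in text.splitlines())

def nindent(n: int, text: str) -> str:
    # Same as indent, but with a leading newline so the first rendered
    # line doesn't share a line with the template expression.
    return "\n" + indent(n, text)

to_yaml = "foo: bar\nbaz: 1"

# A gotmpl line like `      - {{ .Values | toYaml | nindent 8 }}`
# (dash at column 6) renders the mapping as one well-formed list item:
print("      -" + nindent(8, to_yaml))

# With too small a count, the continuation lines land to the LEFT of
# the dash and the YAML no longer parses:
print("      -" + nindent(4, to_yaml))
```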
in my case, the indent is the same for all of the files and 6 is accurate I believe.
first-pass produced: &{default map[values:[map[aws_nlb_with_tls_termination_at_lb:false] map[arm64_support:false] map[domain_name:atreus.dev.domain.io] map[cluster_id:atreus]]] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[values:[map[aws_nlb_with_tls_termination_at_lb:false] map[arm64_support:false] map[domain_name:atreus.dev.domain.io] map[cluster_id:atreus]]] map[]}
second-pass rendering result of "helmfile.yaml.part.0":
0: ---
1: helmfiles:
2: - "generic/*"
3: - path: "generic/*"
4: values:
5: -
6: values:
7: - aws_nlb_with_tls_termination_at_lb: false
8: - arm64_support: false
9: - domain_name: atreus.dev.domain.io
10: - cluster_id: atreus
11:
12:
err: failed to read helmfile.yaml: reading document at index 1: yaml: line 6: did not find expected node content
changing working directory back to "/project/helmfile-project/helmfile/envs/dev/atreus"
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: line 6: did not find expected node content
So the failure seems to be coming from the line I just added (helmfile.d/helmfile.yaml):
---
helmfiles:
- "generic/*"
- path: "generic/*"
values:
- {{ .Values | toYaml | nindent 6 }}
it’s not dereferencing .Values
oh wait
it is
I need to fix the config.yaml
just a sec
i’m now counting the size of the indentation there
oh ok
well shouldn’t it be 8?
---
helmfiles:
- "generic/*"
- path: "generic/*"
values:
- {{ .Values | toYaml | nindent 8 }}
with 8 it should be rendered to
helmfiles:
- "generic/*"
- path: "generic/*"
values:
- foo: bar
baz: 1
It looks like it should be 4
I’ll make it 8 and see what happens
why 4? you pasted the helmfile.yaml with wrong indentation?
just going strictly by the output which was:
4: values:
5: -
6: values:
but I was looking at the wrong values:
those numbers from 4 to 6 are line numbers
yeah, I wasn’t looking at those
ah ok
I was looking at the indentation of the values
on line 6 instead of line 4
cool
I think we got past that
nice
now we’re back on the environments.yaml
the template
err: failed to read ../../common/environments.yaml: reading document at index 1: yaml: line 4: mapping values are not allowed in this context
looks like a syntax thing likely
ah, line 4 does have wrong indentation
templates:
  versions: &versions
    semver: 0.0.1
      name: somename
    # Versions here reflect the chart version, NOT the app version.
    ambassador:
      version: 6.6.2
      namespace: ambassador
i wrote this but this should be
I see it
it’s 2+ and it needs to be 2-
templates:
  versions: &versions
    semver: 0.0.1
    name: somename
    # Versions here reflect the chart version, NOT the app version.
    ambassador:
      version: 6.6.2
      namespace: ambassador
OK, so getting closer. If you are OK with helping me fix this one thing then I’ll stop bugging you for the rest of the day…lol (I need to go to bed) and then I’ll try and fix up the rest on my own and if I get stuck, I’ll let you know and hopefully you can help another time?
yeah sure! but expect I won't be as responsive tomorrow as I was today. today was my day off.
doh! maybe I’ll just stay up…lol
well, let’s see
There’s something I need to do to fix the other helmfile.yaml files for each chart which rely on environments.yaml
in common
err: failed to read ../../common/environments.yaml: reading document at index 1: yaml: unmarshal errors:
line 4: field semver not found in type state.TemplateSpec
line 7: field ambassador not found in type state.TemplateSpec
line 10: field certmanager not found in type state.TemplateSpec
line 13: field blscauxclusterissuer not found in type state.TemplateSpec
line 16: field externaldns not found in type state.TemplateSpec
line 19: field nginxingress not found in type state.TemplateSpec
line 22: field postgres not found in type state.TemplateSpec
line 25: field oauth2proxy not found in type state.TemplateSpec
line 28: field dex not found in type state.TemplateSpec
line 31: field vaultoperator not found in type state.TemplateSpec
line 34: field vaultsecretswebhook not found in type state.TemplateSpec
line 38: field aws_nlb_with_tls_termination_at_lb not found in type state.TemplateSpec
line 39: field arm64_support not found in type state.TemplateSpec
line 40: field domain_name not found in type state.TemplateSpec
line 41: field cluster_id not found in type state.TemplateSpec
changing working directory back to "/project/helmfile-project/helmfile.d/generic/01a-network-and-proxies"
changing working directory back to "/project/helmfile-project/helmfile.d"
changing working directory back to "/project/helmfile-project/helmfile/envs/dev/atreus"
in ./helmfile.yaml: in .helmfiles[0]: in ../../../../helmfile.d/helmfile.yaml: in .helmfiles[0]: in generic/01a-network-and-proxies/helmfile.yaml: in .helmfiles[0]: in ambassador/helmfile.yaml: failed to read ../../common/environments.yaml: reading document at index 1: yaml: unmarshal errors:
line 4: field semver not found in type state.TemplateSpec
line 7: field ambassador not found in type state.TemplateSpec
line 10: field certmanager not found in type state.TemplateSpec
line 13: field blscauxclusterissuer not found in type state.TemplateSpec
line 16: field externaldns not found in type state.TemplateSpec
line 19: field nginxingress not found in type state.TemplateSpec
line 22: field postgres not found in type state.TemplateSpec
line 25: field oauth2proxy not found in type state.TemplateSpec
line 28: field dex not found in type state.TemplateSpec
line 31: field vaultoperator not found in type state.TemplateSpec
line 34: field vaultsecretswebhook not found in type state.TemplateSpec
line 38: field aws_nlb_with_tls_termination_at_lb not found in type state.TemplateSpec
line 39: field arm64_support not found in type state.TemplateSpec
line 40: field domain_name not found in type state.TemplateSpec
line 41: field cluster_id not found in type state.TemplateSpec
this looks like an indentation issue and I think I probably need to fix the reference to match what we did with the other helmfile.yaml files.
it would be helpful if you could push the latest snapshot of your whole setup before asking questions; that would reduce the back-and-forth
seems to
sure! do you need that now or just in case I need more help tomorrow?
btw, sincerely appreciate your assistance. Very kind!
just update your git repo immediately before you add another question so that i can try to replicate your issue and think concretely
state.TemplateSpec
is the underlying go struct that maps to templates
in helmfile.yaml and in your case environments.yaml
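In other words, the decode is strict: unknown keys under templates are rejected. A toy Python sketch of that behavior (purely illustrative, not helmfile's actual code — the allowed-key set here is hypothetical and much smaller than the real schema):

```python
# Toy illustration of a strict Go-style unmarshal into a struct like
# state.TemplateSpec: any key not in the schema is rejected outright.
# ALLOWED_TEMPLATE_KEYS is a made-up subset for demonstration.
ALLOWED_TEMPLATE_KEYS = {"values", "secrets", "missingFileHandler", "chart", "labels"}

def strict_unmarshal(section: dict) -> None:
    unknown = sorted(set(section) - ALLOWED_TEMPLATE_KEYS)
    if unknown:
        raise ValueError(f"fields {unknown} not found in type state.TemplateSpec")

strict_unmarshal({"values": [], "missingFileHandler": "Debug"})  # accepted

try:
    strict_unmarshal({"semver": "0.0.1", "ambassador": {}})      # rejected
except ValueError as err:
    print(err)
```

This is why arbitrary keys like semver or ambassador under templates produce the "field ... not found" errors above.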
cool! will do. Due to the nature of the company policies, I just need to sync between the public repo and my private repo here and then commit/push which will take a few mins.
ah! ok, that’s good context.
ah ok turns out we were only able to define some fields under templates.foo
well then we can’t leverage templates
that way..
looks like I’ll need to add helmdefaults to the template
oh, we can’t?
helmDefaults contains default values for releases[]
so it might be useful elsewhere, but i think it doesn’t work for this
ok
well trying to come up with a workaround..
k
so I guess I’m not understanding what’s wrong with the template method we’re working on right now.
the helmfile.yaml
files I updated a while ago were just the ones in the middle
I didn’t update any of the chart helmfile.yaml files yet
how about this…
environments:
  template:
    values:
    - versions: &versions
        semver: 0.0.1
        name: somename
        # Versions here reflect the chart version, NOT the app version.
        ambassador:
          version: 6.6.2
          namespace: ambassador
        certmanager:
          version: 1.0.4
          namespace: certmanager
        blscauxclusterissuer:
          version: 0.1.0
          namespace: certmanager
        externaldns:
          version: v20210203-v0.7.6-28-g44288212-arm64v8
          namespace: externaldns
        nginxingress:
          version: 3.15.2
          namespace: ingress-nginx
        postgres:
          version: x.x.x
          namespace: grafana
        oauth2proxy:
          version: 3.2.5
          namespace: auth-system
        dex:
          version: 2.15.2
          namespace: dex
        vaultoperator:
          version: 1.8.1
          namespace: vault
        vaultsecretswebhook:
          version: 1.8.2
          namespace: secrets-webhook
      values: &values
        aws_nlb_with_tls_termination_at_lb: {{ .Values | get "aws_nlb_with_tls_termination_at_lb" false }}
        arm64_support: {{ .Values | get "arm64_support" false }}
        domain_name: {{ .Values | get "domain_name" (env "DOMAIN_NAME") }}
        cluster_id: {{ .Values | get "cluster_id" (env "CLUSTER_ID") }}
---
environments:
  default:
    values:
    - *versions
    - *values
  production:
    values:
    - *versions
    - *values
lemme give it a shot
ah, probably this doesn't work because YAML anchors don't persist across ---
what if you removed the ---
environments:
  template:
    values:
    - versions: &versions
        semver: 0.0.1
        name: somename
        # Versions here reflect the chart version, NOT the app version.
        ambassador:
          version: 6.6.2
          namespace: ambassador
        certmanager:
          version: 1.0.4
          namespace: certmanager
        blscauxclusterissuer:
          version: 0.1.0
          namespace: certmanager
        externaldns:
          version: v20210203-v0.7.6-28-g44288212-arm64v8
          namespace: externaldns
        nginxingress:
          version: 3.15.2
          namespace: ingress-nginx
        postgres:
          version: x.x.x
          namespace: grafana
        oauth2proxy:
          version: 3.2.5
          namespace: auth-system
        dex:
          version: 2.15.2
          namespace: dex
        vaultoperator:
          version: 1.8.1
          namespace: vault
        vaultsecretswebhook:
          version: 1.8.2
          namespace: secrets-webhook
      values: &values
        aws_nlb_with_tls_termination_at_lb: {{ .Values | get "aws_nlb_with_tls_termination_at_lb" false }}
        arm64_support: {{ .Values | get "arm64_support" false }}
        domain_name: {{ .Values | get "domain_name" (env "DOMAIN_NAME") }}
        cluster_id: {{ .Values | get "cluster_id" (env "CLUSTER_ID") }}
environments:
  default:
    values:
    - *versions
    - *values
  production:
    values:
    - *versions
    - *values
removing ---
isn't usually a good idea, as this guides you into the deep sea of the helmfile "double-rendering" hack
yeah, it seems that one needs to understand when and when not to use ---
ok, so quick question…
I’m trying to set up the ambassador/helmfile.yaml
to use the passed .Values
and I’m not 100% sure how to properly do that. The old helmfile.yaml
for ambassador was:
---
bases:
- ../../common/environments.yaml
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
---
releases:
- name: ambassador
  namespace: {{ .Values.ambassador.namespace }}
  createNamespace: true
  labels:
    app: ambassador
  chart: datawire/ambassador
  version: {{ .Values.ambassador.version }}
  values:
  - values.yaml.gotmpl
the one I just tried, which failed was:
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
---
values:
- {{ .Values | toYaml | nindent 8 }}
releases:
- name: ambassador
  namespace: {{ .Values.ambassador.namespace }}
  createNamespace: true
  labels:
    app: ambassador
  chart: datawire/ambassador
  version: {{ .Values.ambassador.version }}
  values:
  - values.yaml.gotmpl
wrong indentation there
the last resort- try this if removing ---
didn’t work
environments:
  {{ .Environment.Name }}:
    values:
    - versions:
        semver: 0.0.1
        name: somename
        # Versions here reflect the chart version, NOT the app version.
        ambassador:
          version: 6.6.2
          namespace: ambassador
        certmanager:
          version: 1.0.4
          namespace: certmanager
        blscauxclusterissuer:
          version: 0.1.0
          namespace: certmanager
        externaldns:
          version: v20210203-v0.7.6-28-g44288212-arm64v8
          namespace: externaldns
        nginxingress:
          version: 3.15.2
          namespace: ingress-nginx
        postgres:
          version: x.x.x
          namespace: grafana
        oauth2proxy:
          version: 3.2.5
          namespace: auth-system
        dex:
          version: 2.15.2
          namespace: dex
        vaultoperator:
          version: 1.8.1
          namespace: vault
        vaultsecretswebhook:
          version: 1.8.2
          namespace: secrets-webhook
      values:
        aws_nlb_with_tls_termination_at_lb: {{ .Values | get "aws_nlb_with_tls_termination_at_lb" false }}
        arm64_support: {{ .Values | get "arm64_support" false }}
        domain_name: {{ .Values | get "domain_name" (env "DOMAIN_NAME") }}
        cluster_id: {{ .Values | get "cluster_id" (env "CLUSTER_ID") }}
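For reference, the get calls above can be approximated in Python. This is a sketch of the documented behavior (nested lookup with a fallback default), not helmfile's implementation; os.environ.get stands in for the env template function:

```python
import os

def get(values, key, default=None):
    """Rough analogue of helmfile's `get`: walk a (possibly dotted) key
    through nested dicts, returning the default when any part is missing."""
    cur = values
    for part in key.split("."):
        if isinstance(cur, dict) and part in cur:
            cur = cur[part]
        else:
            return default
    return cur

values = {"arm64_support": True}

print(get(values, "arm64_support", False))                       # True
print(get(values, "aws_nlb_with_tls_termination_at_lb", False))  # False
# Mirrors: domain_name: {{ .Values | get "domain_name" (env "DOMAIN_NAME") }}
print(get(values, "domain_name", os.environ.get("DOMAIN_NAME", "")))
```

The point of the pattern is that environment-specific values win when present, and env vars (or literals) act as the fallback.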
you may need to rename environments.yaml
to environments.yaml.gotmpl
I forgot the full details, but try renaming the file if it fails to render the go template at all
oh! yeah, that might be required
values:
- {{ .Values | toYaml | nindent 8 }}
seems like invalid indentation
should be 4
if you removed ` - ../../common/environments.yaml ` from the ambassador helmfile.yaml, it should end up like that
because you've tried to define ambassador.namespace in the versions, which was to be loaded via the environments.yaml you just removed
but that should be passed in using the intermediate .Values
now, though shouldn’t it?
The intermediate .Values doesn't contain the values defined in versions either, since you've omitted environments.yaml from bases
hmm.
ok
values:
- {{ .Values | toYaml | nindent 8 }}
is a no-op.
you are basically assigning Values to Values
ah, yes. I was originally going to try just:
---
{{ .Values | toYaml }}
in helmfile.yaml?
yeah
(that wasn’t the whole thing…just the values portion)
that would break helmfile.yaml, as it can't contain arbitrary key values like that
oh
I could put it under bases:
?
you can only have arbitrary key values under values
of some kind. like releases[].values
or environments[].values
it’s not a free-form yaml
aahhh. OK, that’s good to know.
so let me do this then: … (one min)
nope. bases
has its own schema
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
#config.yaml
---
environments:
  default:
    values:
    - {{ .Values | toYaml | nindent 8 }}
---
releases:
- name: ambassador
  namespace: {{ .Values.ambassador.namespace }}
  createNamespace: true
  labels:
    app: ambassador
  chart: datawire/ambassador
  version: {{ .Values.ambassador.version }}
  values:
  - values.yaml.gotmpl
?
seems ok, except that nindent should be 6
or I just need to push the indentation out 2 to match everything else.
dang flexibleness of yaml
maybe..?
i thought you usually align yaml dict key values at the level indicated by the end of the selection shown in the below picture
which is 6
yaml allows either or
it can be aligned or pushed out
wow really!
I tend to push it out because I’m old school
yep
the new school way seems to be aligned, though
but make two files using both pragmas and yamllint them
it will work on both
oh well but i still don’t get it
lets say you had tis values
foo: bar
bar: baz
environments:
  default:
    values:
      - {{ .Values | toYaml | nindent 8 }}
renders to
environments:
  default:
    values:
      - foo: bar
        bar: baz
oh, maybe I should qualify what I’m talking about…I’m talking only for lists.
does this really work?
ah gotcha
then you should still say nindent 6
there
environments:
  default:
    values:
    -
{{ .Values | toYaml | indent 6 }}
if it’s indent
rather than nindent
this should work
or even this
environments:
  default:
    values:
      -
{{ .Values | toYaml | indent 8 }}
what’s the difference in go templating between nindent
and indent
btw? I haven’t looked up the docs on that yet.
fixed wrong indentations in indent
examples
yep that’s very important
that’s why nindent of 8 ends up with https://sweetops.slack.com/archives/CE5NGCB9Q/p1618909727197500?thread_ts=1618580632.114500&cid=CE5NGCB9Q here
environments:
  default:
    values:
      - {{ .Values | toYaml | nindent 8 }}
renders to
environments:
  default:
    values:
      - foo: bar
        bar: baz
this is a very common trick in writing not only helmfile templates but also helm templates, so I'd highly recommend getting used to it
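The trick above can be sketched with a minimal Python approximation of Sprig's indent/nindent (illustrative only, not helmfile's actual code): indent prefixes every line with N spaces, while nindent does the same but starts with a newline, which is what lets `- {{ ... | nindent 8 }}` render into a well-formed YAML list item.

```python
def indent(spaces: int, s: str) -> str:
    # Prefix every line of s with the given number of spaces.
    pad = " " * spaces
    return "\n".join(pad + line for line in s.splitlines())

def nindent(spaces: int, s: str) -> str:
    # Same as indent, but start with a newline so the indented block
    # drops onto its own lines below the template expression.
    return "\n" + indent(spaces, s)

values_yaml = "foo: bar\nbar: baz"

# The '- ' sits at column 6, so the nested mapping keys land at column 8.
rendered = "      - " + nindent(8, values_yaml)
print(rendered)
```

Because the dash is at column 6 and the mapping keys at column 8, the result parses as a single list item containing the whole mapping, which is exactly the rendered output shown above.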
the things so far
note that I renamed environments.yaml to environments.yaml.gotmpl
so that helmfile renders it as a go-templatized file
but I’m not sure that was enough…I feel like I need to tell helmfile to actually use the gotmpl file.
yes
but where?
you need to add that back to bases
oh
i thought you just removed it from bases
earlier?
which helmfile.yaml
file ? All of them?
I thought you said it needed to be removed.
I might have misunderstood you though
well no
because we were trying to solve the directory path issue…
the directory path issue would have been resolved when we embedded versions.yaml and values.yaml.gotmpl into environments.yaml.gotmpl
neither environments.yaml
nor environments.yaml.gotmpl
is going to be included in bases
or loaded before rendering helmfile.yaml
, except you explicitly add it to bases
you need to be explicit about what values to load, what bases to use, what values to be passed to sub-helmfile
ok. so that last thing is pretty good info. That resonates with me…and I’ll need to think about my project in those terms now methinks.
that’s actually something that should probably be added to the documentation like…very. clearly.
let me make I am understanding what you’re saying, too…
definitely. the point is that I know too much about helmfile already and can't imagine which parts of the docs are missing for people at different levels
for a project, one must be explicit (for each helmfile.yaml) about what values to load, what bases to load and then when layering, what values to pass “down the chain”
so if you could contribute doc addition/fix based on your experience, that would be awesome
absolutely!
yep
and put it in terms of this project
so let me see if I understand how we’ve done things so far in terms of that concept and this project
I envisioned a project where I could deploy dex via helmfile from within its own directory and all it needs to know is where to find the bases, environments values (if any) and any additional values (for this project in this context, that would be passed in via env vars)…
brb…2 mins
i’ll be afk for dinner but keep posting
oh, and I think it worth a new slack thread now
lol. OK
it's getting really long and takes a bit of time to fully load, and I'm also afraid every update to this thread notifies everyone who commented on it, even people who only chimed in at the very beginning
@jedineeper is the concept I’m trying to accomplish making sense?
just want to make sure I’m not daft
cool, I created a new thread that we can move this to if you want.
This started with me misunderstanding the use of environments so some of it is beyond the scale of my understanding so I’m not a good judge :)
2021-04-17
2021-04-18
2021-04-19
2021-04-20
@mumoshu we can continue the thread here
here goes
@mumoshu I’ve pushed all my changes; and stuff is broke but it’s like almost 430 AM here and I need to get to bed.
if you don’t mind though, give it a looksee. I’ll see if I can make heads or tails of it tomorrow
will try to find some time!
I’m not thinking straight though, anymore. So, again, really appreciate all your help.
thank you very much! I’ll chat you up later today/tomorrow your time.
g’night
Hello! I’m new to helmfile and need some guidance on a use case. I’m adding a nested helmfile inside a monorepo-like helmfile repository. The parent repo so far doesn’t use environments since all services are global, but my child helmfile benefits greatly from them, because it defines production and staging namespaces and deployments. My helmfile looks basically like this:
environments:
  staging:
    values:
    - staging.yaml
  production:
    values:
    - production.yaml

releases:
- name: app
  chart: foo/app
  namespace: app-{{ .Environment.Name }}
  values:
  - values.yaml.gotmpl
This works great as long as I specify -e staging
or -e production
during helmfile apply
. But since the parent repo has no concept of environments, it breaks the deploy process.
I guess my question is, what’s the best way to “inline” environments into the releases
list? i.e. I would like something like this:
releases:
- name: app-staging
  chart: foo/app
  namespace: app-staging
  values:
  - staging.yaml
  - values.yaml.gotmpl
- name: app-production
  chart: foo/app
  namespace: app-production
  values:
  - production.yaml
  - values.yaml.gotmpl
However, it’s attempting to “merge” the values files, rather than use the first to fill in the template of the second. Is there a better way to do this, rather than just duplicating the mostly-identical values files?
hey @Alex Genco have you tried using dependencies
? you can define a release with its dependencies so that the child chart will inherit values with <child_chart> prefix - see https://github.com/roboll/helmfile/issues/1762 for reference
I guess my question is, what’s the best way to “inline” environments
I'm not sure if I understand fully, but it seems like you've already made it concise and DRY enough.
If I were you I wouldn't try to make it any DRYer; I think doing so would hurt readability
if you really want, you can still do this with standard go template techniques
like
{{ define "t" }}
- name: app-{{ .env }}
  chart: foo/app
  namespace: app-{{ .env }}
  values:
  - {{ .env }}.yaml
  - values.yaml.gotmpl
{{ end }}

releases:
{{ template "t" (dict "env" "staging") }}
{{ template "t" (dict "env" "production") }}
i believe you’d better use a more serious configuration language like CUE if you need to dynamically generate helmfile.yamls at this level. FYI, CUE is https://cuelang.org/
but i thought i’ve seen many people happy with https://sweetops.slack.com/archives/CE5NGCB9Q/p1618995133219700?thread_ts=1618934290.213300&cid=CE5NGCB9Q so i can imagine you’ll like it, too
But since the parent repo has no concept of environments, it breaks the deploy process.
Well, I guess the reality is that the parent repo does have the concept of environments, but they are hidden. If the parent has no environment, children shouldn't have environments
If the parent envs are just hidden and you don’t want to write
environments:
  staging: {}
  production: {}
in parent, you may prefer
environments:
  {{ .Environment.Name }}: {}
---
# releases, helmfiles, etc
Hi, I'm doing helmfile sync to change from an internet-facing ALB to an internal ALB. I tried destroy, sync and diff, but even though there are changes the ALB does not get created. Has anyone had an issue like this before?
@jose.amengual Hey! It sounds like issues in the helm chart you're using. Helmfile is just calling helm upgrade --install or helm delete or whatever according to the definitions in your helmfile.yaml, so that kind of issue is very likely to be caused by the chart, not helmfile
interesting ok, I will have a look
I used a different namespace and everything works
if I destroy, delete the other lingering resources, and reuse the same namespace that did not work before, it still does not create the ALB
so there must be something still there
now I wonder how I can find it
from the docs on NLBs…
Do not modify the service annotation service.beta.kubernetes.io/aws-load-balancer-type on an existing service object. If you need to modify the underlying AWS LoadBalancer type, for example from classic to NLB, delete the kubernetes service first and create again with the correct annotation. Failure to do so will result in leaked AWS load balancer resources.
There are two issues i faced while trying to use single load balancer feature new ingress resource is failing to add ALB with new ingress group name (works only when the resource group name is chan…
that @Andriy Knysh (Cloud Posse) found is real…
if you modify resources created by the controller in the console, you will have orphan resources
in my case the ingress was still there
so I run kubectl patch ingress ingress-name -n namespace-name -p '{"metadata":{"finalizers":[]}}' --type=merge
and then the ingress was gone and I was able to run helmfile sync
and helmfile destroy
and everything worked as expected
@jose.amengual it seems like it would be a better use-case for terraform ¯_(ツ)_/¯ — Do you have some helmchart you’re using to deploy your ALB?
yes we have some we use
@Alex Genco I’d try to help you but I’m in the same boat as you man. Still learning.
2021-04-21
is there any way to ignore helm errors for a specific release and continue?
generally no. but I'm curious why you want to do that?
also, what’s the exact error you’re trying to ignore?
so there is an issue I created a week ago https://github.com/roboll/helmfile/issues/1778
When using helmfile apply with repositories: - name: prometheus-community url: https://prometheus-community.github.io/helm-charts releases: - name: test namespace: test chart: prometheus-community/…
and I am trying to workaround it
we need to override some default rules from kube prometheus stack, my original idea was to use a patch, but since that didn’t work now I am trying another approach
unfortunately it involves a duplicate resource inside the kube prometheus release: I use additionalPrometheusRules with the name of an already existing rule
so helm fails when it sees the duplicate. I was under the impression that everything still works fine after that, but now I see that this actually happens in the middle of the deployment, so some resources are missing after it
so probably ignoring the error won’t be actually helpful in this case
but I have another question now:
if a release fails, further apply attempts do not show any errors but also do not deploy the missing resources - is that expected behaviour?
(that's what led me to believe that everything is fine after the error)
one possible workaround is to create a hook in which you run a script that performs a helm template
on the chart and then you kubectl apply -f -
on the template output patching whatever is necessary on the broken bits on a pipe in between template
and kubectl
— essentially you’d just fix the manifests as they’re being templatized. It’s ugly and a total hack and might be bug prone…but it’s an idea maybe?
thank you for the idea, Jim
@Eugene Korekin I’ve tried to reproduce your issue but had no luck. please see https://github.com/roboll/helmfile/issues/1778#issuecomment-824441138
perhaps your issue has already been fixed by later PRs?
if you still have some issue, sharing the reproduction steps on a brand-new cluster would be helpful
@mumoshu the behaviour is definitely different with the latest version from master
i have managed to let helmfile fail with another error and it does seem like a bug
the one i saw seems happen when you do have strategicmergepatches/jsonpatches + CRDs
right, but there is another issue
whats it?
looks like there is some regression in the master comparing to the stable version
just a moment
so here is a simple helmfile
repositories:
- name: prometheus-community
  url: https://prometheus-community.github.io/helm-charts

releases:
- name: test
  namespace: test
  chart: prometheus-community/kube-prometheus-stack
  version: ~14.4.0
it works without any issue with the stable version. please note that it works without disableValidation
and I ensured that the crds weren’t present before the installation (deploying a new clean cluster right now to be 100% sure)
whoa really? i cant imagine how it can work without disableValidation now…
ok, that’s probably because I used --skip-diff-on-install
ah that makes sense
it’s me who added helm diff --disable-validation
to avoid helm-diff failing when trying to diff custom resources for CRD that isn’t installed yet
so yeah, --skip-diff-on-install
would make disableValidation
unnecessary. got it
btw, would it be a good idea if helmfile built from master printed something different as its version number, not the same string as the stable version?
I sometimes have trouble telling which version I'm using, because --version gives the same result for stable and master
ah, sounds good. i thought the version number is set via TAG = $(shell git describe --tags --abbrev=0 HEAD) in the Makefile
I was innocently believing that it would return a tag only on a tagged commit, but apparently not
Ah okay i see
git-describe
The command finds the most recent tag that is reachable from a commit
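A quick throwaway-repo demo of that behavior (assuming git is installed; repo contents are made up): even though HEAD itself is untagged, describe walks back to the nearest reachable tag.

```shell
# Create a scratch repo with one tagged commit followed by one untagged commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=main init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
git tag v0.1.0
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "second"
# Prints v0.1.0 even though HEAD is not tagged: the tag is reachable from HEAD.
git describe --tags --abbrev=0
```

So a helmfile binary built from an untagged master commit still reports the last release tag as its version.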
looks like there is some regression in the master comparing to the stable version
ok, please disregard this
I just checked and they both work in the same way
great. thanks for testing!
so, do you need any other information from my side regarding this issue with jsonpatches and crds?
i believe you’ve provided me enough! thx. i’m now wondering how i could fix that
seems helmfile needs to extract CRDs from the patched YAML and put them under the crds dir in the temp chart…. isn't there an easier way than this?
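The extraction idea can be sketched naively in Python (string matching only; real code would parse the YAML properly, and the manifest content below is made up for illustration):

```python
# Split a rendered multi-document manifest on document separators and
# collect the CRDs, which could then be written under the temp chart's
# crds/ dir. Naive 'kind:' matching, purely for illustration.
manifests = """\
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: prometheuses.monitoring.coreos.com
---
apiVersion: v1
kind: Service
metadata:
  name: test
"""

docs = [d.strip() for d in manifests.split("\n---\n")]
crds = [d for d in docs if "kind: CustomResourceDefinition" in d]
others = [d for d in docs if d not in crds]

print(len(crds), len(others))  # 1 1
```

In helm 3, anything under a chart's crds/ dir is installed before the templated resources, which is why separating them out would sidestep the patching problem.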
thank you very much
if it won't be possible to fix, I think documenting that json and strategic merge patches don't work with CRDs would be very useful
i cant help wishing there was helm template --crds-only
yep true
Matt Farina is a friend of mine. I can suggest that to him.
i would generally prefer putting crds in a separate chart and then you’re fine
I used to work with him.
cool!
@Eugene Korekin another potential solution would be to enhance helmfile to accept skipCRDs: true in helmfile.yaml.
Then you could have a separate release for CRDs only and another for kube-prometheus-stack with skipCRDs: true, where in the skipCRDs'ed kube-prometheus-stack release you can freely use patching
that separate release for CRDs, should it be created manually in that case?
or is there some automatic way to just install CRDs (from kube-prometheus-stack for example)
you can definitely create them manually, but can I ask you why?
nope. so i mean you need to copy CRDs from the kube-prometheus-stack into your own local chart
ok, I see yes, that’s what I meant by doing that manually
ah ok
well just a sec
as opposed to providing some option to helmfile so only CRDs will be installed from a chart
can't we use our go-getter integration to grab and install CRDs only, assuming kube-prometheus-stack is hosted on GitHub?
that would be great
but couldn’t CRDs also use some templating like other helm chart resources?
maybe no..? i have never heard of CRDs being dynamically generated in practice
I mean, it might be easy to get and install them or manually copy them from a chart if they are just static, but what if not?
maybe. it all depends on what we want? if you’d like to avoid manual steps using go-getter integration to install CRDs only in a separate chart can be a good workaround
otherwise, just creating a local chart containing only CRDs manually, or even installing the CRDs before running helmfile, would make sense
sure, it will work if they are always static and having this integration would be helpful because they could change with a new version of a chart
right
I was just thinking about an edge case when they are dynamic, in that case the go-getter integration will break, right?
but I have little understanding about how they are usually used in charts, so probably I am just missing something, nevermind
I was just thinking about an edge case when they are dynamic, in that case the go-getter integration will break, right?
absolutely.
just an additional thought: if the root of this issue is in chartify behaviour, maybe the proper way to fix it would be on the chartify level?
I don’t know what chartify is though and how it relates to helm
but if it generates helm charts then this:
the temporary chart generated by chartify is not a correct helm3 chart
looks like a bug in it, right?
yes. i consider it a bug in chartify and i'm trying to fix it there. the implementation is going to be tedious; that's why I wished there was helm template --only-crds earlier in this thread
ah, got it
this seems to be working https://github.com/variantdev/chartify/commit/55b23f9e9d43ae1105a536a659fa35f004806b2a
i will try integrate this into helmfile tomorrow or so
This version fixes chartify so it doesn't fail when you use the combination of (1) helm 3 and (2) strategicMergePatches/jsonPatches/transformers etc. that trigger chartify on (3) a chart that contains …
2021-04-22
Victor Farcic spread the word about Helmfile on his channel: https://www.youtube.com/watch?v=qIJt8Iq8Zb0
2021-04-23
guys, is there any replacement for the deprecated incubator/raw chart?
A few people have forked it and published their version on artifacthub, but none of them have gained community adoption. Personally I still feel fine using the deprecated chart from helm/charts/incubator since it is stable and doesn’t need many updates. It set out to accomplish this one thing people need, attained it, and now just sits, not really needing much maintenance
That deprecation warning in latest version of chart breaks yaml..
Of course, I can use older one, but that breaks other things for me
Seems that bitnami has one..
Even cloudposse, monochart..
Hi guys, do you have a sample repository which installs the kubernetes cluster autoscaler that works properly with TF 15.0? I was using cookie labs, which broke after the upgrade. Thanks, Leia https://www.linkedin.com/in/leia-renee/
2021-04-24
@mumoshu I hope that helps more (the example I just gave in the issue. I’m heading to bed now…
Note that the example manifests from upstream (vault) show no single quotes around the property. I added those for testing. With or without, it seems to be valid yaml. You can apply it successfully with kubectl
but helmfile
doesn’t process it right. Thanks for your help.
I can’t even imagine how i could use that yaml in combination with vault-operator! You’d need to provide a step-by-step guide and your original goal.
You can apply it successfully with kubectl but helmfile doesn’t process it right. Thanks for your help.
Why are you comparing kubectl and helmfile? Shouldn't you compare helm and helmfile…?
Is the vaultconfig you've shown in https://github.com/roboll/helmfile/issues/1798#issuecomment-826049375 considered a valid k8s manifest yaml that can be kubectl-applied?
From vault chart documentation to show official documentation about the format of the policy: https://github.com/banzaicloud/bank-vaults/blob/master/charts/vault/values.yaml (note that the link abo…
But it doesn’t look so. I’m super confused
Im not even sure why you’re correlating
56: externalConfig:
57:   policies:
58:   - name: allow_secrets
59:     rules: 'path "secret/*" { capabilities = ["create", "read", "update", "delete", "list"] }'. <<== fails
60:   auth:
61:   - type: kubernetes
...
with
err: failed to read helmfile.yaml: reading document at index 1: yaml: line 68: did not find expected ',' or ']'
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: line 68: did not find expected ',' or ']'
the line number doesn’t match
vaultconfig: |-
kind: Vault
name: vault
namespace: vault
What’s the extra indentation before name
and namespace
here…?
Shouldn’t that be
vaultconfig: |-
kind: Vault
name: vault
namespace: vault
I’m not sure how this vaultconfig
is supposed to be used. It doesn’t look like valid values accepted by the vault-operator chart, nor a valid K8s manifest that can be applied.
Are you using it to dynamically render helmfile.yaml?
But then it should be a yaml dict, not a yaml string.
if it’s supposed to be a yaml dict, you should’nt have ` | - ` in: |
vaultconfig: |-
kind: Vault
name: vault
namespace: vault
spec:
it should be
vaultconfig:
kind: Vault
name: vault
namespace: vault
spec:
You say your example is enough for reproduction. This is so tough…. I tried my best but have no idea what you’re trying to achieve, what you tried, why/how this is supposed to work, etc.
Strange indentation here
externalConfig:
policies:
- name: allow_secrets
rules: 'path "secret/*" { capabilities = ["create", "read", "update", "delete", "list"] }'
auth:
- type: kubernetes
roles:
- name: default
bound_service_account_names:
auth:
- type: kubernetes
roles:
should be
auth:
- type: kubernetes
roles:
same here
auth:
- type: kubernetes
roles:
- name: default
bound_service_account_names:
secrets:
- path: secret
secrets: should be indented one more
From vault chart documentation to show official documentation about the format of the policy: https://github.com/banzaicloud/bank-vaults/blob/master/charts/vault/values.yaml (note that the link abo…
seriously, you shouldn’t embed a complete YAML in a YAML string. That can obfuscate all kinds of yaml errors, I believe.
OK. Got a lot here I have to address.
- I somehow managed to miss metadata in my example in the issue. Fixed.
- The CR I had in my repository was wrong as well. It was missing the apiVersion. Fixed.
The way this installation works is that the vault-operator is installed via helm and then this manifest, which is the vault configuration (kind: Vault), is installed. The vault operator picks up the requested config and asserts it for vault.
an example of how this works can be seen in this comment
Is your feature request related to a problem? Please describe. current use-case involves attempting to install kube-prometheus-stack/alertmanager, which contains three directives we want to store/r…
I’ve never heard of a problem embedding yaml in yaml given placing the yaml in an appropriate property using an appropriate yaml modifier. It is common practice in every place I’ve worked so I’ve never heard of anyone say not to do it and I’ve been doing this a loooong time. So, meh, not sure. Agree to disagree on that? =]
There are a couple of important points to make about the indentation issues you pointed out:
- it was from copy and paste so something between the copy, paste, and gh markdown might have borked the actual indentation.
- The pasted yaml is coming from sops decrypted data and sops re-arranges yaml from the original input to its own from the time a document is first encrypted. For instance, notice the
- key:
entries? I never write yaml where there is more than one space after the dash. SOPS does that.
I’ll go through and fix yaml indentations issues I find. I was in a hurry to get to bed. It was late and I was keeping my wife up be staying up.
one final thing. The yaml config I gave you was not a values.yaml file. I guess I should have just called it a CR manifest; I probably worded that in a weird way and confusion ensued. I provided the values.yaml example in the beginning only to show that the property in question was sanely written there, since upstream was the only prominent example I had at the time. We don’t actually use that values manifest. Moreover, given the sensitivity of what’s in this repo, I can’t just give you a copy. I have to scrub everything. So, the best example I can give you now on how to reproduce this is that GH comment #1270 I gave you, which is another issue I’m working on with upstream to figure out a problem we’re seeing with vault secrets webhook.
I just tried using your fixed manifest from GH and it still fails for me so there might be a different issue. Lemme show you my helmfile.yaml
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
- ../../common/versions.yaml
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
---
environments:
default:
secrets:
- secrets.yaml
---
releases:
- name: vault-operator
namespace: {{ .Values.vaultoperator.namespace }}
createNamespace: true
labels:
app: vault-operator
chart: {{ .Values.vaultoperator.repo }}
version: {{ .Values.vaultoperator.version }}
hooks:
- events: ["postinstall"]
command: "bash"
args: ["-c","echo \"{{ .Values.vaultconfig }}\"", "kubectl -n {{ .Values.vaultoperator.namespace }} -f -"]
showlogs: true
values:
- values.yaml
The CR I’m trying to load is the secrets.yaml
Now, I’m trying to test the postinstall hook, but yaml isn’t even getting that far yet.
When I run helmfile template
I get the following:
and that’s using the fixed manifest you provided with the fixed missing entries I found.
ah geez. I just thought of something. I don’t need to load this into a value at all. Sigh. I’ll just use the postinstall hook, call sops to decrypt, and pipe it into kubectl directly. It would have been nice to just use the internal secrets processing, but this way I can get around this bug.
args: ["-c","echo \"{{ .Values.vaultconfig }}\"", "kubectl -n {{ .Values.vaultoperator.namespace }} -f -"]
This looks bad. You have `"` in vaultconfig, so the second arg in args gets delimited there, which seems unexpected to you.
yeah, I just refactored all of that. I wasn’t sure it would work or not
you seem to be trying to read stdin here `"kubectl -n {{ .Values.vaultoperator.namespace }} -f -"` but you’re missing a `|` between echo and kubectl
yeah…I don’t expect a pipe would work in the args
property anyway, which I was using bash-fu to get around, which didn’t work…plus I neglected to even use apply
to kubectl
, but like I said, I was trying to get to a point to test that.
but I’ve switched gears and not even going to use helmfile/helm/secrets et al
since I’m applying a cr, which I should have thought about this before. I’m just gonna use sops in a hook and decrypt into kubectl. I’m writing a small wrapper shell function to handle all the things
fyi a common way to install arbitrary k8s resources using helmfile is to use the incubator/raw chart
releases:
- name: resources
  namespace: vault
  createNamespace: true
  chart: incubator/raw
  values:
  - resources:
    - {{ .Values.vaultconfig | nindent 4 }}
btw, I’m not sure how well you know bash stuffs…but the method of input I was going for was:
$ < file command
which is a stdin redirect into the command
…it’s gooder
to let it not fail on first install you’d definitely want to have disableValidation
too
releases:
- name: resources
  namespace: vault
  createNamespace: true
  chart: incubator/raw
  values:
  - resources:
    - {{ .Values.vaultconfig | nindent 4 }}
  disableValidation: true
I’ve never heard of the raw chart. Very interesting
to make it installed only after youve installed vault-operator, add needs
releases:
- name: resources
  namespace: vault
  createNamespace: true
  chart: incubator/raw
  values:
  - resources:
    - {{ .Values.vaultconfig | nindent 4 }}
  needs:
  - {{ .Values.vaultoperator.namespace }}/vault-operator
I’ll give that a try but first I’d need helmfile to successfully read the cr
yep. but that should be easy as long as you don’t break yaml somehow.
so but that’s the problem I was trying to surface. I wasn’t breaking any yaml though things didn’t get into the issue very nicely…so that caused problems for you. I was in a rush so I fixed all that this morning.
so did you try to address my comment and you still get some error?
args: ["-c","echo \"{{ .Values.vaultconfig }}\"", "kubectl -n {{ .Values.vaultoperator.namespace }} -f -"]
This looks bad. You have `"` in vaultconfig, so the second arg in args gets delimited there, which seems unexpected to you.
well, I already refactored all of it, but lemme run a test real fast with your suggestion.
gimme like 5 mins.
I’ve never heard of a problem embedding yaml in yaml given placing the yaml in an appropriate property using an appropriate yaml modifier. It is common practice in every place I’ve worked so I’ve never heard of anyone say not to do it and I’ve been doing this a loooong time. So, meh, not sure. Agree to disagree on that? =]
well sorry but what i wanted to say is that you should really avoid using go template and yaml in such a way if it makes debugging harder for you.
it did make debugging harder for you, right?
cant’ you just create a your own local helm chart dedicated to installing Vault
resources, so that you don’t even need templating the whole helmfile.yaml?
so, there are lotsa ways to skin this cat in my estimation…I’m trying to employ the easiest method I find that works.
it looked like the easiest, but apparently not, right? :)
apparently
ok
I’ve never heard of a problem embedding yaml in yaml given placing the yaml in an appropriate property using an appropriate yaml modifier
maybe you were thinking about embedding yaml in yaml without go templating here, right?
so this is the test I just ran
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
- ../../common/versions.yaml
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
---
environments:
default:
secrets:
- secrets.yaml
---
releases:
- name: vault-operator
namespace: {{ .Values.vaultoperator.namespace }}
createNamespace: true
labels:
app: vault-operator
chart: {{ .Values.vaultoperator.repo }}
version: {{ .Values.vaultoperator.version }}
hooks:
- events: ["postinstall"]
command: "echo "
#args: ["-c","cat<<E {{ .Values.vaultconfig }}\nE", "|", "kubectl -n {{ .Values.vaultoperator.namespace }} apply -f -"]
args: "{{ .Values.vaultconfig }}"
showlogs: true
values:
- values.yaml
is there anything wrong with that so far?
whats in values.yaml now?
yes, you need to remove
#args: ["-c","cat<<E {{ .Values.vaultconfig }}\nE", "|", "kubectl -n {{ .Values.vaultoperator.namespace }} apply -f -"]
entirely
values.yaml or secrets.yaml (secrets.yaml is what’s getting read)
ok, I’ll remove it
#args: ["-c","cat<<E {{ .Values.vaultconfig }}\nE", "|", "kubectl -n {{ .Values.vaultoperator.namespace }} apply -f -"]
is clearly breaking your yaml, because the `#` only applies to the first line of your loooong yaml string vaultconfig
oh yeah…actually that was another thing I noticed about helmfile…comments are not always handled very nicely…but I might just misunderstand something there
you’re somehow treating go template expression can be commeted out with yaml’s #
but that’s not the case
you need to be extra sure that you’re writing a go template to generate a valid yaml yourself.
`#` is a yaml comment marker, so it won’t disable go template rendering in that line
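To illustrate the point being made here (a sketch; the `foo` value is made up): the template expression is expanded even behind a yaml comment, and a multi-line expansion escapes the comment.

```yaml
# before rendering — looks "commented out" to a yaml reader:
#args: ["{{ .Values.foo }}"]

# after helmfile's template pass, assuming foo holds a multi-line string,
# only the first line of the expansion stays behind the '#'; the rest
# spills out as bare (and likely invalid) yaml:
#args: ["line one of foo
line two of foo
line three of foo"]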
hmm. that’s actually new to me. Interesting. I’ll have to look that up.
OK…
here’s the test I just ran…
in an encrypted secrets.yaml:
and here’s the helmfile.yaml
again:
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
- ../../common/versions.yaml
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
---
environments:
default:
secrets:
- secrets.yaml
---
releases:
- name: vault-operator
namespace: {{ .Values.vaultoperator.namespace }}
createNamespace: true
labels:
app: vault-operator
chart: {{ .Values.vaultoperator.repo }}
version: {{ .Values.vaultoperator.version }}
hooks:
- events: ["postinstall"]
command: "echo "
args: "{{ .Values.vaultconfig }}"
showlogs: true
values:
- values.yaml
when I run this, I get the following error:
$ helmfile --debug template
...
...
...
err: failed to read helmfile.yaml: reading document at index 1: yaml: line 11: did not find expected key
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: line 11: did not find expected key
what’s in the line 11 of your rendered helmfile.yaml shown in the debug log?
could you provide me the whole log?
so what I’m seeing is happening is that the echo command is reading the yaml file and then it seems to be trying to assert it to release in the yaml:
this is confusing:
man…!
hooks:
- events: ["postinstall"]
command: "echo "
args: "{{ .Values.vaultconfig }}"
this seems wrong
yeah, lemme get you the whole log…
shouldn’t it be
hooks:
- events: ["postinstall"]
command: "echo "
args:
- |
  {{ .Values.vaultconfig | nindent 8 }}
uhhh
see this!!!
11: args: "apiVersion: "someapi/endpoint"
12: manifest:
Should it be? That goes against every convention of using a shell command with args I’ve ever seen…so this would be great to have documented.
I’m just trying to get the example simply echo out the variable
yes. but you’re breaking yaml there
sorry. this is starting to get frustrating. I think there’s a fundamental lack of understanding of how this hook is supposed to work.
It seems like a hook with a bash command and “args” is not actually supposed to cause yaml interpretation
I’m not trying to assert yaml in this example
hmmm… well yes, you seem to have a fundamental misunderstanding
I just want to see the yaml output…and I’m doing that with a bash echo
why is helmfile trying to assert that?
"{{ .Values.vaultconfig }}"
doesn’t automatically escape the yaml embedded in valutconfig
does that clarify it a bit?
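Concretely, this is roughly what the yaml parser sees after the template pass (an editorial sketch reconstructed from the error output above):

```yaml
# vaultconfig begins with: apiVersion: "someapi/endpoint" ...
# so args: "{{ .Values.vaultconfig }}" renders to:
args: "apiVersion: "someapi/endpoint"   # the embedded quote closes the string early
manifest:                               # and everything after it is parsed as bare yaml
  kind: Vault
```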
not really. unfortunately.
maybe I can explain how this looks to the enduser
hm. then try
args: {{ .Values.vaultconfig | toYaml }}
hooks:
- events: ["postinstall"]
command: "echo "
args: "{{ .Values.vaultconfig }}"
showlogs: true
this tells me the following
there’s a hook event to be executed during postinstall
well not really
use the echo
command and use the args {{ .Values.vaultconfig }}
and echo that out…
do nothing else
hooks:
- events: ["postinstall"]
command: "echo "
args: "{{ .Values.vaultconfig }}"
showlogs: true
tells helmfile to render this snippet as a go template to generate yaml, and only after that is helmfile able to read the hooks definition
Helmfile triggers various events while it is running. Once events are triggered, associated hooks are executed, by running the command with args. The standard output of the command will be displayed if showlogs is set and its value is true.
the documentation says nothing about rendering
yeah true. but that only happens after all the go template expressions are rendered
it specifically says executed
yeah true. this go template thing applies to helmfile, not only to hooks
that’s why the doc doesn’t bother repeating all over the places….
if you don’t use go template at all, you can treat it helmfie.yaml as a plain yaml file so there’s no issue at all
and i suppose that might be the fundamental misunderstanding you had
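To restate the two-phase model being described in one picture (an editorial sketch; the example value is made up):

```yaml
# phase 1: the whole helmfile.yaml is rendered as a go template.
#   input:   args: "{{ .Values.vaultconfig }}"
#   output:  args: "kind: Vault
#            name: vault"        # the multi-line value is pasted in verbatim
#
# phase 2: the rendered text is parsed as yaml — only now do keys like
# hooks/command/args get their helmfile meaning. the broken quoting
# produced in phase 1 is what the yaml parser then complains about.
```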
so…
the use of command: to me indicates some kind of terminal-based command can be executed, and args: indicates arguments which will get passed to command
if helmfile reads yaml before rendering go template, even this doesn’t work
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
- ../../common/versions.yaml
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
because
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
is clearly invalid as yaml
yes
the use of command: to me indicates some kind of terminal based command can be executed
and args: indicates arguments which will get passed to command
correct
so all your understanding of hooks, command, and args seems correct
but you’re breaking yaml before they are defined at all.. which is what helmfile’s complaining about.
well, ok, but lemme finish my thought real fast
so, it seems that helmfile should only be looking for an exit code from the command…whatever else the command performs shouldn’t be interpreted by helmfile, I would think.
should reading {{ .Values.vaultconfig }}
just be the entire yaml output from the file? In its own right, it’s valid yaml
but helmfile seems to be concatenating it to everything else
yep, because that’s how go template works…
maybe
well internally helmfile does differentiate each arg in args.
so
but helmfile seems to be concatenating it to everything else
if you’re referring to how go template is working here, you’re correct
so
i can imagine a valid feature request here would be
somehow enhancing args
so that you can source a file content into one arg
or a helmfile value content into one arg
like
hooks:
- events: ["postinstall"]
command: "echo "
args:
- fromHelmfileValue: vaultconfig
showlogs: true
so you’re failing here…
11: args: " <<<< ==== everything in the template here should be passed to bash and ignored by helmfile
12: apiVersion: "someapi/endpoint"
The yaml in the file, which is read into .Values.vaultconfig, is by itself valid yaml.
yeah but you aren’t embedding it correctly
I’m not trying to embed it!!! lol. That’s the thing. I don’t understand why helmfile is trying to embed that to args
in releases
— that’s not making sense to me.
why not use
args:
- |
  {{ .Values.vaultconfig | nindent SOME_NUMBER }}
as i’ve suggested
so perhaps you can help me understand that one part.
I’m trying to find that magic number…just a sec
I don’t understand why helmfile is trying to embed that to args in releases
Becase you’re telling helmfile to do so here…
args: “{{ .Values.vaultconfig }}”
this is with nindent 10
but I’m not!! I’m telling helmfile to pass the transliterated template to echo
or at least that’s what I’m trying to do
I don’t want helmfile to attach anything in args
to the releases
object.
are you really using:
args:
- |
{{ .Values.vaultconfig | nindent SOME_NUMBER }}
in the above run?
I just want it to pass the “stuff” in args
to echo
nope.
then it won’t work
1 ---
2 bases:
3 - ../../common/repos.yaml
4 - ../../common/helmdefaults.yaml
5 - ../../common/versions.yaml
6 {{- if not .Values }}
7 - ../../common/values.yaml.gotmpl
8 {{- end }}
9
10 ---
11 environments:
12 default:
13 | secrets:
14 | | - secrets.yaml
15
16 ---
17 releases:
18 - name: vault-operator
19 | namespace: {{ .Values.vaultoperator.namespace }}
20 | createNamespace: true
21 | labels:
22 | | app: vault-operator
23 | chart: {{ .Values.vaultoperator.repo }}
24 | version: {{ .Values.vaultoperator.version }}
25 | hooks:
26 | | - events: ["postinstall"]
27 | | | command: "echo "
28 | | | args: "{{ .Values.vaultconfig | nindent 10 }}"
29 | | | showlogs: true
30 | values:
31 | | - values.yaml
ok, so maybe let me see if I understand how helmfile works in this regard…
hooks:
- events: ["postinstall"]
command: "echo "
args:
- |
{{ .Values.vaultconfig | nindent SOME_NUMBER }}
try this
hooks:
- events: ["postinstall"]
command: "echo "
args:
- {{ .Values.vaultconfig | nindent 12 }}
showlogs: true
yeah it should end up like that..
why did you use
args:
-
instead of what i suggested above:
args:
- |
?
tried both.
same error
9: - events: ["postinstall"]
10: command: "echo "
11: args:
12: -
13: apiVersion: "someapi/endpoint"
14: manifest:
15: kind: Vault
16: name: vault
err: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 14: cannot unmarshal !!map into string
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 14: cannot unmarshal !!map into string
although, indentation is off in second one. Lemme fix that
11: args:
12: -
13: apiVersion: "someapi/endpoint"
14: manifest:
15: kind: Vault
err: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 14: cannot unmarshal !!map into string
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 14: cannot unmarshal !!map into string
25 | hooks:
26 | | - events: ["postinstall"]
27 | | | command: "echo "
28 | | | args:
29 | | | - {{ .Values.vaultconfig | nindent 10 }}
30 | | | showlogs: true
31 | values:
32 | | - values.yaml
where”s the - |
now? mind sharing me the whole helmfile.yaml after you’ve wrote - |
?
---
bases:
- ../../common/repos.yaml
- ../../common/helmdefaults.yaml
- ../../common/versions.yaml
{{- if not .Values }}
- ../../common/values.yaml.gotmpl
{{- end }}
---
environments:
default:
secrets:
- secrets.yaml
---
releases:
- name: vault-operator
namespace: {{ .Values.vaultoperator.namespace }}
createNamespace: true
labels:
app: vault-operator
chart: {{ .Values.vaultoperator.repo }}
version: {{ .Values.vaultoperator.version }}
hooks:
- events: ["postinstall"]
command: "echo "
args:
- {{ .Values.vaultconfig | nindent 10 }}
showlogs: true
values:
- values.yaml
where’s the - |?
I used this for your recommendation: https://sweetops.slack.com/archives/CE5NGCB9Q/p1619300555282000?thread_ts=1619249294.249600&cid=CE5NGCB9Q
hooks:
- events: ["postinstall"]
command: "echo "
args:
- {{ .Values.vaultconfig | nindent SOME_NUMBER }}
try this
did I miss a |-
?
should that be after args:
?
yes
ah no
well, do this
args:
- |
{{ .Values.vaultconfig | nindent SOME_NUMBER }}
k, sec
ok. so no errors that time…
checking through all the output.
btw this might also work
args: ["{{`{{.Values.vaultconfig}}`}}"]
probably you’ve already read https://github.com/roboll/helmfile/#hooks but totally missed the existence of “go template comment” there?
yeah, I was considering trying that next actually
I haven’t read that actually
really don’t forget making it a go template comment if you try the above
{{`
`}}
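For reference, an editorial note on why the backtick trick works: a backquoted string inside `{{ }}` is a go template raw string literal that is emitted as-is, so the inner expression survives the first render pass and can be rendered later (sketch):

```yaml
# helmfile.yaml, before rendering:
args: ["{{ `{{ .Values.vaultconfig }}` }}"]

# after helmfile's first template pass, the literal braces remain,
# so the arg is the plain text {{ .Values.vaultconfig }}:
args: ["{{ .Values.vaultconfig }}"]
```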
ok
I found an example helmfile.yaml that used that in a hook so I was going to try it
regardless of whether you use a go template comment or not, to embed yaml into args at the helmfile.yaml level or before helmfile runs the hook command, all i can say is
https://sweetops.slack.com/archives/CE5NGCB9Q/p1619299403271500?thread_ts=1619249294.249600&cid=CE5NGCB9Q
hooks:
- events: ["postinstall"]
command: "echo "
args: "{{ .Values.vaultconfig }}"
showlogs: true
tells you that you render this snippet of go template to generate a yaml
helmfile first renders the whole helmfile.yaml content as a go template, then reads the result as yaml
you can use go template comment like https://sweetops.slack.com/archives/CE5NGCB9Q/p1619301420288500?thread_ts=1619249294.249600&cid=CE5NGCB9Q so that you can “defer” the go template rendering.
BUT it doesn’t mean that helmfile stopped rendering the go template.
really don’t forget making it a go template comment if you try the above
{{`
`}}
OK, that’s good info there
even if you had no go template expression in helmfile.yaml, helmfile tries to render it as a go template before parsing as yaml.
a valid yaml is usually also a valid go template. so, as long as you don’t use go template in helmfile.yaml at all, helmfile.yaml behaves like vanilla yaml
So, I’m running this with helmfile sync --args --dry-run
— will the hook work during a helm dry-run?
I’m not getting errors now but I’m also not seeing the echo of the stuff in the file.
haven’t tried. helmfile --args
is not well supported
meh, ok. Lemme just sync and see what happens
nope. dang it.
not even any logs
wait a sec
helmfile --args
is a historical artifact that only worked in the very beginning of the helmfile project. at that time helmfile sync
only ran a single helm command.
nowadays a helmfile run involves many helm commands, so --args doesn’t make sense. Which custom args should be passed to helm template, helm diff, helm repo up, helm dep build, etc? Impossible to deduce
oh boy. That’s not very good because sometimes I want to see what helm debug looks like and what it would template out
that’s a valid point though
seems like it would be a good idea to allow the user to pass --dry-run to upgrade --install though.
i agree. sounds like a good chance for you to write a feature request
lol. can do
so I’m confused here in what is happening in my testing here…
no echo command from postinstall
ah geez
oh well, you said postinstall?
yeah…don’t say it. I figured it out
i remember someone has requested it before… found it. https://github.com/roboll/helmfile/issues/1291#issuecomment-638153828
I am just wondering exactly when presync and postsync hooks are run, specifically: Are they run if a release is being uninstalled? Are they run if a release is already uninstalled and installed: fa…
yeah, a postinstall
for this use-case would be prime
ok, giving this a shot now…
27 | | | command: "echo"
28 | | | args: ["{{ `{{ .Values.vaultconfig }}` }}","|","kubectl -n {{ `{{ .Values.vaultoperator.namespace }}` }} apply -f -"]
created an issue for that at #1805. please feel free to add your voice w/ expected use-case there https://github.com/roboll/helmfile/issues/1805
Just wanted to have a dedicated issue for this. Would anyone use it if Helmfile added a new postinstall hook? It has been originally proposed by @cdunford in #1291. Althought I thought it was great…
Does adding space between {{
and the backtick like that work?
yup. that’s how we do gotemplate comments for everything (it’s our convention)
I thought a go template comment had to start with {{`
ok then
just gotta fixup a couple things here and then this might work….
hm, i think i’ve found another potential issue in your config
oh, cool. I’m listening
maybe yo’ve already noticed, but you should use bash
or alike instead of directly calling echo
yeah, just fixed that
nice
command: "bash"
args: ["-c", "echo", "{{ `{{ .Values.vaultconfig }}` }}","|","kubectl -n {{ `{{ .Values.vaultoperator.namespace }}` }} apply -f -"]
but I think the | is illegal unless helmfile knows how to handle that. In my experience, most things don’t properly interpret the pipe in this context
yes, i can see your point
output from sync
Listing releases matching ^vault-operator$
vault-operator vault 10 2021-04-24 15:21:08.377398548 -0700 PDT deployed vault-operator-1.8.1 1.8.0
helmfile.yaml: basePath=.
hook[postsync] logs |
hook[postsync] logs |
UPDATED RELEASES:
NAME CHART VERSION
vault-operator banzaicloud-stable/vault-operator 1.8.1
but
at 15:21:36 ❯ kg vault vault
Error from server (NotFound): vaults.vault.banzaicloud.com "vault" not found
so no errors, but no resource either
it’s quick and dirty, but I can echo that out to a tmpfile and have kubectl read the tmp file, then shred it.
that’s kind of what I was working on before actually
actually…lemme try one more thing
sounds good
i have also managed to do it with a bash here doc
values:
- somePlainYaml: |
apiVersion: v1
kind: ConfigMap
metadata:
name: frompostsync
---
releases:
- name: myapp
chart: incubator/raw
values:
- resources:
- metadata:
name: myapp
kind: ConfigMap
apiVersion: v1
hooks:
- events: ["postsync"]
command: "bash"
args:
- -c
- |
cat <<EOS | kubectl apply -f -
{{ .Values.somePlainYaml | nindent 6 }}
EOS
that’s what I was kinda working on…
sheez…forgot to remove the pipe
nice
it works
- events: ["postsync"]
command: "bash"
args:
- "-c"
- |-
< <(echo -e "{{ `{{ .Values.vaultconfig | nindent 6 }}` }}") \
kubectl --validate=false -n {{ `{{ .Values.vaultoperator.namespace }}` }} apply -f -
showlogs: true
awesome!
man
i’d still recommend the use of bash here doc there
can you articulate why?
so that you can avoid a potential issue of the echo part breaking due to some character in vaultconfig breaking the bash string
that’s pretty good reason
it won’t break always. but i would still think it as a good practice to use bash here doc there
that way all you need to care about is that you don’t have any bash here doc delimiter in vaultconfig
there might be other ways. it’s just that a bash here doc is the best way i can think of
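A minimal illustration of the failure mode being avoided (the `config` value here is made up, not from the thread): inside `echo "..."`, embedded double quotes in the value can terminate the argument early; a here-doc hands the value through verbatim.

```shell
#!/bin/sh
# A value containing double quotes, like the vault policy rules above:
config='rules: path "secret/*" { capabilities = ["read", "list"] }'

# The here-doc passes the value to the consumer verbatim; nothing inside
# $config can end the here-doc early (only a line reading exactly EOS can):
cat <<EOS
$config
EOS
```

In the real hook you would pipe the here-doc into `kubectl apply -f -` instead of plain `cat`.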
in many ways, fd vivification is better but echo can be squirrely
I wouldn’t use echo
in fact, but the newlines are important and it seemed that newlines got swallowed up in a straight grok in my quick testing.
er
something like that. I’m trying to go by memory…I’m also switching contexts a lot here so trying to keep everything straight is iffy
hey curious…what time is it there for you?
I think you’re in Japan right? Or is that a poor assumption?
2021-04-25
I’ve curated a list of important feature requests and planned changes https://github.com/roboll/helmfile/issues/1806 Please feel free to review and add your voices if you find anything interesting to you
This is going to be pinned to the header of our GitHub issues list so that everyone can be aware of and redirected to important planned features for discussion :) Allow opting in for inheriting all…
2021-04-27
hi. is this a bug?
if I run helmfile lint
against helmfile.yaml that contains
helmfiles:
- environments/test1/helmfile.yaml
- environments/test2/helmfile3.yaml
it fails with Error: repo not found
it’s referring to a release located in helmfile3.yaml. if I remove the first line (environments/test1/helmfile.yaml) it works fine. btw, I am using helmBinary option to specify helm version in each of those helmfiles
if I run helmfile -f environments/test2/helmfile3.yaml repos
it works. shouldn’t it automatically fetch charts?
@es Does it work when you run helmfile -f environments/test1/helmfile.yaml repos, too?
The repos in that file automatically fetch when I apply so I haven’t tested. I’m on latest helmfile version btw
Are you using any of strategicMergePatches/forceNamespace/jsonPatches/dependencies/transformers in your helmfile.yaml then?
No
I’m not aware of any relevant bug that is fixed in the unreleased version of helmfile, at least
https://pastebin.com/Gt4EU67h - thanks in advance and please let me know if you need more information.
@es Thanks! Would you mind sharing the exact version numbers of /usr/local/bin/helm and /usr/local/bin/helm3?
it’s there, you probably missed it. sec
# helmfile version;helm version;helm3 version
helmfile version v0.138.7
Client: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
Ah thanks. Let me check..
Whoa, this helm2 is super outdated :slightly_smiling_face: I can’t even run helm init --client-only…
touch ~/.helm/repository/repositories.yaml
did the job…
so i was able to reproduce it with only
helmfiles:
- helmfile.jenkins1.yaml
- helmfile.jenkins2.yaml
helmfile.jenkins1.yaml
repositories:
- name: jenkins
url: "https://charts.jenkins.io"
releases:
- name: jenkins11
chart: jenkins/jenkins
version: "2.5.0"
helmfile.jenkins2.yaml
repositories:
- name: jenkins
url: "https://charts.jenkins.io"
helmBinary: helm2151
releases:
- name: jenkins21
chart: jenkins/jenkins
version: "3.3.9"
this might be due to an unnoticed race between helm2 and helm3 fetch
yeah - a similar configuration worked before adding helmBinary
after the failure it doesn’t even show up in helm2 repo list
so it may be a race between helm repo add/up of helm v2 and v3
Almost certainly this is due to a helmfile bug. We usually skip helm repo add calls on an already added repo. But we don’t differentiate calls between helm v2 and v3 there
https://github.com/roboll/helmfile/blob/ae942c5288895c84c79171e5446773e4cb41c4ce/pkg/app/context.go#L17-L25 https://github.com/roboll/helmfile/blob/ae942c5288895c84c79171e5446773e4cb41c4ce/pkg/state/state.go#L383
that results in the helm repo add jenkins ...
for helm 3 causing the subsequent helm repo add jenkins ...
for helm 2 to be skipped
yeah, I see. I just renamed the jenkins repo in helmfile3.yaml to something else and it worked fine
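A minimal sketch of that workaround (the repo name jenkins-v2 is chosen here for illustration, not taken from the original config): give the repository a distinct name in the helm2-based sub-helmfile so the name-keyed repo-add cache does not skip it.

```yaml
# helmfile.jenkins2.yaml (hypothetical sketch of the workaround)
repositories:
  - name: jenkins-v2            # renamed from "jenkins" to avoid the dedup cache
    url: "https://charts.jenkins.io"
helmBinary: helm2151
releases:
  - name: jenkins21
    chart: jenkins-v2/jenkins   # chart reference updated to match the new repo name
    version: "3.3.9"
```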
Awesome! That’s a nice workaround
thanks for helping
@es Would you mind creating a bug report in helmfile issues? I have a working local branch for the fix
@es Thanks for the writeup! Created the PR https://github.com/roboll/helmfile/pull/1816
Trying to helm repo add the same Helm chart repository to both Helm v2 and v3 in a single helmfile run had been producing an incomplete result, with the latter helm repo add being skipped. This fixes…
thank you @mumoshu
Anyway, it would be great if you could provide us a more complete example for reproduction. At this point it's too hard to say if it's a bug or not
@mumoshu I think I have a fundamental misunderstanding of how to use selectors for sub-helmfiles. You already have a basic skeleton of my repo/condition although I’ve made many changes to it so I’m gonna have to articulate my ask.
I’m using a layered approach to my helmfile project as you know. If you look in the generic
directory there is a 01a-…
and 01b…
directory for a tier of tools in each directory so I want to create a selector that would cause helmfile to 1) run everything in the respective 01a-…
and 01b-…
directories and 2) I want the flexibility to be able to just run one helmfile within a subdirectory of say, 01b-…
(essentially just installing a single chart). It seems to me that the documentation says that the selectors are defined in the helmfiles
directive of the parent directories to those sub-directories.
given: <project root>/helmfile.d/generic/01a-tier1
and <project root>/helmfile.d/generic/01b-tier2
if I want to be in helmfile.d
to run helmfile and I want to install just charts in 01b-tier2
then the helmfile.yaml
in helmfile.d
would have to define the selectors and point to the helmfile.yaml
in generic
defining the selectors with a path:
pointing to helmfile.yaml
in 01b-tier2
and so forth, correct?
My helmfile.yaml
in helmfile.d
looks like:
# helmfile.yaml in helmfile.d
---
helmfiles:
- "generic/*"
- path: "generic/common/*"
My helmfile.yaml
in generic
is:
# helmfile.yaml in generic
---
helmfiles:
- "*/*.yaml"
- path: "common/*"
- path: "01a-network-and-proxies/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=network-and-proxies"
- path: "01b-secrets-management/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
and finally, my helmfile.yaml
in 01b-…
for instance is:
# helmfile.yaml in security-management
---
helmfiles:
- "*/helmfile.yaml"
- path: "../common/*"
- path: "certmanager/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
- "app=certmanager"
- path: "vault-operator/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
- "app=vault-operator"
- path: "vault-secrets-webhook/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
- "app=vault-secrets-webhook"
- path: "dex/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
- "app=dex"
- path: "oauth2-proxy/helmfile.yaml"
values:
- {{ .Values | toYaml | nindent 8 }}
selectors:
- "tier=secrets-managment"
- "app=oauth2-proxy"
I’m inclined to think that that is incorrect, though, because when I run sync --selectors tier=secrets-managment,app=dex
(or delete
) for instance, it runs everything:
Why are you giving selectors:
for every sub-helmfile? What if you just omitted that and let it install all when helmfile sync
and let it install only part of releases with helmfile -l foo=bar sync
?
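What that suggestion could look like as a minimal sketch (chart name and label values assumed for illustration): put labels on the releases themselves and drop the hard-coded selectors: entries from the parent helmfile. Then a plain helmfile sync installs everything, while helmfile -l tier=secrets-management,app=dex sync narrows the run.

```yaml
# Hypothetical sketch: labels on the release instead of selectors in the parent
releases:
  - name: dex
    chart: dex/dex              # chart name assumed for illustration
    labels:
      tier: secrets-management
      app: dex
```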
I guess because when I tested that, it didn’t work as I’d hoped…
maybe I did something wrong.
I was just looking at https://github.com/roboll/helmfile/blob/master/examples/README.md and I saw that helmfile is able to use the release labels…
so I’ll check that out again and see if I can find out why that doesn’t work.
nevertheless, I would like to be able to assign each full directory to a tier
sorry for the slow response. I was at the store.
very cool. That seems to be working. Not sure what I was doing wrong before…
noice!
I have to go to the store again to get something for my wife but if you can answer this while I’m gone… I thought I read in the docs once that there was a way to set a dependency up (like Terraform's depends_on) but of course, differently implemented. Did I misread that? For instance, our implementation of dex requires a running vault. I’m sure there are ways to set up the environment that such a thing could be jiggered but is there a native way for helmfile to do this type of functionality?
I’ll be back in about an hour
also, quick question about a thought on a feature request which may already exist. It would be super awesome if I could run an argument that would just show me the charts which would be affected by an operation on requested selector(s).
Perhaps that might be something like a “dry-run” feature?
what: Add option to process helmfile.yaml, but not execute helm. why: This would be helmfile for the development use-case. We're working on a master helmfile, kind of like a "distribution" of…
Or if you’re willing to just review which releases are being selected by a specific selector, I think helmfile -l foo=bar list
works, as unlike helm list
, helmfile list
is able to list not-yet-installed releases
Have you tried helmfile --interactive apply
?
It stops after printing the list of releases to be deleted and updated, and prompts you for confirmation (y/n).
What if helmfile apply --dry-run
worked exactly like that and automatically exited without prompting?
list works great!
I haven’t tried --interactive yet. I’ll give it a go…
as for --dry-run
, I am not sure (yet) of how helm
asserts --dry-run. I would suppose that it does an actual dry-run against the cluster (like kubectl apply --dry-run=(client|server)
does (depending)). I would have to research how kubectl
asserts dry-runs first and how that correlates to how helm
asserts dry runs before intelligently answering the last question. Basically, when I tell something to do a dry-run, I’m assuming I’m performing several actions which will test the actual application state I’m requesting all the way to the end without actually committing anything to k8s.
bbiab
cool. nm. don’t have to go now.
2021-04-28
I thought I read in the docs once that there was a way to set a dependency up (like Terraform's depends_on) but of course, differently implemented. Did I misread that?
For instance, our implementation of dex requires a running vault. I’m sure there are ways to set up the environment that such a thing could be jiggered but is there a native way for helmfile to do this type of functionality?
needs
?
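A minimal sketch of needs for the vault/dex case mentioned above (chart names assumed for illustration): dex is only installed after vault-operator is up.

```yaml
# Hypothetical sketch of a release-level dependency with `needs`
releases:
  - name: vault-operator
    chart: banzaicloud-stable/vault-operator
  - name: dex
    chart: dex/dex
    needs:
      - vault-operator   # use "<namespace>/<release>" if it lives in another namespace
```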
@mumoshu if you’re available: how on earth would one define globally used values for all releases in an environment. This has been my absolute biggest blocker. Even better is that the global scoped values.yaml could be also templatized. I cannot figure out how to achieve this goal. Nothing I’ve tried works.
something like this?
values:
- someGlobalValue: foo
---
values:
- {{ .Values | toYaml | nindent 2 }}
- anotherGlobalValue: {{ .Values.someGlobalValue }}bar
---
releases:
- name: foo
chart: somechart
values:
- some: {{ .Values.someGlobalValue }}
another: {{ .Values.anotherGlobalValue }}
- name: bar
chart: somechart
values:
- some: {{ .Values.someGlobalValue }}
another: {{ .Values.anotherGlobalValue }}
yes, this is the PR to add this to the docs: https://github.com/roboll/helmfile/pull/1808