#helmfile (2020-03)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-03-04
I’ve just noticed a little issue with paths. When I run a helmfile command against some custom path, and that helmfile contains a “helmfiles:” block, I have to make the helmfiles’ values path relative to the CLI invocation directory, i.e.
I run helmfile -f environments/dev/helmfile.yaml
inside this helmfile there is helmfiles:
block:
helmfiles:
  - path: git::<https://my_user>:{{ requiredEnv "REPO_TOKEN" }}@my_domain.com/my_repo.git@deployment/helmfile.yaml?ref={{ env "INFRA_VERSION" }}
    values:
      - ../../values.yaml
The folder structure is the following
├── environments
│   └── dev
│       ├── helmfile.yaml
│       ├── values.yaml
Is this an expected behaviour? Docs say the path in the manifest should be relative to this manifest.
The other thing. It seems terraform-helmfile-provider doesn’t respect a custom path for helmfile.yaml
Say, a config
resource "helmfile_release_set" "common_stack" {
  path              = "custom_helmfile.yaml"
  working_directory = path.module
  ...
ends up with
specified state file helmfile.yaml is not found
but it works for the name helmfile.yaml
Ok, it seems one cannot just change the path or name of the helmfile, as tf wants to make a diff using the old file as well :). So, at this time one needs to have both helmfile files present in the filesystem. Not that obvious at first)
Hello :slightly_smiling_face:
This will be my first post here, so please forgive me if the answer to the question is too obvious. I’ve created a dead-simple helmfile structure; here you can see the project tree:
.
├── chart
│   └── example
│       ├── charts
│       ├── Chart.yaml
│       └── templates
│           ├── deployment.yaml
│           ├── _helpers.tpl
│           ├── ingress.yaml
│           ├── NOTES.txt
│           ├── serviceaccount.yaml
│           ├── service.yaml
│           └── tests
│               └── test-connection.yaml
├── helmfile.d
│   └── 00-base.yaml
├── repositories.yaml
├── secrets
│   └── example
│       ├── dev
│       │   └── secret.yaml
│       └── qa
│           └── secret.yaml
└── values
    └── example
        ├── dev.yaml
        └── qa.yaml
In the 00-base.yaml helmfile I can use the {{ requiredEnv "PLATFORM_ENV" }} environment variable.
On the other hand, my value files would contain some env vars too e.g.
image:
  tag: ${MAJOR_NUMBER}.${MINOR_NUMBER}.${PATCH_NUMBER}
As I experienced, the requiredEnv go template solution does not work this way.
Do you have any hint on how to solve this problem? Am I on the right track with this logic at all?
@Norbert Fenk - You can use go templates in your values files, but they need to have a gotmpl extension. https://github.com/roboll/helmfile#templates
For example: if you rename values/example/dev.yaml to values/example/dev.yaml.gotmpl, you’d be able to use {{ requiredEnv "PLATFORM_ENV" }} in that template as well.
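To illustrate, a minimal sketch of what the renamed values file might look like (the keys and the default values here are just examples, not from the original project):

```yaml
# values/example/dev.yaml.gotmpl
# Rendered as a go template because of the .gotmpl extension,
# so env vars can be interpolated before the values reach helm.
platformEnv: {{ requiredEnv "PLATFORM_ENV" }}
image:
  tag: {{ env "MAJOR_NUMBER" | default "0" }}.{{ env "MINOR_NUMBER" | default "0" }}.{{ env "PATCH_NUMBER" | default "0" }}
```

requiredEnv fails the run if the variable is unset, while env with a default keeps rendering.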
Ohhh nice! Thank you, I will give it a try
2020-03-05
Is there a way to add logic in a helmfile configuration to detect helm v2.x?
We have a large helmfile repo, a folder for each kube namespace. I want to cut over one of those namespaces/folders to helm3 but add protection against someone running helmfile with helm 2.x
In my team we use both helm 2 and 3 on different clusters and we ship a docker image containing all the binaries such as helm (named helm2 and helm3), helm plugins, helmfile, etc. What happens is t…
somewhat related
2020-03-06
Hello,
are environments taken into account when calling helmfile template?
It seems not, and the docs only talk about helmfile sync
I hope I have understood correctly, but environments are taken into account when using template, you just have to add the flag. helmfile -e environment_one -l name=testservice template and helmfile -e second_env -l name=testservice template should yield different results, assuming the values you have set in the different environments’ value files are different.
hi. what about the default env? will it be taken if I don’t pass -e?
I’m not sure I understand your question. When it comes to working with different environments you should have something like this at the top of your helmfile.yaml file:
bases:
  - common/environments.yaml
And then that file (common/environments.yaml) might look like this:
environments:
  env_one:
    values:
      - common/val1.yaml
  env_two:
    values:
      - common/val2.yaml
  default:
    values:
      - common/default.yaml
Then the values will be taken from common/val1.yaml if you add -e env_one, from common/val2.yaml if you add -e env_two, and from common/default.yaml if you don’t supply an -e flag at all
I have this in my helmfile.yaml
environments:
  default:
    values:
      - default.yaml.gotmpl
and the file default.yaml.gotmpl
I see it is processed when enabling debug logs, but the value is not passed
i run helmfile template, no additional cmd arguments
is the path to the file correct?
I see the file is processed in debug logs, so, yes, the path is correct
Can you post some snippets of what the different files contain, what is happening and what you expect to happen?
I expect values from env files to be passed to upstream charts, i.e. available via {{ .Values }}
Instead, the values are ignored (although the files are processed)
the same values passed via helmfile template --set are working as expected (they change chart rendering)
basically I have a single value that controls whether some CRD will be rendered or not
I have a feeling that environments
• are either completely broken
• or specifically don’t work for the helmfile template command
https://github.com/roboll/helmfile#environment-values
• Also, since you’re working with .gotmpl values, it’s important to remember that for value files ending with .gotmpl, template expressions will be rendered
• for plain value files (ending in .yaml), content will be used as-is
But it’s very hard to help more without seeing what you’re trying to do
I even inlined values inside helmfile environments, without using files
still doesn’t work
I’m trying to pass values from environments as explained in the link you’ve just sent, which I’ve already been reading for a day
helmfile template just ignores those values
can you please post some example with your actual code so that it is possible to see what is happening please?
environments:
  default:
    values:
      - my:
          value: false

repositories:
  - name: qwe
    url: .... actual repo url ....
    username: '{{ requiredEnv "HELM_USERNAME" }}'
    password: '{{ requiredEnv "HELM_PASSWORD" }}'

releases:
  - name: chart1
    chart: qwe/chart1
    version: 1.0.0
    values:
      - chart1.yaml
  - name: chart2
    chart: qwe/chart2
    version: 1.0.0
    values:
      - chart2.yaml
$ helmfile template
my.value is not passed to chart1 and chart2, since templates are rendered as though my.value=true
chart1.yaml and chart2.yaml are empty
You could try to reference a yaml file instead of adding the values directly in environments.default.values, like so:
testfile.yaml
---
my:
  value: false
---
helmfile.yaml
---
environments:
  default:
    values:
      - testfile.yaml
I did this and described this above
the same result
@Jonathan here’s an example
Expecting port to be set from environment - 8888, instead it is just empty
$ helmfile template --skip-deps
Templating release=chart1, chart=chart1
---
# Source: chart1/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chart1
spec:
  ports:
    - port:
      targetPort: http
      protocol: TCP
      name: http
But when passing values via cmd line - it works
$ helmfile template --skip-deps --set servicePort=9090
Templating release=chart1, chart=chart1
---
# Source: chart1/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chart1
spec:
  ports:
    - port: 9090
      targetPort: http
      protocol: TCP
      name: http
So the obvious assumption - helmfile template just doesn’t support environments
$ helmfile.exe --version
helmfile version v0.100.0
yeah, I see what you mean!
one solution could be to do something along the lines of this:
chart: stable/{{`{{ .Release.Name }}`}}
namespace: default
missingFileHandler: Warn
values:
  - config/{{`{{ .Release.Name }}`}}/values.yaml
  - config/{{`{{ .Release.Name }}`}}/{{`{{ .Environment.Name }}`}}.yaml
And have a configuration as such for each release
I’m not looking for an immediate solution, I want to understand why what’s described in the docs doesn’t work
also I’ve tried helmfile sync - the same error. so the bug is rather global
@Erik Osterman (Cloud Posse) Might have an idea?
@mumoshu would definitely be able to help when he’s around next
(Sorry! At the SCALE conference today - will take a look later today)
I’ll ping you
I think this is caused by what is essentially a documentation issue in that Environment Values are not the same as Values for helm releases (this threw me a lot when I first started using helmfile). Environment values are just arbitrary values you can set per environment, but they have nothing to do with values that get passed to helm as part of your release.
going off your example in the zip file, you want your helmfile to look something like
environments:
  default:
    values:
      - servicePort: 8888

releases:
  - name: chart1
    chart: ./chart1
    values:
      - servicePort: {{ .Environment.Values.servicePort }}
which takes the environment “value” and explicitly maps it to a helm value (i.e. a value passed to helm)
https://github.com/roboll/helmfile/issues/1048 describes how helmfile should probably rename them to something less confusing
Yes, from what I understood you should set these values on releases explicitly. Ex.:
# default.yaml
mysvc:
  chartVersion: 0.12.0
  imageTag: 4-22
  replicaCount: 1

# helmfile.yaml
environments:
  default:
    values:
      - default.yaml

releases:
  - name: mysvc
    namespace: {{ .Environment.Name }}
    chart: chartmuseum/mysvc
    version: {{ .Environment.Values.mysvc.chartVersion }}
    values:
      - image:
          tag: {{ .Environment.Values.mysvc.imageTag }}
        replicaCount: {{ .Environment.Values.mysvc.replicaCount }}
Oh, it’s already been answered by @Graeme Gillies. Hadn’t noticed at first)
This section https://github.com/roboll/helmfile#note says that environment values are available via .Values after this PR was merged: https://github.com/roboll/helmfile/pull/647
but in reality they are not available, only via .Environment.Values
Thanks everyone for your support!
Well, it should be available within helmfile.yaml.
How did you confirm that it isn’t working?
Anyways this doesn’t work as state values and chart values are completely different things
environments:
  default:
    values:
      - my:
          value: false

repositories:
  - name: qwe
    url: .... actual repo url ....
    username: '{{ requiredEnv "HELM_USERNAME" }}'
    password: '{{ requiredEnv "HELM_PASSWORD" }}'

releases:
  - name: chart1
    chart: qwe/chart1
    version: 1.0.0
    values:
      - chart1.yaml
  - name: chart2
    chart: qwe/chart2
    version: 1.0.0
    values:
      - chart2.yaml
as probably someone has mentioned in this thread before
$ cat helmfile.yaml
values:
  - a: A
    b: B

environments:
  prod:
    values:
      - b: BB
        c: CC

releases:
  - name: envoy
    chart: stable/envoy
    values:
      - a: "{{ .Values.a }}"
        b: "{{ .Values.b }}"
        c: "{{ .Values.c }}"
helmfile --log-level=debug -e prod -f helmfile.yaml build
---
# Source: helmfile.yaml
filepath: helmfile.yaml

values:
  - a: A
    b: B

environments:
  prod:
    values:
      - b: BB
        c: CC

releases:
  - chart: stable/envoy
    name: envoy
    values:
      - a: A
        b: BB
        c: CC

templates: {}
@mumoshu but there’s no mention in the documentation about values being available at the root level of helmfile.yaml
https://github.com/roboll/helmfile
yep. i’ve introduced it as an experimental feature, promising that i would document it once the person who initially asked for it confirmed it works great.
i don’t remember who asked for this originally, but i think i received no explicit confirmation that it works on their end, so… i just missed documenting it since then
does my example work?
it should work. if it doesn’t work w/o any user error, it’s a bug we need to fix. if anyone confirms it does work for the intended purpose, i can write some docs for it in my spare time
or a PR for that is welcome
I created a new and empty directory, and created one file helmfile.yaml with the following content:
values:
  - a: A
    b: B

environments:
  prod:
    values:
      - b: BB
        c: CC

releases:
  - name: envoy
    chart: stable/envoy
    values:
      - a: "{{ .Values.a }}"
        b: "{{ .Values.b }}"
        c: "{{ .Values.c }}"
ran the following command: helmfile --log-level=debug -e prod -f helmfile.yaml build
which resulted in this output:
# Source: helmfile.yaml
filepath: helmfile.yaml

values:
  - a: A
    b: B

environments:
  prod:
    values:
      - b: BB
        c: CC

releases:
  - chart: stable/envoy
    name: envoy
    values:
      - a: A
        b: BB
        c: CC

templates: {}
helmfile --version
helmfile version v0.90.8
Seems to work for me!
works for me as well, will try to implement my scenarios using this, thanks! please document this, since the docs are extremely confusing and it seems I’m not the only one who wasted hours to figure it out
Eventually did what we wanted thnx
2020-03-08
2020-03-09
I’m missing something crucial, I think. I’m trying to tune my nginx-ingress and https://docs.cloudposse.com/kubernetes-optimization/tune-nginx/ looked really interesting. But what I do not get is how, with this helmfile https://github.com/cloudposse/helmfiles/blob/master/releases/nginx-ingress.yaml, some sort of compiled version of https://github.com/cloudposse/helmfiles/blob/master/releases/values/ingress.nginx.template ends up in the config map for stable/nginx-ingress.
I don’t think that file is actually used anymore. It was referenced previously as a workaround for an issue in the upstream chart, but that reference was removed with:
The configmap you are seeing populated likely comes from the upstream chart itself now
what: [ingress] Update nginx-ingress; why: get support for nginx.ingress.kubernetes.io/custom-http-errors
ah ok
2020-03-10
hello all
I have a question.
what type of env vars can I use in values.yaml.gotmpl?
can I use .Release.Name? or maybe data from another chart? E.g. if I have an ES server and would like to connect to that server from fluentd and kibana, should I move the creds to a global env, or can I set them in ES’s chart values file and reference them from the other values file?
No. You should define the release name within, say, your helmfile.yaml and pass it around.
helmfile doesn’t support propagating chart/release values from one release to another, for simplicity and reviewability.
{{ $releaseName := "whatever" }}
releases:
  - name: foo
    values:
      - thename: "{{ $releaseName }}"
  - name: bar
    values:
      - thefooendpoint: "https://{{ $releaseName }}:8080/api"
thanks for the answer
Does anyone know if you can specify dependency relations in Helmfile? We have a chart that requires another chart to be installed first.
helmfile should install releases in the order they are specified in the helmfile
also you can try this https://github.com/roboll/helmfile#dag-aware-installationdeletion-ordering
@Andriy Knysh (Cloud Posse) is right, however, the other trick is to set the concurrency to 1
that way it happens serially
ahhh
ok all good to know
to be clear, as long as you use the newer “DAG” feature by giving each release a needs: field, you don’t need to set the concurrency to 1.
helmfile just plans the deployment so that independent releases are installed concurrently while dependent releases are installed serially
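A minimal sketch of the needs: approach (the release and chart names here are illustrative, not from this thread):

```yaml
releases:
  # Installed first; my-app below declares a dependency on it.
  - name: cert-manager
    chart: jetstack/cert-manager
    namespace: kube-system

  # Waits for cert-manager because of the needs: entry.
  # needs: entries are [namespace/]release-name references.
  - name: my-app
    chart: ./charts/my-app
    needs:
      - kube-system/cert-manager
```

Releases without a needs: relation to each other can still be installed concurrently.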
oh right! thanks @mumoshu
To create a CRD
Create namespace functionality coming back to helm: https://github.com/helm/helm/pull/7648
2020-03-11
Has anyone been able to successfully use a pipe passing input to stdin when using the helmfile exec function? I’m trying to debug the code and I’m not entirely sure how it works with regards to putting inputs to stdin
2020-03-13
Hello everyone, do we need to run the helm 2-to-3 migration on every helm client machine?
This is part of the helm2 to helm3 migration
no, you can just keep using helm 2
The serious answer is yeah, on each deployed cluster with helm2 charts, you probably should upgrade to helm 3. Or better yet, just blow them away and start from scratch so you can practice treating them as cattle instead of pets.
Hi @Zachary Loeber
Yeah.. I can continue to use helm2. However, in the future I should upgrade to 3. I am thinking to migrate the helm charts on 1 client machine, and on the remaining client machines just remove the helm2 binary and place the helm3 binary.. will that help?
Are you running tillerless? Either way, yeah completely removing the helm2 deployments in the other machines and switching to helm 3 after the deployments have been updated is smart. I ran into issues with plugins that would not respect the local PATH variable (for instance) when attempting to use both versions.
going one or the other is going to be easier
Thankyou @Zachary Loeber
2020-03-14
@here while installing the helm chart for grafana from stable/grafana, the container keeps restarting. The only changes I made in the values file are:
- changing the service type from ClusterIP to NodePort (32323)
- setting the password for admin
- enabling persistence
I installed Prometheus first and it is working fine..
Has anyone faced a similar kind of issue? I could not find much in the logs, either from k8s or docker, for troubleshooting
root@master:~/grafana# k logs grafana-df5bc8c76-2zgt5 -n grafana
Error from server (BadRequest): container “grafana” in pod “grafana-df5bc8c76-2zgt5” is waiting to start: ContainerCreating
I suspect it could be an issue with resources… It’s started working..
2020-03-18
hello. I’ve been trying for a while now to import a defined template into another file but I can’t find a way to do this. I’ve submitted this issue https://github.com/roboll/helmfile/issues/1146. Has anyone tried to do this, or how can it be done? My template file for the values.yaml has reached more than 1000 lines and a vast majority of those are {{ define “test” }} templates. I want to extract those from this file, move them into separate files, and import them as needed. Thank you
I’ve read most of the docs but I don’t see any clear reference or some examples of doing this
I found this from the godoc but couldn’t find a way to do this
By construction, a template may reside in only one association. If it's necessary to have a template addressable from multiple associations, the template definition must be parsed multiple times to create distinct *Template values, or must be copied with the Clone or AddParseTree method.
Parse may be called multiple times to assemble the various associated templates; see the ParseFiles and ParseGlob functions and methods for simple ways to parse related templates stored in files.
~is it crazy for me to look for a solution where helmfile apply triggers a blue/green deployment (instead of a rolling deployment) (no istio)?~ Disregard. Taking a different approach
Here’s the gist of it
## Get virtual service color label
{{ $currentColor := env "CURRENT_COLOR" | trim | replace "<none>" "" | default "blue" }}
## Map that defines the workflow { blue => green, green => blue }
{{ $rules := dict "blue" "green" "green" "blue" }}
{{ $color := get $currentColor $rules }}
that’s at the top of the release to determine the next color.
And then we define the helm release like this:
releases:
  #
  # References:
  # - <https://github.com/cloudposse/charts/blob/master/incubator/monochart>
  #
  - name: '{{ printf "%s-%s" $release_name $color }}'
    labels:
      app: '{{ $release_name }}'
      color: '{{ $color }}'
so that takes care of deploying the release to an alternating blue/green release.
but we still need to flip the traffic.
we’re using istio virtual gateways
for that we deploy a second release that handles that using a lot of hooks. It’s a bit nasty.
# Non-application-deployment-specific release
# This means virtual services and destination rules
# and non-application CRDs
- name: '{{ $release_name }}'
  labels:
    pull-request: "true"
  chart: "cloudposse-incubator/monochart"
  version: "0.18.4"
  wait: true
  force: true
  recreatePods: false
  needs:
    - '{{ printf "%s-%s" $release_name $color }}'
  hooks:
    - events: ["cleanup"]
      showlogs: true
      command: "/bin/sh"
      args: ["-c", 'echo {{ $release_name }}-{{ $currentColor }}']
    - events: ["cleanup"]
      showlogs: true
      command: "/bin/sh"
      args: ["-c", 'helm list -q {{ $release_name }}-{{ $currentColor }}']
    - events: ["cleanup"]
      showlogs: true
      command: "/bin/sh"
      args: ["-c", 'helm list -q {{ $release_name }}-{{ $currentColor }} | xargs -rI {} helm upgrade {} cloudposse-incubator/monochart --set replicaCount=0 --reuse-values']
  values:
    - fullnameOverride: '{{ requiredEnv "APP_NAME" }}'
      ## Ingress is a workaround to register the application in Forecastle.
      ## You can remove it once this issue is fixed: <https://github.com/stakater/Forecastle/issues/73>
      crd:
        {{- if .Environment.Values.oidc_ingress }}
        "forecastle.stakater.com/v1alpha1":
          ForecastleApp:
            default:
              enabled: true
              spec:
                icon: '<https://www.ruby-lang.org/images/header-ruby-logo.png>'
                name: '{{ env "RELEASE_NAME" | default "service-dev-00" }}'
                group: '{{ env "PORTAL_GROUP" | default "Services" }}'
                url: '{{ env "APP_SCHEME" }}://{{ .Environment.Values.app_base_host }}'
                instance: '{{ env "ISTIO_GATEKEEPER_NGINX_CLASS" | default "mock-ingress" }}'
        {{- end }}
        "networking.istio.io/v1alpha3":
          DestinationRule:
            default:
              enabled: true
              spec:
                host: '{{ requiredEnv "APP_NAME" }}'
                subsets:
                  - name: blue
                    labels:
                      color: blue
                  - name: green
                    labels:
                      color: green
      # Service endpoint
      service:
        enabled: true
        type: ClusterIP
        selector:
          app: '{{ requiredEnv "APP_NAME" }}'
        ports:
          # tmp until monochart default values are fixed
          default: null
          http-default:
            internal: 3000
            external: 80
      virtualServices:
        default:
          enabled: true
          labels:
            color: {{ $color }}
            prev_color: {{ $currentColor }}
          gateways:
            {{- if and (eq (env "OIDC_INGRESS" | default "true") "true") .Environment.Values.oidc_ingress }}
            - istio-system/istio-oidc-ingressgateway
            {{- else }}
            - istio-system/istio-ingressgateway
            {{- end }}
          hosts:
            - "{{ .Environment.Values.app_base_host }}"
          http:
            - name: "default"
              match:
                - uri:
                    prefix: "/"
              route:
                - destination:
                    host: '{{ requiredEnv "APP_NAME" }}'
                    subset: {{ $color }}
We are doing blue green with Helmfile
If you are interested @Igor Rodionov can share details
Please share details @Igor Rodionov on how you are doing blue/green deployments
Whoa, no way?
nice
2020-03-19
can anyone help me with the above problem?
Is it possible to make a value immutable somehow? Say it’s set once(initially) and it should be ignored during subsequent runs or it returns an error if it’s changed? Have a use case right now, but haven’t checked out the possibilities yet.
Don’t think that would be possible as helmfile doesn’t maintain state and basically just constructs command line arguments for helm.
So if there was a way for you to do it in helm, then there would be a way to do it in helmfile
maybe a CRD, but yeah that’s not helm
2020-03-20
I want to do something like this, but it doesn’t seem to work, does anyone have a working example?
bar: {{ exec "sh" (list "-c" requiredEnv "FOOL" ) }}
file name is values.yaml.gotmpl
My solution so far:
bar: {{ exec "sh" (list "-c" (printf "echo %s" (requiredEnv "FOOL")) ) }}
but maybe there is a neater way
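For what it’s worth, if the goal is only to read the variable’s value (rather than execute it as a shell command), exec may be unnecessary altogether; a simpler sketch:

```yaml
# values.yaml.gotmpl
# Renders the env var directly; no subshell needed.
bar: {{ requiredEnv "FOOL" }}
```

One likely reason the first attempt failed: requiredEnv "FOOL" wasn’t parenthesized as a sub-expression, so the list call received the bare function name rather than its result.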
2020-03-23
hello any news with this issue? https://github.com/roboll/helmfile/issues/1146
Hello, So I have the following structure in my current directory helpers/_commons.tpl my_template.yaml.gotmpl the _commons.tpl contains a function definition as follows: {{- define "add_http_r…
hey! just replied in the issue
Hello, So I have the following structure in my current directory helpers/_commons.tpl my_template.yaml.gotmpl the _commons.tpl contains a function definition as follows: {{- define "add_http_r…
2020-03-25
2020-03-27
Adding @discourse_forum bot
@discourse_forum has joined the channel
2020-03-29
2020-03-30
I just discovered helmfile while working for quite a bit of time on a python script that essentially does what helmfile was designed to do. I’m looking to investigate and switch to it soon, looks really cool
I’ve started noticing that installed: false is not working for us anymore. Don’t know since when; haven’t used it extensively. It could be an issue with the tf helmfile provider though. Also we are using a pretty old version of helmfile right now, can’t keep pace with the frequency of helmfile releases). I’ll try to investigate.
Ok, I got it. Since helmfile diff cannot handle this, tf doesn’t see the difference and does nothing. tf plan shows “No changes. Infrastructure is up-to-date.”
@Andrew Nazarov I was thinking about this for a while.
Can we just add some kind of “summary of changed releases” and include names of the to-be-deleted releases in it?
Probably the output would look like the affected-releases part of the helmfile apply output, like:
Affected releases are:
  anotherbackend (charts/anotherbackend) UPDATED
  backend-v1 (charts/backend) DELETED
  backend-v2 (charts/backend) UPDATED
  database (charts/mysql) UPDATED
  front-proxy (stable/envoy) UPDATED
  frontend-v1 (charts/frontend) DELETED
  frontend-v3 (charts/frontend) UPDATED
  logging (charts/fluent-bit) UPDATED
  servicemesh (charts/istio) UPDATED
…followed by today’s helmfile diff output.
That seems readable and better than enhancing helm-diff to somehow show deletion of every k8s resource contained in the deleted release
thx! i’ll address this in https://github.com/roboll/helmfile/issues/1072
2020-03-31
If I wanted to “bootstrap” certain releases all at once on different clusters (cert-manager, nginx-ingress etc.), how would one “loop” through all of the clusters in a helmfile and apply certain subcharts such as the ones I mentioned to each cluster?
You can manipulate kubeContext, e.g. make it dependent on the environment you are running helmfile against.
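A hypothetical sketch of that idea, assuming your kube context names follow a per-environment pattern (the context naming scheme and chart name here are made up):

```yaml
environments:
  dev: {}
  prod: {}

releases:
  - name: cert-manager
    chart: jetstack/cert-manager
    # Target a different cluster per environment, e.g.:
    #   helmfile -e dev sync   -> context "dev-cluster"
    #   helmfile -e prod sync  -> context "prod-cluster"
    kubeContext: '{{ .Environment.Name }}-cluster'
```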
labels are very useful for this. you could make a label that’s something like infra: true and then run helmfile with infra=true as a selector
so each chart you want to install as part of your infrastructure package, you’d label with infra: true
(the label name is arbitrary, as is the value; this is just how I do it)
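A minimal sketch of that labeling approach (the chart names and the infra label are illustrative):

```yaml
releases:
  - name: cert-manager
    chart: jetstack/cert-manager
    labels:
      infra: "true"
  - name: nginx-ingress
    chart: stable/nginx-ingress
    labels:
      infra: "true"
  - name: my-app
    chart: ./charts/my-app   # no infra label, skipped by the selector

# Then target only the infrastructure releases against each cluster:
#   helmfile -l infra=true sync
```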
@James Huffman that sounds swell, how is this applied across all clusters though?
So if I have 10 clusters and a release in each and in 8 I want to deploy nginx-ingress and cert-manager
how do you currently determine what goes into each cluster? or do you not have a mechanism so far?
@James Huffman so far I have just the app I deploy and include it in the release list of the helmfile
Maybe an include/function call where I pass the subcharts and the kubecontext?
you could do that. that’s how we feed helm chart versions into our setup. we pull in a file whose path is determined by environment variables