#helmfile (2020-05)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-05-01
Hi all, I am trying to deploy my values into a Jenkins environment using helm install jenkins stable/jenkins --values helm/jenkins-values.yaml but I keep getting a deprecation error. Any help would be highly appreciated
What is the error? I just deployed this successfully
2020-05-02
2020-05-04
Hi, should it be possible for a helmfile environment to have multiple values files with them merged? eg:
environments:
  default:
    secrets:
      - ../environments/default/secrets.yaml
    values:
      - ../environments/common.yaml
      - ../environments/default/values.yaml
so in that example, a value specified in environments/common.yaml would be overridden by a value set in environments/default/values.yaml?
yes
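To make the merge order concrete (illustrative keys, not from the thread): if environments/common.yaml contains

replicaCount: 1
logLevel: info

and environments/default/values.yaml contains

replicaCount: 3

then the default environment ends up with replicaCount: 3 and logLevel: info, because files later in the values: list override earlier ones key by key.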
2020-05-06
2020-05-08
Hi all, I installed some resources using helm3 in the default namespace. What would be the proper way to move everything to another namespace without disruption?
define ‘disruption’? as far as I know there is no way to move workloads across namespaces without recreating them. As such, the way to ensure both internal and external services are still available would be doing a blue/green deployment using some external load balancer/service (external to the helmfile deployment that is).
But what does it matter the namespace? did you deploy multiple workloads into one namespace and need to split them out now?
I deployed something in the default namespace which I don't like
I will probably end up creating a new release in another namespace and deleting the old one
oh, whoops. well I cannot tell you how to achieve zero disruption but if you used helmfile switching namespaces should not be a huge undertaking at least.
helm3 means that the deployment info is held per namespace
so you can simply target the new namespace to have both deployments running simultaneously
and it's not for an application, it's for some controller which handles other objects, so it can be tricky
then remove the old one.
but at least it's in a lower environment, so I can give it a shot
well, all would have been fine if I had created it in a non-default namespace at the beginning, but now I will have valuable experience with the outcome of two parallel releases in separate namespaces
thanks for advice, helpful as always
sorry I have nothing better off the top of my head
don't be, you helped me, and I really appreciate it
With disruption, via helmfile it could be done like:
- installed: false for the old release (beware the possible data loss, make dumps and stuff)
- fix the release definition, set installed back to true, and apply
- deal with data migration (restore dumps or reuse volumes if possible)
Or:
- create a definition of the new release
- migrate the data
- installed: false for the old one
That’s the only way that comes to my mind how this can be done with helmfile. I think I’ve seen an issue related to the same problem
Editing the namespace field in the helmfile does not redeploy the chart to the new namespace. e.g changing this: - name: kubernetes-dashboard namespace: kube-system chart: stable/kubernetes-dashboa…
could you deploy a new release into the different namespace and point traffic to that service instead? the service endpoint would be different because of the namespace. if it's ingress, I assume you could technically have 2 ingresses in different namespaces w/ the same endpoint and remove the old one once the new one is up?
Yes I believe @Andrew Nazarov and @btai have summarized the state very well! Thanks.
So it highly depends on the type of your service and how you’re exposing it to other in-cluster and inter-cluster services, via ClusterIP service or service loadbalancers or ingresses.
For example, for in-cluster services, you'd need to add a new release adjacent to your release in the old namespace, switch the dependent apps to use the new service, then remove the old release by setting installed: false and running helmfile apply.
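A minimal sketch of that approach, assuming Helm 3 (which tracks releases per namespace) and illustrative names:

releases:
  # old release, kept until the new one is verified, then disabled
  - name: my-controller
    namespace: default
    chart: stable/my-controller
    installed: false
  # new release in the target namespace
  - name: my-controller
    namespace: controllers
    chart: stable/my-controller
    installed: true

Running helmfile apply with installed: false removes the old release while the new one stays in place.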
Thanks all, I was able to switch the release/objects to the new namespace. Objects (deployment/pod) that I installed in the new namespace took precedence over the old ones; I realized this because the serviceaccount didn't have permission on some AWS resources, so it showed a permission error. Once I fixed the permission issues in the IAM roles everything started to work, and I safely removed the old release/objects from the default namespace
2020-05-09
2020-05-11
Hi guys. This is probably more related to helm-secrets than helmfile itself, but I am wondering if I am missing a flag or something when using helmfile.
I have a SOPS-encrypted secret, e.g. bitbucket.key, which does get decrypted by helmfile into bitbucket.key.dec, and the original file gets deleted. But the problem is that helmfile still tries to load the original bitbucket.key, which obviously doesn't exist:
failed to read jenkins.yaml: environment values file matching "../secrets/bitbucket.key" does not exist in "."
It should be loading bitbucket.key.dec, or decrypt it to bitbucket.key in the first place. Does anybody know what I am doing wrong here? Thanks in advance for your help.
you should name your original file with the .yaml extension
see https://github.com/roboll/helmfile/blob/master/pkg/helmexec/exec.go#L217 for more details
Thanks @Vincent Behar Will give it a try now
Hi guys, helmfile diff shows a change in the deployment but helmfile sync does not re-create the pods. Why is that?
Oh my bad, this is a helm question rather than a helmfile one
Can anyone help me understand the difference between values and valuesTemplate? The only place I see valuesTemplate mentioned in the docs is https://github.com/roboll/helmfile/blob/master/docs/writing-helmfile.md - but it's still not clear to me how they are different. I tried reading through issue 428 as mentioned in the doc and unfortunately it did not clear anything up for me.
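For reference, a rough sketch of the distinction as I understand it (names are illustrative): values entries that are inline maps are passed to the chart as-is, while inline maps under valuesTemplate are rendered as templates per release first, so release-scoped expressions can be expanded:

templates:
  default: &default
    valuesTemplate:
      - labels:
          release: {{`{{ .Release.Name }}`}}  # rendered for each release that uses the template
releases:
  - name: myapp
    chart: stable/myapp
    <<: *default
    values:
      - replicaCount: 2  # plain inline value, passed through untouched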
2020-05-12
Hi! does anyone know how to get past running helmfile against GKE (helm2/tiller installed with TLS enabled) with this error:
Error: transport is closing
i had not enabled tls in the helmfile
a better question is, how can I enable helm tls for a specific helmfile environment?
helmfile -i apply --args helmDefaults.tls=true
does not work
ok this works:
helmfile -i apply --args --tls
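If you'd rather not pass --args --tls every time, one untested option (assuming helm 2 with a TLS-enabled tiller) is to set it in a state file that only the TLS-enabled environment or cluster uses, something like:

helmDefaults:
  tls: true
  # if tiller also requires client certificates, these are assumed to be supported as well:
  # tlsCACert: ca.pem
  # tlsCert: cert.pem
  # tlsKey: key.pem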
I've got some values that I need to set for all of my releases based on values from AWS SSM Param store, anyone know how to make something like this work?
templates:
  default: &default
    valuesTemplate:
      - secret: secretref+awsssm://{{ .Values.repo }}/{{ .Environment.Name }}/secret?region=us-west-1
releases:
  - name: app-{{ .Values.branchSlug }}
    version: 1.0
    values:
      - repo: app
    <<: *default
What if you try {{`{{ .Values.repo }}`}}?
I had the same thought, but I still get executing "stringTemplate" at <.Values.repo>: map has no entry for key "repo"
Oh, my bad, wasn't careful enough at first. I might be wrong, but I think it won't work at all, as it's impossible to use .Values to reference anything from values: of the release.
I was afraid that might be the case.
It makes sense why it wouldn’t work, but it sure would be nice.
So basically this .Values reference is a reference to so called “environment values”, not to values of a release (which are chart values) :)
That’s helpful, thanks. I know you can use the release name in the templates section (for example
secret: secretref+awsssm://{{`{{ .Release.Name }}`}}/{{ .Environment.Name }}/secret?region=us-west-1
I wonder if I could use that or another value from .Release
to do what I’m attempting. I’ll try it out.
I’m still frequently confused by where things come from and where they’re available in helmfiles.
As is I’m getting error during apps.yaml.part.0 parsing: template: stringTemplate:28:48: executing "stringTemplate" at <.Values.repo>: map has no entry for key "repo"
2020-05-13
Does anyone know how one can execute a command inside the deployed pod of a specific release?
That is not something you would do as part of a helm release unless it were passed in as part of the starting argument for a container.
you can use init containers to run initialization commands against a shared volume. otherwise using pre-sync hooks you can spin up containers to run commands (https://github.com/roboll/helmfile/issues/538)
HI, When reading the process to install the cert-manager chart (https://hub.helm.sh/charts/jetstack/cert-manager), you can see two steps before installing the chart: installing some CRD prior to th…
@Zachary Loeber thanks for the reply. This would be part of a ci/cd pipeline and triggering a server update after deployment. But this job can fail and might need to be re-triggered. Not sure if using initContainer in this case will be the best, especially in some cases where you might want to run the update command optionally
What is it you’re trying to do? It is possible to run a command inside a specific existing pod, but I don’t recommend it. Using a job (https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) is usually a better option.
I’ve done something similar in the past for one-shot operations via a single pod (for the most part the only reason you’d ever want to run just a single pod). The helmfile looked something like this:
- name: kafka-init
  chart: incubator/raw
  namespace: database
  values:
    - resources:
        - kind: Pod
          apiVersion: v1
          metadata:
            name: kafka-init
          spec:
            restartPolicy: Never
            containers:
              - name: kafka-init
                image: {{ requiredEnv "CONTAINERREPOSITORY" }}/{{ requiredEnv "STACK_KAFKA_INIT_IMAGE" }}:{{ requiredEnv "STACK_KAFKA_INIT_IMAGE_TAG" }}
                # command:
                #   - /usr/bin/init_connectors.sh
                imagePullPolicy: Always
                env:
                  - name: 'JDBCPASSWORD'
                    value: '{{ env "JDBCPASSWORD" | default "secretjdbcpassword@azurekeyvault" }}'
                  - name: 'JDBCURL'
                    value: '{{ env "JDBCURL" | default "secretjdbcurl@azurekeyvault" }}'
                  - name: 'STORAGEACCOUNTNAME'
                    value: '{{ env "STORAGEACCOUNTNAME" }}'
                  - name: 'STORAGEACCOUNTKEY'
                    value: '{{ env "STORAGEACCOUNTKEY" | default "secretstorageaccountkey@azurekeyvault" }}'
                  - name: 'JDBCDATABASE'
                    value: '{{ env "JDBCDATABASE" }}'
                  - name: 'JDBCUSER'
                    value: '{{ env "JDBCUSER" }}'
                  - name: 'JDBCSERVER'
                    value: '{{ env "JDBCSERVER" }}'
                  - name: 'JDBCSCHEMA'
                    value: '{{ env "JDBCSCHEMA" }}'
                  - name: SCHEMAREGISTRYHOST
                    value: '{{ env "STACK_KAFKA_SCHEMA_REGISTRY" }}'
                  - name: 'KAFKACONNECTHOST'
                    value: 'confluent-kafka-cp-kafka-connect.database.svc'
                  - name: 'ZOOKEEPERHOST'
                    value: 'STACK_ZOOKEEPER_HOST'
                  - name: 'CONNECT_PLUGIN_PATH'
                    value: '/usr/share/java'
                  - name: 'CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE'
                    value: 'false'
                  - name: 'CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE'
                    value: 'false'
                  - name: 'CONNECT_INTERNAL_VALUE_CONVERTER'
                    value: 'org.apache.kafka.connect.json.JsonConverter'
                  - name: 'CONNECT_INTERNAL_KEY_CONVERTER'
                    value: 'org.apache.kafka.connect.storage.StringConverter'
                  - name: 'KAFKA_BOOTSTRAP_SERVERS'
                    value: '{{ env "STACK_KAFKA_BOOTSTRAP_SERVERS" }}'
                  - name: 'KAFKA_BROKERS'
                    value: '{{ env "STACK_KAFKA_DEFAULT_REPLICA_COUNT" | default "3" }}'
That is overly complex and you can see where I later had the image itself run a default command (thus commenting out the command)
but it was for initializing a kafka instance after the fact in a cicd pipeline
Interesting
Well for my case it’s pretty straightforward I think
I have a helmfile which updates 2-3 releases or so at once when running helmfile sync
yeah, same deal though. you could use a raw chart to run a pod
On one specific release of the 3 after it has properly rolled out
but also, if the commands are simple enough, you could also simply use kubectl run as well
It should exec -it update-modules in any of the replica pods
yeah it's just a single command but it should be run on that specific deployment after it rolled out properly
and I don’t want to maintain separate variables in my gitlab-ci where I hardcode the deployment names or so
gotcha, maybe join the office hours chat happening right now and ask if anyone else has better ideas
ah nice
I can just bust in and ask questions?
yup, Erik will ask multiple times usually
2020-05-14
Do recent versions of helmfile still support helm 2?
Hello, I'm having an issue with helmfile; I can't figure out if I'm doing something wrong or if it's intended. I have this helmfile which calls another helmfile. You can see that I'm trying to override one value:
helmfiles:
  - path: base-opt-in/kube-janitor.yaml
    values:
      - kubejanitor:
          dryRun: false
This is the other helmfile where I have all my default values (that I do not want to repeat):
repositories:
  - name: hjacobs
    url: <https://raw.githubusercontent.com/hjacobs/kube-janitor/master/unsupported/helm>
releases:
  - name: kube-janitor
    chart: hjacobs/kube-janitor
    namespace: kube-system
    values:
      - image:
          repository: hjacobs/kube-janitor
          tag: '19.12.0'
          pullPolicy: IfNotPresent
        kubejanitor:
          dryRun: true
          debug: true
          once: true
Problem is, though, that when I template this, the dryRun override doesn't get applied
am i doing something wrong?
opened issue in case it’s a better way to communicate https://github.com/roboll/helmfile/issues/1262
I'm having some issue with helmfile, i can't figure out if I'm doing something wrong or if it's intended: I have this helmfile which will call another helmfile. You can see that I…
we solved this issue! thx for the support
2020-05-15
Hi team, I have a requirement to combine 3 files into a configmap like this, which works: {{- range list "dev.properties" "properties.conf" "properties.json" }}. However, I want to pass these names from the values file and I could not work out the correct syntax. Can someone help? {{- range list .Values.propertiesFileEnv, .Values.propertiesFileCommon, .Values.propertiesFileJson }} <<< there is a syntax error here: "," is not supported, but what should the separator be?
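For what it's worth, Go template function arguments are separated by whitespace rather than commas, so (assuming those three keys exist in your values file) the range line would look like:

{{- range list .Values.propertiesFileEnv .Values.propertiesFileCommon .Values.propertiesFileJson }}
{{/* body unchanged from the hard-coded version; `.` is the current file name */}}
{{- end }}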
Hey all, can helmfile set a kubernetes auth context per release or only per helmfile? I have a bunch of releases I want to go out in parallel, but they are across different clusters
It should be possible to do per release. But I think recently I’ve seen the issue that this might not work
Hello, I use kubeContext override per release, and it's broken starting from v.0.106.0. Simple helmfile.yaml is following: helmDefaults: kubeContext: default releases: - name: cert-manager name…
Thank you!!
Looks like I'm able to do it just fine, I just can't set it both at the helmfile level and on the release
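For reference, a minimal sketch of per-release contexts (context, chart, and namespace names are illustrative; see the issue linked above for versions where combining this with helmDefaults.kubeContext was broken):

releases:
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
    kubeContext: cluster-a
  - name: external-dns
    namespace: kube-system
    chart: stable/external-dns
    kubeContext: cluster-b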
Hello all, I have a question: how would you propose pulling/referencing a helm chart from Azure Container Registry in helmfile? So if, for example, I have an ACR with a login server named acrX.azurecr.io, and inside of it there is a repository named repoX/chartX with its versions, how would I need to update my helmfile (or possibly some other files) to pull/reference that chart?
repositories:
  - name: ?
    url: ?
    username: ?
    password: ?
. . .
I've seen once that they have this /helm/v1 appended to the URL, but I don't know the exact format actually
Like
<https://acrX.azurecr.io/helm/v1/repo>
Oh, I was almost right)
Here are docs:
https://github.com/roboll/helmfile/blob/master/README.md#azure-acr-integration
I’ll only add that the authentication token for ACR is abysmally short-lived
if you are having issues running locally, ensure you auth to the acr just before syncing your charts
thanks for your answers, but this is happening in GitLab CI, and the lint job is failing since <https://acrX.azurecr.io/helm/v1/repo> is not reachable publicly
The same principles apply, the runner needs to be authenticated to the acr prior to linting
this is true of helm in general, not just helmfile
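A rough sketch of what that can look like (registry name and environment variables are placeholders; the CI job still needs to be able to reach and authenticate to the registry, e.g. with a service principal, before helmfile lint runs):

repositories:
  - name: acrX
    url: https://acrX.azurecr.io/helm/v1/repo
    username: {{ requiredEnv "ACR_SP_ID" }}
    password: {{ requiredEnv "ACR_SP_PASSWORD" }}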
2020-05-17
Hi all, is anyone using helmfile and kustomize together?
Hi guys, I suppose I am doing something wrong if I use one helmfile for staging and production and separate them by environment? Because if I specify the environment, the releases not belonging to it are deleted…
Kind of going through this myself. Currently I have them separated, which obviously makes the whole environment thing pointless. As best I can tell, the other option is to add conditionals for every release saying "if env then…."
It would be much more desirable if you could set those sorts of conditionals at the helmfile import level
Yes I ended up using conditionals in the end
I think I’ll probably do the same in the end
Would you happen to know if one can set values: in each specific environment and have the release inherit it?
It can be confusing. Only values set at the release level are passed to the actual helm deploy
All the other values set everywhere else are only visible to helmfile itself
My issue now (after just using conditionals) would be to use the prod.yaml or staging.yaml depending on the environment
What you can do is set values in the environment for helmfile to pick up on, and then dynamically load values files into the helm chart based on those values from helmfile
I assume production: values: - x.yaml is not how it works
hmm not sure I got that
How would I use the respective environment values in the release?
So in prod.yaml you might have "security=production" and then in the release you might have:
values:
  - vars/security/{{ .Environment.Values.security }}-security.yaml
Forgive typos and pseudo code. Typing from my phone
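Spelled out, that pseudo code might look something like this (paths and the security key are illustrative):

# environments/prod.yaml
security: production

# helmfile.yaml
environments:
  production:
    values:
      - environments/prod.yaml

releases:
  - name: myapp
    chart: ./charts/myapp
    values:
      - vars/security/{{ .Environment.Values.security }}-security.yaml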
Hmm I just want to point the release to the respective .yaml file, that's all
I believe I just demonstrated how one could do that in the above example?
ahhh so you set a random variable and you just use it in the values on the release
my bad, I thought that values: in environment == values in release
Got confused
Thanks for the explanation
Yeah that’s what I was trying to explain, albeit poorly, above
I think the naming here is a bit confusing in general
I just make a variable valuesFile
And I don’t need this anymore
using the same logic
Exactly. And I totally agree, it's confusing. I think they should be named helmfile_values and helm_release_values, or something to that effect
Yeah, or anything to distinguish them, especially since the example for environment is still loading an external yaml file
I get the fun of having a multi-regional environment, still not sure how I’m going to work regions into this whole thing
I’m just learning it, before I did my own python script to do the same thing and it was a lot of redundancy I think
Also I saw this note
Prior to this pull request, environment values were made available through the {{ .Environment.Values.foo }} syntax. This is still working but is deprecated and the new {{ .Values.foo }} syntax should be used instead.
This adds values to state files as proposed in #640. values: - key1: val1 - defaults.yaml environments: default: - values: - environments/default.yaml production: - values: - envir…
neither form is working for me…
I don’t get it
It’ just too confusing for me
This was revealing btw: https://www.reddit.com/r/kubernetes/comments/am5mcq/helmfile_how_to_deal_with_different_versions_in/
I’ve been reading the docs at https://github.com/roboll/helmfile. But as the title indicates I’m looking for a way to deal with different versions…
I tried every possible combination on earth to get a value from environments: x:
But nothing works, this is terribly confusing
hello everyone, I just got introduced to helmfile and this is turning out to be a great tool for our use case. Appreciate you all working on this, making it better day by day.
I got this working with a bunch of addons, but I want to have a time lag between 2 of my addons, i.e. I want to introduce some kind of a delay
2020-05-18
Hi, I was wondering if it's possible to modify a value from a hook. Use case: getting data from an existing secret with kubectl and passing it to another release as a value. Or is there another way to achieve that?
@Michael Seiwald I’m trying to achieve something similar, I’m building a pipeline where I deploy the application and then I need to run a command inside of a working pod as many times as needed until I get a good exit code
still fiddling with it
# Ordered list of releases.
helmfiles:
- "releases/coredns.yaml"
- "releases/external-dns.yaml"
- "releases/ingress.yaml"
- "releases/certmgr.yaml"
- "releases/certissuer.yaml"
- "releases/dashboard.yaml"
I have the above charts to be installed in multiple clusters. To achieve this, I organized my directory like this, using environments:
environments:
  cluster1:
    values:
      - ../cluster1.yaml
So I can do helmfile -e cluster1 sync. But this does not seem to be working that way. Any suggestions?
Getting the following error:
err: no releases found that matches specified selector() and environment(cluster1), in any helmfile
there is currently a bug / feature that you have to define environments: all the way down the pipe. Are you doing that?
not sure if I understand what you are saying?
quite literally, you have to put your environments: block in every helmfile
oh gotcha. let me try that
only the bottom level directories seem to be required in order to actually have the full fleshed out definition
but every file along the way needs at least this level of definition:
environments:
  prod:
or just define it once in yaml and include it in your bases: block.
I tried that, it did not work
bah
copy/paste idiocy
here, a partially working example
Fairly large set of scripts for crafting and working with devops tools - zloeber/CICDHelper
I think that works because you’re doign the bases in the same file as the release
I think the block needs the --- dividers around it
the bug/issue is with helmfile includes
right, and in the umbrella helmfile as well
unrelated: I am really enjoying your examples!
I wouldn’t use my crappy repo as the best example but I know that it does work
it’s so hard to find people showing their examples
ha, I’m working on my own kubernetes craftsman framework using helmfiles to vet out solutions quickly (for better or worse)
I’m somewhat embarrassed to be using so many makefiles but most the stuff we do are just scripts anyway and Makefiles are generally easy to rip apart for other purposes. But thanks good sir, glad it is helping a little
I’m still struggling to fully wrap my head around the best way to use helmfile so things are dry and multipurpose across various deployments
yea I’ve gone through about 3 iterations on my repo so far
I wanted to keep the command line simple, so I setup some simple go templating to do everything by env
so my top level or 2nd top level, i forget, does conditional includes based on EnvName
you have one public to share so I can rip off your ideas?
I can put up what I have somewhere
you see all the helmfiles/wip/*yaml files? I actually deployed them all at one point all using env vars and a huge env file
it felt…. wrong
I know I can do better and generalize more, I just have to learn more first. This is after 2-3 passes.
seems like you are on the right path though
one YAML line at a time
I dig the use of the preprod_common.yaml insertion for the default release block elements.
I’m going to do that too I think
kubeContext is probably a wise idea as well I suppose
gee, thanks for the huge insertion of self doubt on a Monday….
rofl
I just call that Monday
for real though, thanks for sharing. It is helpful
the kube context is invaluable for deploying multi-region multi-cluster
but thats where my pre_hook_bootstrap.sh comes in
it runs terraform with a -target
to generate kube configs from the terraform created cloud managed K8s clusters
I keep wanting to force the thing to use its own config file
I’m going to move that logic for kube contexts completely out of helmfile
and just have helmfile expect that these things exist
Sorry, to be specific, the logic for generating them dynamically to be available in ci/cd, I’m going to move out ofhelmfile (which is just running a bash script presync hook)
the kubecontext parameters will be staying
clever use of the presync hook
it makes the deployment really slow. as a bandaid I separated what would have been a parallel deployment of 5 clusters into a bootstrap of 1 arbitrary cluster and then the other 4 go after
but the slowness is due to terraform, not helmfile
that’s terraform for ya
2020-05-19
Does anyone have a suggestion on how to execute a command on a specified release pod? After I make the deployment I like to have a step in my CI pipeline that runs a module update which can fail. It should retry 4-5 times before calling it quits but for that I need to execute it in another step after deployment several times. Any suggestions?
it's unclear to me specifically what you're trying to do, but couldn't you just run a script to do the necessary things in a post-sync hook?
I also thought about that but I have two issues:
1. I always do a rolling deployment, meaning helmfile apply will always re-deploy the release (but let's say I can turn that off)
2. The stdout from the pod is not being displayed, even with showlogs: true, so I do not see the output of the update procedure
Any idea why that is?
in a bash script?
are you setting set -x
?
bash script indeed
set -x ?
The fact that I don’t know what that does is a good indication I might be doing something wrong
#!/bin/bash
set -x
debug mode
although anything going to stdout should be showing up for you
I don't understand why a rolling deployment makes anything different
I'm not deploying an actual script but running the command directly in the hook, but I assume the same principle applies
I configured my helm chart to generate a random annotation so the pods are re-created
But I can remove that, especially for the production instance
If the update fails I don't want to re-deploy the pods as many times
Sorry but I’ll need to understand your workflow/use-case better. I’m very confused at what you’re trying to accomplish
I have a CI pipeline that builds an image, tests the code and deploys the built image to a kubernetes cluster through helmfile. After the deployment is successful, an internal command (inside the pod) to execute an upgrade (kinda like a database migration) needs to be started and retried X times if it fails (because of potential db locks)
Does that sound reasonable?
it sounds like you need an init container or maybe a side-car
but as a more temporary solution you could certainly execute a bash script with for loops and conditionals to do what you need
init container would run once, if it fails then what?
the init container is running a script like anyone else when a pod starts
Also I want to be able to execute it depending on the environment and also have the logs in my pipeline
it could have some redundancy built in but you have some bigger questions. why is your DB migration always failing?
helmfile I believe can handle go templating to place hooks depending on environment
using conditional blocks
It’s not a database migration it’s a server upgrade which changes the database schema but it can hit pg locks if someone is using that table then
I don’t mean to tell you how to do your business so please don’t take offense if I’m getting off topic here
but it seems to me like you’d benefit from abstracting the logic of database schema outside of the container init and supply the result of which schema to use as an env var
Ah no, it's constructive and I'm still learning
for example if you could run the same database check as a pre-hook in the bash script , you could have whatever simple redundancy you want built in. a do/until that never ends until it works
then the container deploys and is guaranteed to run
it’s a running python server that updates the database on command from local files (that’s the core design) and it itself handles running the instance and subsequently upgrading the database (through a separate headless thread)
So I need the same configuration and server pod essentially to carry out this operation
Same env vars, same volumes etc etc
So if I create a side-car of some sorts or a cron it would have to be an exact duplicate of the running deployment in order for things to work properly
or a very large portion of it
also I should be able to trigger it from the gitlab ci pipeline, see the logs and even retry if needed
Of course I could also stop the main server, upgrade and then start it, and it would only need to be carried out once, but that means downtime
# Ordered list of releases.
helmfiles:
- "releases/external-dns.yaml"
- "releases/dashboard.yaml"
in helmfile, is there a way to introduce a time delay between 2 addons, like external-dns and dashboard? I specify an interval of 5m for external-dns so I do not hit the rate limit (this is in AWS). So between external-dns and dashboard I want a time delay. Is it possible? Any ideas?
Post sync hook with a pause?
thanks, started looking into that. It's great that helmfile offers options like presync, postsync
So we can run any bash script in these hooks?
or for that matter any script, not just bash
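A minimal sketch of that idea, assuming the sub-helmfiles in the helmfiles: list are processed in order so a pause at the end of the first delays the next (the 300-second pause is arbitrary); inside releases/external-dns.yaml:

releases:
  - name: external-dns
    chart: stable/external-dns
    hooks:
      - events: ["postsync"]
        showlogs: true
        command: "sleep"
        args: ["300"]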
I’m modifying some helm values files in a presync hook for a release. The same release loads those variable files as input to the helm chart. It would appear that the modifications are not making it into the helm chart
I’m not sure helmfile was actually ever supposed to work this way because according to the docs it loads the vars before any hooks
In what scenario do you need to modify helm values files directly? helmfile supports 'values.yaml.gotmpl' and can render the helm values file for you.
So, unfortunately, I’m working with a cloud provider that is less than optimal. One of the things they required with their cloud controller on kubernetes is that subnets are specified in the service annotations via their ID (Think AWS ARN).
We use automation (terraform) to provision the cloud infrastructure, so although the subnet IDs rarely change, I would still prefer the annotation to be dynamic in that it gathers the fact directly
otherwise I would have to hard-code it as a var and change it anytime our infra rebuild a subnet
@Raymond Liu unless even with that explanation you still think templating could help here? I believe the template would still need to be fed the info on the cloud objects
Ok, sure, I have done something like what you need. I use exec in values.yaml.gotmpl, for example:
{{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZaName") }}
ohhhh wow
I had not looked into exec at all
@Raymond Liu would you be willing to share with me an example of your values.yaml.gotmpl? sensitive bits removed of course
{{- $script := "../../../../scripts/get_cf_output.py" }}
resources:
  # ENIConfig: AZa
  - apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZaName") }}
      annotation:
        forceupdate: "k8s-1.16-20200505"
    spec:
      subnet: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "EksSecondaryCidrsSubnetIdAZa") }}
      securityGroups:
        - {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "NodeSecurityGroupId") }}
  # ENIConfig: AZb
  - apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZbName") }}
      annotation:
        forceupdate: "k8s-1.16-20200505"
    spec:
      subnet: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "EksSecondaryCidrsSubnetIdAZb") }}
      securityGroups:
        - {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "NodeSecurityGroupId") }}
I use helm chart incubator/raw
to deploy two resources
nice thank you
can I use environment values or other helmfile available values in the exec block ?
oh, i see you doing it
this is great, thank you.
2020-05-20
Is it possible to specify the helm version to use per release with helmfile?
I think you might actually be able to use hooks for this?
you could basically have some directory with a bunch of helm versions named by version:
helm/
helm2.x
helm3.0
helm3.1
helm3.2
and then in the prepare hook you could force-overwrite your symlink
ln -sf helm/helm3.2 /usr/local/bin/helm
(so long as parallelism disabled, which is on by default)
yea good point
I think there's a helmBinary setting, but not sure if it can be defined per release. If not, that would be my first thinking, to set it per release. I feel like this is supported. (I vaguely recall opening a feature request for it, but not sure)
Similar to what has been done in #1083, I think it would be useful to allow to specify the helmBinary for each release.
So I think our workaround for this was to define each release in its own helmfile .yaml and then define the helm binary in there. That way each one could depend on a different version. Then we include them all together in the main helmfile.yaml
Do you set anything to make it so they run in parallel ?
or just accept that you have a serial release
well, if you do this approach, you need no hooks and no symlinking
so parallel should be
yea, totally. thats great
I guess I’m confused though on how you tell helmfile that “these groups of helmfiles can all execute together instead of sequentially” ?
do you use bases to merge them? Kind of wondering: if I have two helmfiles, each defining environment values includes by region, and I merge them via bases, will those environment values get applied to each of the regions individually? or does it merge it all and each release would get vars from both environments?
Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles
so the point is that each $foo.yaml can define its own helmBinary
helmfiles are not merged.
so, in that example, the inside of - "releases/external-dns.yaml" would define its own helmBinary
and the same for the rest, and so on
sure, but also in this example every file in that helmfile list is executed sequentially, right?
this assumes you have all the helm binaries installed on the system
ok, you got me there - i can’t remember if this breaks parallelism or not.
https://github.com/roboll/helmfile/issues/591#issuecomment-492949771 – This is what I was referring to with bases and the merging of helmfiles
ok, that confirms it
one idea is that you could have a helm2.yaml and a helm3.yaml
then all the releases under helm2 are parallelized, etc.
yea, thats what I’m doing now with my regions
was hoping to leverage bases:
to make it all go at once
but I’m worried about what will happen to the region based vars on the merge
ok, yea, i can’t picture the mental calculus right now without a reference example to play with
same, I think I’ll just have to find the time to test it
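A sketch of the per-file helmBinary approach described above (paths are illustrative and assume both binaries are installed):

# helmfile.yaml
helmfiles:
  - path: helm2-releases.yaml
  - path: helm3-releases.yaml

# helm2-releases.yaml
helmBinary: /usr/local/bin/helm2
releases:
  - name: legacy-app
    chart: stable/legacy-app

# helm3-releases.yaml
helmBinary: /usr/local/bin/helm3
releases:
  - name: new-app
    chart: stable/new-app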
Hey guys! :wave: I'm new to helmfile and I was wondering if there's a way to pass context/arguments to templated values files that we reference from a release, similarly to the way we can pass context/arguments to templates when using include? In my case, I have a plain yaml file listing my releases (./releases.yaml) and I'm using the range operator to generate the release entries, with a reference to an external values file for each release, like so:
releases:
{{ range readFile "./releases.yaml" | fromYaml | get "releases" }}
  - name: {{ .name }}
    values:
      - ./values/{{ .name }}.yaml.gotmpl
{{ end }}
What I would like to do is pass the current data item (the current iteration of range) to the external values go template, so that I may dynamically reference its properties from within that template. Is it at all possible?
Hey all, can someone help me understand why I can't use this value being set in my environment? I'm setting a simple key/value and pulling in the file with what seems to be no issue, but if I try to access {{ .Values.region }} I still get this error that it's not defined
Not sure about your specific issue, but I do notice that your releases:
key has a typo: eleases:
thankfully that was just a copy and paste error lol
haha, ok, so that was not really the content of your file then!
just the relevant parts, minus a character
So, it appears if I wrap these values in {{`{{ .Values.region }}`}} it works
So it was interpolating the template too early? What you did implies that there is more than one pass of interpolation?
Even if it fixes the error you were getting at that stage, are you sure that it's not actually passing the literal {{ .Values.region }} as the value for kubeContext? And therefore that you may get another issue further down the road?
I would really double check the actual value that gets rendered for kubeContext, just to make sure it's right.
yea that's exactly what it was doing
this is mind boggling, it seems like what should be a very basic and very obvious feature is simply just not working
So, basically, back to square one!
lol I think it may be time for square 0. I cannot get even the most basic functionality to work and I think it's just a sign of the tool's immaturity
{{ exec "pwd" (list "") }}
testing this in a helm values file suffixed with .gotmpl for helmfile
Error: Failed to render chart: exit status 1: Error: failed to parse /tmp/values596195416: error converting YAML to JSON: yaml: line 9: could not find expected ':'
I'm trying to figure out if there is a way I can get it so that in each helmfile environment I define a kubeContext var, and then under helmDefaults set kubeContext to the value of {{ .Values.kubeContext }}
at the moment I am getting the dreaded
in ./helmfile.yaml: error during ../../bases/helmDefaults.yaml.part.0 parsing: template: stringTemplate:12:25: executing "stringTemplate" at <.Values.kubeContext>: map has no entry for key "kubeContext"
I figure because I have
---
bases:
- "../../bases/environments.yaml"
- "../../bases/helmDefaults.yaml"
I am getting hit by the double render problem
if I change it to
kubeContext: "{{`{{ .Values.kubeContext }}`}}"
It comes out as a literal
Yea, I had literally the same issues above yesterday
it seems basically impossible to actually use environment values in helmfile at the moment
fyi it's the directory nesting
I just figured it out for myself
it's because you're pulling files in another directory. Instead of using environments: to pass those values in, I just did it under each helmfile import
helmfiles:
  - path: helmfiles/preprod_us-phoenix-1.yaml
+   values:
+     - '../../vars/helmfile/realms/preprod.yaml'
+     - '../../vars/helmfile/regions/us-phoenix-1.yaml'
2020-05-21
I can't seem to understand why running a kubectl exec command against a pod in a helmfile hook does not print the output to stdout
2020-05-23
Guys, when can we expect the merge of https://github.com/roboll/helmfile/pull/1172?
This is the GA version of the helm-x integration #673 developed last year. Benefits? You get all the followings without an extra helm plugin: Ability to add ad-hoc chart dependencies/aliases, with…
Someday. I’m overwhelmed by all the user support tasks and reviews these days
If anyone could fork/test/fix my pr, I’m more than happy to review/adopt/merge it
btw, seems that latest (helm-x_0.8.0_linux_amd64.tar.gz) helm-x does not work, not sure why.. as a standalone binary, i am getting:
panic: exec: "": executable file not found in $PATH
goroutine 1 [running]:
github.com/mumoshu/helm-x/pkg/helmx.(*Runner).IsHelm3(0xbe16d90, 0x8)
/home/circleci/project/pkg/helmx/helm3.go:21 +0x148
main.main()
/home/circleci/project/main.go:31 +0x5e
same error for the helm plugin. Then I found this PR, which will be superb if implemented as built-in support for patches and kustomize in helmfile
hey- https://github.com/roboll/helmfile/pull/1172/ is merged and available since 0.118.0.
I didn't re-do thorough testing before merging, so it may or may not work. Your testing and feedback would be much appreciated!
the first issue I can see with strategicMergePatches is that it ignores my chart version and tries to upgrade to latest; when I remove strategicMergePatches, diff works fine
Comparing release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/8296726526650574646/nginx-ingress
default, nginx-ingress, ServiceAccount (v1) has changed:
- # Source: nginx-ingress/templates/serviceaccount.yaml
+ # Source: nginx-ingress/templates/helmx.all.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: nginx-ingress
- chart: nginx-ingress-1.6.17
+ chart: nginx-ingress-1.39.0
heritage: Tiller
release: nginx-ingress
name: nginx-ingress
thx. could you provide me a reproducible example?
---
environments:
  {{ .Environment.Name }}:
    values:
      - ../common/common.yaml
      - ../clusters/{{ .Environment.Name }}/defaults.yaml
      - ../clusters/{{ .Environment.Name }}/charts.yaml
---
bases:
  - ../helmdefaults.yaml
  - ../repos.yaml
---
releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    labels:
      app: nginx-ingress
      tier: network
    version: {{ .Values.charts.nginx.chartVersion }}
    installed: {{ .Values.charts.nginx | getOrNil "enabled" | default false }}
    namespace: {{ .Values.charts.nginx | getOrNil "namespace" | default "kube-system" }}
    values:
      - ../clusters/{{ .Environment.Name }}/values/nginx-ingress.yaml
    strategicMergePatches:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-ingress-controller
        spec:
          template:
            spec:
              dnsConfig:
                nameservers:
                  - 169.254.20.10
                options:
                  - name: attempts
                    value: "3"
do you need all the values or can you try with a single release?
Thanks! I can’t get to work on it until tomorrow but I believe we need to add
opts.ChartVersion = release.Version
to
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
The general rule is that we need to propagate everything necessary from ReleaseSpec read from your helmfile.yaml, to chartify options defined here https://github.com/variantdev/chartify/blob/3f73ddcc6682fddd4ad12eb2c5d6e7caa553df87/chartify.go#L16-L45
so mistakes there can be the cause of nasty bugs like you’ve encountered.
if you have some time to fix it yourself and you’re willing to do so, it would be useful to consider that rule. jfyi.
thanks, I definitely need to get familiar with helmfile's code, but it will take time
no worry! in the meantime, and if you have some time, I'd appreciate it if you could enqueue more potential bugs/issues to me so that I can fix all that tomorrow
ok I'll check more charts and options to see if anything breaks
And also fix the bug that resulted in any such release to ignore the chart version number specified in helmfile.yaml. This is a follow-up for #1172
hey, just tested v0.118.1, but it's still trying to apply the latest chart version
thanks! i think i found another source of issue. will fix it soon
@yuri it should be fixed now. would u mind trying v0.118.2?
@mumoshu thanks! I'm getting something else now, maybe the patch itself is wrong
I0528 15:07:27.950506 98087 patch.go:136] generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/role.yaml
- helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml
patchesStrategicMerge:
- strategicmergepatches/patch.0.yaml
I0528 15:07:27.950527 98087 patch.go:139] generating /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress/helmx.2.patched.yaml
in /Users/yurilevin/tapingo-github/k8s_cluster_helmfiles/releases/nginx-ingress.yaml: [exit status 1
COMMAND:
kustomize build /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress --output /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress/helmx.2.patched.yaml
OUTPUT:
Error: no matches for OriginalId apps_v1_Deployment|~X|nginx-ingress-controller; no matches for CurrentId apps_v1_Deployment|~X|nginx-ingress-controller; failed to find unique target for patch apps_v1_Deployment|nginx-ingress-controller]
@yuri maybe you’re missing metadata.namespace
in your patch
mmm just added it, but same error:
strategicMergePatches:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: default
    spec:
      template:
        spec:
          dnsConfig:
            nameservers:
              - 169.254.20.10
            options:
              - name: attempts
                value: "3"
if it doesn't work, would you mind running helmfile template | grep -C 20 nginx-ingress-controller and sharing its result so that I can suggest how you should write the patch
that error message does indicate that any part of the below is incorrect in your patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
I0528 15:14:27.452485 99200 chartify.go:236] using requirements.yaml:
dependencies:
I0528 15:14:32.602784 99200 replace.go:45] options: {false [/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/values189422353] [] 1.6.17}
I0528 15:14:32.673409 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/clusterrole.yaml
I0528 15:14:32.673559 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/clusterrolebinding.yaml
I0528 15:14:32.673658 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-configmap.yaml
I0528 15:14:32.673761 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-daemonset.yaml
I0528 15:14:32.673830 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-deployment.yaml
I0528 15:14:32.673889 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-hpa.yaml
I0528 15:14:32.673948 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-metrics-service.yaml
I0528 15:14:32.674013 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-poddisruptionbudget.yaml
I0528 15:14:32.674092 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-service.yaml
I0528 15:14:32.674169 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-servicemonitor.yaml
I0528 15:14:32.674237 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-stats-service.yaml
I0528 15:14:32.674303 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-deployment.yaml
I0528 15:14:32.674362 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-poddisruptionbudget.yaml
I0528 15:14:32.674427 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-service.yaml
I0528 15:14:32.674488 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/headers-configmap.yaml
I0528 15:14:32.674568 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/podsecuritypolicy.yaml
I0528 15:14:32.674647 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/role.yaml
I0528 15:14:32.674727 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/rolebinding.yaml
I0528 15:14:32.674820 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/serviceaccount.yaml
I0528 15:14:32.674887 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/tcp-configmap.yaml
I0528 15:14:32.674951 99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/udp-configmap.yaml
I0528 15:14:32.675125 99200 patch.go:37] patching files: [/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/role.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml]
I0528 15:14:32.675675 99200 patch.go:136] generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/role.yaml
- helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml
patchesStrategicMerge:
- strategicmergepatches/patch.0.yaml
I0528 15:14:32.675694 99200 patch.go:139] generating /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.2.patched.yaml
in /Users/yuri/k8s_cluster_helmfiles/releases/nginx-ingress.yaml: [exit status 1
COMMAND:
kustomize build /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress --output /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.2.patched.yaml
OUTPUT:
Error: no matches for OriginalId apps_v1_Deployment|default|nginx-ingress-controller; no matches for CurrentId apps_v1_Deployment|default|nginx-ingress-controller; failed to find unique target for patch apps_v1_Deployment|nginx-ingress-controller]
ah sorry, but could you remove strategicMergePatches: from your helmfile.yaml before running that helmfile template command to obtain the result
Building dependency release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress
No requirements found in /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress/charts.
Templating release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: default
---
# Source: nginx-ingress/templates/controller-metrics-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.6.17
component: "controller"
heritage: Tiller
release: nginx-ingress
name: nginx-ingress-controller-metrics
spec:
clusterIP: ""
ports:
- name: metrics
port: 9913
targetPort: metrics
selector:
app: nginx-ingress
component: "controller"
release: nginx-ingress
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
external-dns.alpha.kubernetes.io/hostname: "...."
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
--
--
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
external-dns.alpha.kubernetes.io/hostname: "..."
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "..."
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
labels:
app: nginx-ingress
chart: nginx-ingress-1.6.17
component: "controller"
heritage: Tiller
release: nginx-ingress
name: nginx-ingress-controller
spec:
clusterIP: ""
externalTrafficPolicy: "Local"
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: "controller"
release: nginx-ingress
type: "LoadBalancer"
---
# Source: nginx-ingress/templates/controller-stats-service.yaml
--
--
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: "controller"
release: nginx-ingress
type: "LoadBalancer"
---
# Source: nginx-ingress/templates/controller-stats-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.6.17
component: "controller"
heritage: Tiller
release: nginx-ingress
name: nginx-ingress-controller-stats
spec:
clusterIP: ""
ports:
- name: stats
port: 18080
targetPort: stats
selector:
app: nginx-ingress
component: "controller"
release: nginx-ingress
type: "ClusterIP"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
--
--
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: "default-backend"
release: nginx-ingress
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: nginx-ingress
    spec:
      dnsPolicy: ClusterFirst
      containers:
--
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: nginx-ingress
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: nginx-ingress-controller
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1"
          imagePullPolicy: "IfNotPresent"
          args:
--
          imagePullPolicy: "IfNotPresent"
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
--
            - --default-backend-service=default/nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=default/nginx-ingress-controller
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
--
--
              containerPort: 8080
              protocol: TCP
          resources:
            {}
      serviceAccountName: nginx-ingress
      terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/controller-hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
--
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 5
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 50
---
# Source: nginx-ingress/templates/controller-configmap.yaml
---
# Source: nginx-ingress/templates/controller-daemonset.yaml
@yuri thx! probably this one helps?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
you should use the below patch then
strategicMergePatches:
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
    spec:
      template:
        spec:
          dnsConfig:
            nameservers:
              - 169.254.20.10
            options:
              - name: attempts
                value: "3"
apparently apiVersion was wrong
strange the chart is already installed on the cluster with apps/v1
i thought recent k8s versions automatically translated deprecated apiVersion to the newest variant
not sure, anyway with the beta API it works! Perhaps someone changed the deployment manually on the cluster; I will have to check it on a fresh cluster. Thank you very much for your support
awesome! thanks a lot for testing
@mumoshu ping ^
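A minimal sketch of where such a patch sits in a helmfile.yaml release entry (the release name and chart below are placeholders, not taken from this thread):
releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    strategicMergePatches:
      - apiVersion: extensions/v1beta1
        kind: Deployment
        metadata:
          name: nginx-ingress-controller
        spec:
          template:
            spec:
              dnsConfig:
                nameservers:
                  - 169.254.20.10
The apiVersion/kind/name of the patch must match the object as it is rendered by the chart, which is why the extensions/v1beta1 vs apps/v1 mismatch above mattered.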
2020-05-24
2020-05-26
2020-05-27
Has anyone else had trouble with values file includes in helmfile requiring different relative paths depending on the directory you’re executing helmfile from?
I think I have some strange case if I understand you correctly. I’ll try to find the example. It’s not in the Slack anymore
Ok, I found it in SweetOps archive. My problem was the following:
I’ve just noticed a little issue with paths. When I run a helmfile command against some custom path and this helmfile contains “helmfiles:”, I have to make the helmfiles’ values paths relative to the cli, i.e.
I run helmfile -f environments/dev/helmfile.yaml
inside this helmfile there is
helmfiles:
  - path: git::https://my_user:{{ requiredEnv "REPO_TOKEN" }}@my_domain.com/my_repo.git@deployment/helmfile.yaml?ref={{ env "INFRA_VERSION" }}
    values:
      - ../../values.yaml
The folder structure is the following
├── environments
│   └── dev
│       ├── helmfile.yaml
│       ├── values.yaml
Is this an expected behaviour? Docs say the path in the manifest should be relative to this manifest.
yea, it sucks
there is an explanation i found
something I ran across the other day, ‘jxl’ (jenkins-x labs) includes enhancements for supporting helm 3 and helmfiles for spinning up apps : https://jenkins-x.io/docs/labs/enhancements/proposals/2/readme/
Boot Apps with Helm 3 and Helmfile
Hi there, I’m curious if there is a way (and RTFM is a perfect answer) to get helmfile to print out just the rendered/merged values files from a specified environment. From what I understand it does all of this before going into executing on the selected operation for the releases defined. I’d like to run some preflight/sanity checks on the values that are about to be applied to releases.
Take a look at helmfile template. It will render the templates but not apply anything.
This is basically exactly what I need, but instead of the rendered kubernetes manifests, I’m looking for the rendered values before it goes to render the manifests.
Maybe helmfile --log-level debug template will include what you’re looking for? I don’t know of a way to print only the values and not the rendered template
Ah, that’s a bummer.
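One hedged workaround (not a built-in feature, and it only surfaces the merged environment values, not the final per-release values): template .Values into a dummy incubator/raw release as a ConfigMap, so helmfile template prints them. The release name and indentation below are illustrative:
releases:
  - name: values-debug
    chart: incubator/raw
    values:
      - resources:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: values-debug
            data:
              values.yaml: |
{{ toYaml .Values | indent 16 }}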
2020-05-28
How do I use release secrets if secrets have to be in environment values
2020-05-29
Hello - is anyone using the new jsonPatches feature?
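For context, a hedged sketch of what a release-level jsonPatches entry can look like, based on the kustomize-style JSON 6902 patch schema (the release, target, and patch below are made-up placeholders):
releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    jsonPatches:
      - target:
          version: v1
          kind: Service
          name: nginx-ingress-controller
        patch:
          - op: replace
            path: /spec/type
            value: NodePort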
Is there a way in helmfile to stop it from upgrading Jobs and StatefulSets? Currently I have labels in my helm charts, but when I use helmfile to upgrade the deployed helm charts it fails because Jobs and StatefulSets cannot be updated (i.e. cannot add labels in my case).
@mumoshu Is there some way for this?
Unfortunately no
And I don’t get your situation…? If you have certain resources that you can’t update, you should not do that.
I guess you can instead create a brand new release with another name with additional labels
@mumoshu I have a use case in which an env var in the job, named method, changes when we try to upgrade the existing helm chart… is there a way where we can force it to delete the existing job and create a new job?
I have tried --force=true but it is not working
well
shouldn’t helm just delete the old job on helm upgrade --install in that case?
would u mind sharing the error message you’re seeing?
nope, it does not do so. Anyway, we are using helmfile apply to upgrade all the helm charts
Sure…
in ./helmfile.yaml: in .helmfiles[2]: in environments/multinode/10-helmfile.yaml: failed processing release mysql: helm exited with status 1:
Error: UPGRADE FAILED: failed to replace object: Job.batch "mysql-load" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"com.fico.dmp/owner":"dmp-installer", "com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql"}: `selector` does not match template `labels`, spec.template.metadata.labels: Invalid value: map[string]string{"com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql", "com.fico.dmp/owner":"dmp-installer"}: `selector` does not match template `labels`, spec.selector: Invalid value: "null": field is immutable, spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"mysql-load", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql", "com.fico.dmp/owner":"dmp-installer"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"mysqlload", Image:"fico-dmp-docker-development.jfrog.io/fico/init-db-dev:3.5.5p_c6ad143ff8_2020-06-01_222556", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource{core.EnvFromSource{Prefix:"", ConfigMapRef:(*core.ConfigMapEnvSource)(0xc421c55f20), SecretRef:(*core.SecretEnvSource)(nil)}}, Env:[]core.EnvVar{core.EnvVar{Name:"MYSQL_PORT_3306_TCP_ADDR", Value:"mysql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_PORT_3306_TCP_PORT", Value:"0", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_ENV_MYSQL_DB_USER", Value:"DMP_ADMIN", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_ENV_MYSQL_DB_PASSWORD", Value:"Dmp1234567890!", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"hostname", Value:"mysql.drcluster.onprem.dmsuitecloud.com", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"port", Value:"3306", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"hostPath", Value:"mysql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"method", Value:"applyDeltaBasedOnCommaSeperatedSqlList", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releaseDeployed", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releaseToBeDeployed", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"deltaFilesList", Value:"create_dmp_mgr_ddl_v3.6.4-HF01_DMPR-43730.sql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releasesList", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"installationType", Value:"install", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"ADM_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55f40)}, core.EnvVar{Name:"DMP_SERVICE_PROVIDER_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55f60)}, core.EnvVar{Name:"ODS_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55fc0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), 
ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc4451f2778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"dedicated":"dmp-system"}, ServiceAccountName:"dmpsvcacct", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc4598600e0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(0xc421c55fe0), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration{core.Toleration{Key:"dedicated", Operator:"Equal", Value:"dmp-system", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil)}}: field is immutable]
Installation Failed on: Fri Jun 12 17:53:51 IST 2020
hmm? so are you trying to change the pod selector of the job, right?
i thought it was an immutable field indeed
Yes, along with method in the env: method", Value:"applyDeltaBasedOnCommaSeperatedSqlList"
you’d need to somehow change the job name, so that helm can create another job for the new selector
Yes, that I can skip; the env should be updated and a new job should be created as part of the upgrade
if your chart doesn’t support changing the job’s metadata.name, all you can do is create a brand new release for the new job with a different pod selector
Yes, but I don’t want that… I used to put a timestamp in so that a new job is created
but I want the job to be created only if there is some change in the job, not every time I run helmfile
and your job’s pod selector can change at any time?
then your job’s name should include a hash value calculated from the content of your job’s pod selector, at least
Yes, it depends on the release. For example, if we are currently on 3.5.0p then it will remain the same, but if we give the customer our new release 3.5.5p then it will change
then your job's name should include a hash value calculated from the content of your job's pod selector, at least
How can i do that can you please elaborate
ok. the only way would be to improve your chart so that you can achieve https://sweetops.slack.com/archives/CE5NGCB9Q/p1591964910406700?thread_ts=1590774931.314100&cid=CE5NGCB9Q
then your job’s name should include a hash value calculated from the content of your job’s pod selector, at least
ok, I will give it a try. Will it create a hash for the complete pod yaml or just the selector?
not sure. that depends on your requirement.
include all the fields your customer would like to change
Sure, will give it a try. Thanks a lot for your help… it was really helpful
i guess you would find sha256sum handy
https://github.com/helm/charts/search?q=sha256sum&unscoped_q=sha256sum
Cool it looks Great…It might solve the problem
your job template should look like:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "yourchart.fullname" . }}-{{ toYaml .Values.whatever.your.customer.change | sha256sum | quote }}
i hope it solves your issue. good luck!
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "yourchart.fullname" . }}-{{ toYaml .Values.whatever.your.customer.change | sha256sum | quote }}
what is quote in this?
it surrounds the argument with "
ok Thanks i will read about it…
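A hedged refinement of that snippet: quote is not needed (or valid) inside a resource name, and sha256sum yields 64 hex characters, which can push a Job name past the 63-character limit, so the digest is usually truncated. .Values.jobConfig below is a made-up key standing in for whatever fields the customer changes:
apiVersion: batch/v1
kind: Job
metadata:
  # a new hash suffix means a new Job name, so helm creates a fresh Job instead of
  # trying to mutate immutable fields on the old one
  name: {{ template "yourchart.fullname" . }}-{{ toYaml .Values.jobConfig | sha256sum | trunc 8 }}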
Hi. I see that if I use --debug I can see where helmfile generates the actual values.yaml file that helm uses to do the install/upgrade. I can even go view them on disk, which is really nice. Is there any way to generate a values file for a particular release? I tried a lot of things, and in particular helmfile build looked promising, but it doesn’t seem to have a lot of options, and only generates what the helmfile.yaml looks like (which is super helpful, don’t get me wrong), not the values for each release (which I template heavily).
hey! unfortunately it isn’t possible today.
it can be implemented. what command would you like for that?
For example, it can be helmfile -f helmfile.yaml export-values RELEASE_NAME ./path/to/dir. Not sure if you like this tho
Hm. Yeah hadn’t really thought about the UI/UX of this.
I would just dump the yaml to stdout, personally. Let me deal with where they go.
helmfile export-values RELEASE_NAME would be all I need. I can always tack on > destination.yaml. At least as an MVP of that command, I think that seems like a good start.
Thanks! That looks great
Would u mind submitting a github issue/feature request for that, so that i wont forget about it?
@mumoshu btw, helmfile is a killer utility. I have been absolutely in love with using it lately.
Keep up the awesome work
Thanks for your support
2020-05-30
Can I use helmfile to apply straight yaml from a url as if it were a chart instead of having to transpose the thing into a raw chart?
Yep, we have done that for configmaps
Use exec curl
ah, clever, thanks
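To make the exec curl idea concrete, a minimal sketch, assuming the incubator/raw chart and a hypothetical manifest URL; exec runs curl while helmfile renders helmfile.yaml, and the fetched YAML is passed to the raw chart as a template string:
releases:
  - name: external-configmap
    chart: incubator/raw
    values:
      - templates:
          - |
{{ exec "curl" (list "-sSL" "https://example.com/configmap.yaml") | indent 12 }}
I believe the raw chart runs each templates entry through tpl, so any {{ }} in the fetched file would be interpreted; for plain manifests that is usually a non-issue.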
2020-05-31
I’m trying to pass all helmfile release labels as helm values under helmfileLabels. Something like
setTemplate:
{{ range $key,$value := .Release.Labels }}
  - name: {{ printf " helmfileLabels.%s" $key}}
    value: {{$value}}
{{ end }}
but, as expected, .Release.Labels is evaluated later than the range loop.
Stuff like
setTemplate:
{{`{{ range $key,$value := .Release.Labels }}`}}
  - name: {{`{{ printf " helmfileLabels.%s" $key}}`}}
    value: {{`{{$value}}`}}
{{`{{ end }}`}}
doesn’t pass either. Any ideas on how to implement this?
Maybe this one would work?
setTemplate: [
{{`{{ range $i,$key := (keys .Release.Labels) }}{{ if gt $i > 0 }},{{end}}{{ $value := (.Releases.Labels | get $key) }}{"name":{{ ... }}, "value": {{...}}}{{ end }}`}}
]
I’ve started from simple
setTemplate: [{{`{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}`}}]
and I cannot get it to work. It renders to line #3
3: setTemplate: [{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}]
and the error is: failed to read helmfile.yaml: reading document at index 1: yaml: line 3: did not find expected ',' or ']'
as after the first {{ }} removal we don’t get a valid array yet
it looks like it needs 3 passes instead of 2 to get it done
that’s why I’ve added {{ if gt $i > 0 }},{{end}} in my example?
Can’t we use
{{range pipeline}} T1 {{else}} T0 {{end}}
The value of the pipeline must be an array, slice, map, or channel.
If the value of the pipeline has length zero, dot is unaffected and
T0 is executed; otherwise, dot is set to the successive elements
of the array, slice, or map and T1 is executed.
to achieve the same w/o if?
Why do you think so? According to the error message you’ve shared, it seems like you do need to remove the trailing redundant , after the last array element (or the preceding redundant "," before the first element)
and i don’t know how range helps that
I mean range/else/end instead of if/end inside range
And I think the error message is not about an extra comma, it says it cannot find a comma or ]
ah thanks. my fault
I cannot get range working inside string quote {{`{{ range }}`}}
well, where is setTemplate written in? helmfile.yaml?
would u mind giving me more lines around setTemplate? or perhaps the whole helmfile.yaml
values:
  - i0: v0
    i1: v1
---
templates:
  default: &default
    chart: stable/nginx-ingress
    setTemplate: [{{`{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}`}}]
releases:
  - <<: *default
    name: test
    labels:
      controller.image.repository: "image0"
      controller.image.tag: "1.0"
      label0: value0
      label1: value1
just a PoC, I wanna loop over Release.Labels
ah ok i got it. you can’t use setTemplate for that
i thought only value is rendered as a template under setTemplate
at the same time I’m able to use single labels values like
valuesTemplate:
  - helmfileLabels:
      app: "{{`{{ .Release.Labels.app }}`}}"
so probably it should look like:
valuesTemplate:
  - helmfileLabels: {{`{{ range $i,$key := (keys .Release.Labels) }}{{ if gt $i > 0 }},{{end}}{{ $value := (.Releases.Labels | get $key) }}{ {{ $key | quote }} : {{ $value | quote }} }{{ end }}`}}
will try it
ah well you want a json object here, right? my above example won’t work as it renders to a single string
well, I need a map like
- helmfileLabels:
    label0: value0
    label1: value1
gotcha
seems like it’s impossible
ok, thanks for your time
a workaround would be
{{ $app1labels := (dict "key1" "val1" "key2" "val2") }}
{{ $app2labels := ... }}
releases:
- name: app1
  chart: ./charts
  values:
  - helmfileLabels:
      {{ toYaml $app1labels | nindent 6 }}
  labels:
    {{ toYaml $app1labels | nindent 6 }}
- name: app2
  chart: ./charts
  values:
  - helmfileLabels:
      {{ toYaml $app2labels | nindent 6 }}
  labels:
    {{ toYaml $app2labels | nindent 6 }}
release-specific labels are ignored in this way
yeah, so you need to define template variables like $labels for each release
yes, it becomes a little bit unclear
unfortunately, yes
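A hedged variant of that workaround that keeps each release’s labels in one place: drive both labels: and helmfileLabels: from a single map (release names and label values here are made up):
{{ $releaseLabels := dict "app1" (dict "label0" "value0") "app2" (dict "label0" "valueA" "label1" "valueB") }}
releases:
{{ range $name, $labels := $releaseLabels }}
- name: {{ $name }}
  chart: ./charts
  # both the release selector labels and the chart values come from the same dict entry
  labels:
    {{ toYaml $labels | nindent 4 }}
  values:
  - helmfileLabels:
      {{ toYaml $labels | nindent 6 }}
{{ end }}
The blank and whitespace-only lines the range/end actions leave behind are harmless to the YAML parser.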
passing hardcoded label names in the template looks better with 2-4 labels, like
templates:
  eth: &eth
    ....
    labels:
      chart: my-chart-name
    valuesTemplate:
      - type: "{{`{{ .Release.Labels.type }}`}}"
      - chain: "{{`{{ .Release.Labels.chain }}`}}"
releases:
  - <<: *eth
    name: eth-blocks
    labels:
      chain: ethereum
      type: blocks
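For the eth-blocks release above, those two valuesTemplate entries would render to roughly:
- type: "blocks"
- chain: "ethereum"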