#helmfile (2022-11)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2022-11-02
Can I have a dedicated service account to run a helm hook with high RBAC permissions, while the main containers don't have privileged RBAC rules?
@bhavin vyas could you give me an example?
Or I need more info.
2022-11-07
Hey everyone. Quick question for anyone who might have a possible solution. Trying to clean up the env-vars that get used across apps, as there's quite a bit of repetition. Wanted to find out if there's a way to stack env-vars so we can break out common vars into a shared-values folder of some kind. Essentially the repo structure is as follows:
├── environments/
│   ├── env-1/
│   │   └── backend/
│   │       └── app/
│   │           ├── values.yaml
│   │           ├── secrets.enc
│   │           └── helmfile.yaml
│   └── env-2/
│       └── backend/
│           └── app/
│               ├── values.yaml
│               ├── secrets.enc
│               └── helmfile.yaml
├── tenants/
│   └── backend/
│       └── helmfile.yaml
└── lib/
    └── app/
        └── helm-chart
Each environment's helmfile.yaml contains:
helmfiles:
  - path: 'path to tenants helmfile'
    values:
      - environmentName: env-1
The tenants helmfile looks like:
environments:
  default:
---
missingFileHandler: Error
helmDefaults:
  verify: false
  atomic: true
  wait: true
  cleanupOnFail: true
  skipDeps: true
  recreatePods: false
  force: false
  createNamespace: true # requires Helm v3+
releases:
  - name: app
    namespace: backend
    chart: "path to lib helm chart"
    values:
      - ../../environments/{{ .Values.environmentName }}/backend/app/values.yaml
    secrets:
      - ../../environments/{{ .Values.environmentName }}/backend/app/secret.enc
Using this alongside ArgoCD.
With multiple apps that share the same values, constantly updating a commonly used env-var across every app that references it gets tedious. Let's say all apps have the same DB strings and we've rotated the DB endpoint: I'd have to update it across every app. My idea was to break this out into a shared-values folder that sits at the same level as the tenants folder. What isn't clear is how I can leverage that in the helmfile.yaml. I suppose it would look something like:
releases:
  - name: app
    namespace: backend
    chart: "path to lib helm chart"
    values:
      - ../../environments/{{ .Values.environmentName }}/backend/app/values.yaml
      - ../shared-values/databases.yaml
    secrets:
      - ../../environments/{{ .Values.environmentName }}/backend/app/secret.enc
Curious if anyone has any experience with something like this?
In one project we use a template with shared Redis & RabbitMQ creds:
secrets:
  - secrets/redis.yaml
  - secrets/rabbitmq.yaml
  - secrets/{{`{{ .Release.Labels.app }}`}}.yaml
  - secrets/{{`{{ .Release.Name }}`}}.yaml
valuesTemplate:
  - ../common/affinity.yaml.gotmpl
  - ../common/values/{{`{{ .Release.Labels.app }}`}}.yaml.gotmpl
  - values/{{`{{ .Release.Labels.app }}`}}.yaml.gotmpl
  - values/{{`{{ .Release.Name }}`}}.yaml.gotmpl
I wonder if it's the way I'm using our Go templating and the deployment.yaml templating that's stopping us from being able to stack multiple envVar fields.
The base values.yaml has
---
deployment:
  image: image info
  envVars:
    - name: ENV_VAR name
      value: some value
And then my environment-specific values.yaml SHOULD have just the values I'd like to override from the base helm chart, correct? So far, the only thing that's worked is: if I'm doing an environment override on a single env-var, I have to copy the ENTIRE env-var block from my base values.yaml, paste the whole thing into my environment-specific values.yaml, and override the one value there…
I thought the workflow would have been just adding the single override in the environment-specific values.yaml and leaving the base values.yaml as is. Shouldn't helm converge/layer the values? Isn't helmfile just a wrapper around helm?
Well, you want a deep array merge (envVars is an array) between layers. I suppose the easiest way would be to use a map instead of an array; then there will be no issues with the values merging between layers.
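A minimal sketch of that map-based layering (the key names below are made up for illustration, not from this thread): the base values.yaml holds envVars as a map, the environment-specific file lists only the keys it overrides, and the chart's deployment template ranges over the merged map.
Base values.yaml:
deployment:
  envVars:
    DB_HOST: db.internal
    LOG_LEVEL: info
Environment-specific values.yaml (only the override):
deployment:
  envVars:
    LOG_LEVEL: debug
And in the chart's deployment template:
        env:
        {{- range $name, $value := .Values.deployment.envVars }}
          - name: {{ $name }}
            value: {{ $value | quote }}
        {{- end }}
Because maps merge key by key across layers, the base entries survive and only LOG_LEVEL changes, which is exactly the behavior you don't get with arrays.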
2022-11-08
2022-11-14
Hello team,
I am using helmfile and need some more information on how to reference Azure Key Vault. I looked at https://github.com/variantdev/vals#azure-key-vault and my helmfile looks like this:
repositories:
  - name: artifactory
    url: blah
    username: "svc_govna_tools"
    password: "ref+azurekeyvault://blah/blah"
releases:
  # Published chart example
  - name: artifactory
    namespace: system
    labels:
      app: operator
    chart: blah
    version: 0.5.0
    values:
      - values.yaml
    recreatePods: true
    force: true
But it is not passing in the key from AKV. Any help would be appreciated
Hello, what value do you get instead of the real password?
@Ryan Shelby please show the log by adding --debug.
Looks like I had to add {{ fetchSecretValue "ref+azurekeyvault://blah/blah" }}, which now gets me to a new error about having to use the Azure CLI. Tested it locally: logging into az and setting the subscription will fetch the secret. But now I need to figure out how to do az login in helmfile.
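For what it's worth, a sketch of where that call typically ends up (the vault and secret names here are hypothetical): vals resolves the azurekeyvault ref through the standard Azure credential chain, so an az login done before invoking helmfile (or, in CI, the usual AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET service-principal variables should also work) is what authenticates it; there is no az login step inside helmfile itself.
repositories:
  - name: artifactory
    url: blah
    username: "svc_govna_tools"
    # hypothetical vault/secret names, resolved at render time by fetchSecretValue
    password: {{ fetchSecretValue "ref+azurekeyvault://my-vault/artifactory-password" | quote }}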
2022-11-15
2022-11-16
Oh I didn't realize there was a #helmfile channel here, useful
Can anyone explain why Helmfile might be used over just using helm subcharts, please?
@Herman Smith Declaratively deploy your Kubernetes manifests, Kustomize configs, and Charts as Helm releases in one shot.
@Herman Smith if you have more issues, please ping me.
Thanks
2022-11-17
https://github.com/helmfile/helmfile/releases/tag/v0.148.1 v0.148.1 released. enjoy it. looking for feedback.
2022-11-18
2022-11-21
Hi folks,
For some reason waitForJobs: false on a release seems to have no effect - helmfile (0.145.3) still waits for the cronjob to run. I'm setting it like so:
- name: foo
  namespace: foo
  chart: repo/foo
  version: 1.1.0
  waitForJobs: false
Is this not correct, or am I misunderstanding this option? The defaults for the helmfile are:
helmDefaults:
  atomic: true
  cleanupOnFail: true
  historyMax: 30
  timeout: 1200
  wait: true
The helm binary is 3.5.3.
Edit: solved.
This was due to the use of a PVC whose storage class was WaitForFirstConsumer, which is an open Helm issue. So nothing to do with waitForJobs.
@Ilya Shaisultanov please try to use the latest helmfile.
2022-11-23
Hey :wave:
I'm new to Helmfile (and Helm) and need some help.
I want to have a conditional part of some configuration for a release without using Environment Values.
Is it possible to have an {{ if … }} expression based on an environment variable?
I tried something like this, but the if block doesn't seem to evaluate the env var at all:
releases:
  - name: ...
    ...
    values:
      ...
      - config: |
          ...
          # Redis
          {{ if env "INSTALL_THING" | default false }}
          thing.enabled = on
          thing.host = ...
          {{ end }}
SOLVED
This works:
{{ if eq (env "INSTALL_THING") "true" }}
...
{{ end }}
I think the reference to env is not right. The following is an example of referencing:
{{- if .Values.ingress.enabled -}}
...
apiVersion: extensions/v1beta1
kind: Ingress
...
{{- end }}
This is a bit of a weird use case, but I'd like to avoid using Environment Values.
In this case, INSTALL_REDIS is an environment variable that's not really dependent on the "environment" (like prod, test, stage, etc), but more like a toggle depending on user input.
but how does the user switch that toggle? isn’t he changing a variable value?
Yes. More specifically - based on user input, an app is setting the environment variable to run helmfile with.
Well then we'll continue looking at using the environment variable, unless you want to change your app to store the value elsewhere and use vals in helm.
For the above, you'd need in your case something like:
{{ .Values.envName }}
Thanks for the hints Denis.
The following works for me:
{{ if eq (env "INSTALL_REDIS") "true" }}
...
{{ end }}
The environment variable is a string (duh … :face_palm:), not a boolean, so that's why the if expression wasn't evaluated as expected.
What am I doing wrong? I would like to add more volumeClaimTemplates using helmfile. Influx v1 already has one, and I would like to add a second one. Using jsonpatch:
target:
  group: apps
  version: v1
  kind: StatefulSet
  name: influxdb
patch:
  - op: add
    path: /spec/volumeClaimTemplates/-
    value:
      metadata:
        annotations: null
        name: v2db
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "6Gi"
        storageClassName: "gp2-delete-encrypted"
  - op: add
    path: /spec/volumeClaimTemplates/-
    value:
      metadata:
        annotations: null
        name: vv2config
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "1Gi"
        storageClassName: "gp2-delete-encrypted"
with strategicmerge:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ regexFind ".*" .Release.Name }}
spec:
  volumeClaimTemplates:
    - metadata:
        annotations: null
        name: v2db
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "6Gi"
        storageClassName: "gp2-delete-encrypted"
    - metadata:
        annotations: null
        name: vv2config
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "6Gi"
        storageClassName: "gp2-delete-encrypted"
    - metadata:
        annotations: null
        name: influx-data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
        storageClassName: "gp2-delete-encrypted"
When I run helmfile template I see the additional PVCs, but when I run apply I get nothing to change…
any error?
@Balazs Varga https://github.com/kubernetes/kubernetes/issues/69041
/kind feature
/sig storage
What happened:
Currently, you get this if you want to update an existing statefulSet with a new volumeClaimTemplate:
Error: UPGRADE FAILED: StatefulSet.apps "my-app" is invalid: spec: Forbidden: updates to statefulSet spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.
What you expected to happen:
Allow the creation of volumeClaimTemplates in a statefulSet.
How to reproduce it (as minimally and precisely as possible):
For example:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
+   - metadata:
+       name: data2
+     spec:
+       accessModes:
+         - ReadWriteOnce
+       resources:
+         requests:
+           storage: 100Gi
Anything else we need to know?:
Some background here
Environment:
• Kubernetes version (use kubectl version): all
• Cloud provider or hardware configuration: all
• OS (e.g. from /etc/os-release): all
• Kernel (e.g. uname -a): all
• Install tools:
• Others:
Thanks
still not solved though (after more than a year)
Yeah, just scrolled the comments. Anyway, I solved it with separate PVC files. :)
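For anyone reading this later, the separate-PVC workaround looks roughly like this (names and sizes are illustrative, not the exact manifests from this thread): create a standalone PersistentVolumeClaim and mount it as a regular volume in the pod template, instead of appending to the StatefulSet's immutable volumeClaimTemplates.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v2db
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-delete-encrypted
  resources:
    requests:
      storage: 6Gi
Then reference it from the pod template (which is mutable), e.g. via a strategic-merge patch, plus a matching volumeMounts entry in the container:
spec:
  template:
    spec:
      volumes:
        - name: v2db
          persistentVolumeClaim:
            claimName: v2db
The trade-off is that a plain PVC is shared by all replicas rather than templated per pod, which is fine for a single-replica InfluxDB StatefulSet but not for scaled-out ones.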
got it. I think we should post a PR for k8s. @Denis @Balazs Varga
I actually found that the issue is in the validation function here. But I'm not smart enough to understand why they decided to block everything. This is the first PR that decided to block all changes, and after that only certain properties were allowed (enabled) one by one. The PVC part, as you can see in the issue, has been sitting idle for more than a year. One of the commenters said there are a lot of edge cases that require a lot of thought and syncs with the storage SIG.
yeah
Did you try to use Kruise instead of a standard StatefulSet? We've been using it for around a year with CloneSet.
It seems that CloneSet supports multiple PVCs via volumeClaimTemplates.
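For reference, a minimal OpenKruise CloneSet sketch (all names, the image tag, and sizes below are illustrative) showing volumeClaimTemplates declared much like on a StatefulSet:
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          image: influxdb:1.8   # illustrative image tag
          volumeMounts:
            - name: v2db
              mountPath: /var/lib/influxdb
  volumeClaimTemplates:
    - metadata:
        name: v2db
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gp2-delete-encrypted
        resources:
          requests:
            storage: 6Gi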