#helmfile (2020-12)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-12-01
Question, I know there’s a way to mount a configMap as a volume in a helm chart and overwrite a specific file on the docker image. Is there a way to append to a file instead?
Okay found the answer, gotta update the entrypoint to do the append from the configMap volume before running the main command
You could also probably use an init-container
to read the config setting and append it to a file.
TIL about init containers, thanks Erik!
Do init containers share a filesystem with the main container or does that interaction still need to happen via a volume?
(the entrypoint thing turned out to be difficult because of permissions issues with the app user in the main container)
does that interaction still need to happen via a volume?
yes, kind of: they share a volume, typically an emptyDir
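A minimal sketch of that pattern, assuming a hypothetical app image my-app:latest whose /etc/app/app.conf needs lines appended from a ConfigMap named extra-config:
apiVersion: v1
kind: Pod
metadata:
  name: append-config-example
spec:
  volumes:
    - name: shared-config          # emptyDir is visible to both containers
      emptyDir: {}
    - name: extra-config
      configMap:
        name: extra-config         # hypothetical ConfigMap holding the lines to append
  initContainers:
    - name: build-config
      image: my-app:latest         # reuse the app image so the original file is present
      command: ["sh", "-c"]
      args:
        # copy the image's original file, then append the ConfigMap content
        - cp /etc/app/app.conf /work/app.conf && cat /extra/append.conf >> /work/app.conf
      volumeMounts:
        - name: shared-config
          mountPath: /work
        - name: extra-config
          mountPath: /extra
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: shared-config
          mountPath: /etc/app      # the merged file shadows the image's original directory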
2020-12-02
Hi all,
why does helmfile apply
not remove releases that are removed from helmfile.yaml?
Is there a way to implement such behaviour?
PS: i know about installed: false
, but I’d like to make releases be removed just by removing them from helmfile.yaml
We got used to the current behaviour :) And for us it would once have been a disaster if helmfile had had this functionality by default.
You will find some thoughts on this here (like the absence of state and others): https://github.com/roboll/helmfile/issues/194
This idea comes from the kubectl apply -f --prune where kubectl deletes any resources that aren't referenced. Helmfile should have an option for the sync command that checks what helm charts ar…
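For reference, the supported pattern today is the installed flag mentioned above - a minimal sketch (the release name and chart are placeholders):
releases:
  - name: some-app              # placeholder release
    chart: stable/some-app      # placeholder chart
    installed: false            # helmfile apply / sync now deletes the release instead of installing it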
2020-12-03
2020-12-04
Hi, all Does helmfile support OCI ? If yes - where one could read about it https://helm.sh/docs/topics/registries/
Describes how to use OCI for Chart distribution.
Hey folks. How can I get Helmfile to recreate pods in a chart if and only if there’s a new release of that chart? Note I want pods to be recreated if the chart values changed but the chart is the same, and I don’t want to recreate pods on every single helmfile apply
.
That means I can’t rely on the hash of the configmaps/secrets (those won’t change if I’m just changing values without changing templates), and I don’t want to include an annotation that generates a random value (because that’d cause a diff according to helm diff
and helmfile will do a release).
This would be dead simple if I could just use .Release.Version
in a pod annotation but that doesn’t seem to exist
Well, you may pass .Release.Version
to chart values and then to the pod controller's template, as long as you control the chart
Sorry, does .Release.Version
exist? Or do I have to set it explicitly (for example, in the CLI)?
I’ve just tried using it without actually setting it, and the value isn’t set. Maybe I’m doing something wrong.
.Chart.Version
But .Chart.Version
won’t change if I’m just changing the values passed to the chart
values should be used inside templates or these values are useless
chart values should affect k8s state
Indeed, but if I’m changing a value used in a ConfigMap or a Secret, I want pods using that CM/Secret to be recreated
So that they can pick up the new values
this is exactly the documented Helm 3 case: a hash of the configmap/secret in the pod template annotations
Covers some of the tips and tricks Helm chart developers have learned while building production-quality charts.
Yes, but unless I’m doing something wrong, that hash only takes the template as an input – It doesn’t take the actual values. So it doesn’t help if I want to change the values without changing the template or releasing a new version of the chart.
with new chart version helm will recreate pods due to controller change
the template as an input
it takes rendered template with values as input
thus, any value change triggers pod restart
Hmmm, I must’ve done something wrong when I tried it then. I’ll have another go later
Thanks voron!
check your configmap/secret templates and be sure you’re using all CM/secrets in controller annotations
Good point. I think that’s probably the problem… I must’ve missed a CM/secret
helmfile diff
shows whether the expected diffs are there; a value change should touch at least two objects - the configmap/secret itself and the controller annotation (and thus a pod recreate) due to the hash change
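For reference, this is the trick from the Helm docs linked above - a sketch of a Deployment template annotation that hashes the rendered ConfigMap, assuming the chart has a templates/configmap.yaml:
# templates/deployment.yaml (chart template)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  template:
    metadata:
      annotations:
        # sha256 of the *rendered* ConfigMap: changes whenever its template OR its values change
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}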
2020-12-05
2020-12-08
Hello, was hoping to get some suggestions how to implement this use case with helmfile:
I need to deploy a bunch of different in-house applications that are all configured very similarly. Generally speaking the only changes between the app deployments are the docker image/tag, environment variables, and maybe some additional k8s resources. I’ve created a generic base helm chart which uses a values.yaml file to specify image/tag, and environment vars. For each app I can then create an app-specific chart (using the base chart as a subchart) with any additional k8s templates and a values file to configure the chart/base-chart’s templates. Using helmfile I can then add on additional values files for each release.
The challenge I’m facing is that the environment variables are set in the base chart via a values file and I would like to render them based on app-specific and release-specific values files. So roughly I am trying to evaluate chart-default-values.yaml, release-specific-values.yaml and then use the data from those inside a final values file which sets variables for the base chart’s templates.
This is roughly asking how to use .Values
in a values file, which as best I can tell from my reading on Github is not supported. I’ve taken a look through these issues to try to get ideas however I’m a bit stuck.
• https://github.com/roboll/helmfile/issues/756
• https://github.com/roboll/helmfile/issues/387 I did get something working for a single release by taking the approach below, however this doesn’t scale once I want to add additional releases to different namespaces in k8s or different environments:
# chart-default-values.yaml
logLevel: error
baseChart:
  environment: ""
# release-specific-values.yaml
logLevel: debug
# helmfile.yaml
environments:
  defaults:
    values:
      # Read the values into the environment so they are accessible in the inline values below.
      - ./chart-default-values.yaml
      - ./release-specific-values.yaml
releases:
  - name: "some-app"
    namespace: "my-namespace"
    chart: .
    values:
      - ./chart-default-values.yaml
      - ./release-specific-values.yaml
      # Set the environment vars inline, referencing values from the environment
      - baseChart:
          environment: |-
            export MYAPP_LOG_LEVEL={{ .Values.logLevel | quote }}
Last thing: I also considered setting the environment variables directly in the release-specific values file however that is not DRY and also causes issues if I want to set chart-wide defaults.
Any advice?
Let’s gather all the tips and tricks worth included in the documentation, by linking from various questions answered in this repo.
It would really help if values from different places were available as .Values (like in standard Helm), to be referred and used. Not alle values are Environment values, example: We have set up Open…
well, you may separate your release-specific values into different files and include these values via something like
values:
- values/{{`{{ .Release.Name }}`}}.yaml.gotmpl
Hey, thanks for your reply! My release-specific values are in a separate file. I need to be able to use them to template other values which apply to the base helm chart.
then your release-specific values are not release-specific
do you wanna “calculate/evaluate” some base values based on release-specific values ?
Yes that. My base chart expects a value called environment
which I’m trying to compute based on the app-chart’s default values and my release-specific values.
did you try gotmpl?
I tried splitting into three files:
• chart-default-values.yaml (contains default values for the environment vars)
• release-specific-values.yaml (contains the overrides for that release)
• environment-values.yaml.gotmpl (sets the environment variables for the base-chart's values)
If in my release I specify:
releases:
  - name: "myrelease"
    values:
      - ./chart-default-values.yaml
      - ./release-specific-values.yaml
      - ./environment-values.yaml.gotmpl
I find that in environment-values.yaml.gotmpl the Values and environment vars are empty: when I add {{ .Values }} and {{ .Environment }} to the file and run helmfile template, it prints map[] and {default map[] map[]} in the generated templates. I cannot access the values from chart-default-values or release-specific-values.
helmfile version v0.135.0 btw
what about
{{`{{ .Values }}`}}
?
That renders the string {{ .Values }}
into my configmap without evaluating it
I think using it unescaped in environment-values.yaml.gotmpl is correct, as it renders to its string representation map[]. The issue is that it is empty, and I would like it to be populated with the previous two values files' properties.
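A possible explanation, as far as I understand helmfile's rendering: .Values inside a release values .gotmpl is populated from environment (and state) values, not from the other files listed under the release's values:. A minimal sketch under that assumption, reusing the file names from above:
# helmfile.yaml
environments:
  default:
    values:
      - ./chart-default-values.yaml
      - ./release-specific-values.yaml

---
releases:
  - name: "myrelease"
    chart: .
    values:
      # .Values inside this .gotmpl now resolves to the environment values loaded above
      - ./environment-values.yaml.gotmpl

# environment-values.yaml.gotmpl
baseChart:
  environment: |-
    export MYAPP_LOG_LEVEL={{ .Values.logLevel | quote }}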
2020-12-09
Hey everyone, been setting up terraform and helmfile to manage our K8s cluster. How do you guys manage terraform created resources and pass it to helmfile? I ideally would not want to hardcode any endpoints that I need in my yaml files. I explored using the helm provider of Terraform but it made sense to me to manage my cluster using helmfile.
Currently, I have this setup where terraform publishes everything that I need for helmfile to consume to SSM (IAM roles for service accounts, RDS endpoints), and then I use the remote-secrets feature of helmfile to get the values. This is also how I manage secrets by just using SecureString.
Just curious if this is the right way to go? Any cons of this setup? Currently it looks like I’m setting myself up to shoving everything to SSM lol
We are using embedded vals
functionality to get outputs from tf state and pass them to helmfile.
Helm-like configuration values loader with support for various sources - variantdev/vals
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
Yes, I am using exactly that. No problems in setting that up. I just do for example role_arn: ref+awsssm://path/to/param
then I just made a terraform module for the k8s IAM roles that also takes care of putting everything to SSM.
My question is more of if it’s the right way to go about this? Since it would be essentially throwing everything to SSM, wherein it essentially acts like the glue between helmfile and terraform.
I guess you are doing the same thing then? Sorry idk, this approach just feels. . .hacky. Maybe just me haha
Yeah, it looks pretty the same indeed. We just don’t have this intermediate “publish to SSM” step, but pull directly from the tf state. I will be glad to hear any better alternatives as well, but we are pretty satisfied with our current solution:)
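For reference, a minimal sketch of that approach (the state path and output name are placeholders, mirroring the ref+tfstate pattern used elsewhere in this channel):
releases:
  - name: my-app                    # placeholder release
    chart: ./charts/my-app          # placeholder chart
    values:
      # vals resolves this against the Terraform state file at render time
      - externalIP: ref+tfstate://path/to/terraform.tfstate/output.external_ip.value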
helmfile provider might be an alternative, but to us the current solution seems a bit more reliable.
Ah I actually didn’t notice that vals
also supports reading directly from the terraform state. I wonder if that is the better approach, instead of publishing everything to SSM.
Do you also pull sensitive values/secrets directly from the tf state?
Last time I looked, it only supported local state, not remote/S3 backed, but that’s a while ago now. I’ve just gone with Terraform writing to SSM (or humans/other code if it’s something that Terraform can’t handle) and then Helmfile reads from SSM.
At least that way, regardless of how something was provisioned/configured, there’s a consistent contract with Helmfile
Last time I looked, it only supported local state, not remote/S3 backed
It still doesn't support everything. For example it doesn't support the GCS backend, so we have to pull the state and operate on it as if it were local. That's what looks hack-ish to me right now, speaking of hacks :) Actually it's an issue of the underlying tfstate-lookup
Do you also pull sensitive values/secrets directly from the tf state?
Not yet:) So far we are interested in endpoints and LB IP addresses.
Last time I looked, it only supported local state, not remote/S3 backed
Ah got it. I don’t think this is an option for me then.
Thanks for the discussion! I’d just stick with what I have :)
S3 should work ok
Yeah I’ve verified that it works with S3 backed tfstate before. GCS isnt supported yet
Anyways, I think the way @Christian is using SSM as the connector between tf and helmfile is nice.
That way you can make tf and helmfile more decoupled than by reading the tfstate directly
Thanks for the confirmation @mumoshu! This gives me a nice confidence boost to my current setup
hello all, if I have data in helmfile.d like
app:
  name:
  image:
  path:
can I somehow include all the options under app: in the values under config, rather than one by one?
I'm switching from Helm 2 to Helm 3, and a Helmfile hook that works with Helm 2 is failing with Helm 3 and I'm not sure why.
Basically I'm trying to run a kubectl
command to apply a label, and it no longer knows about the kube context when it runs and tries to connect to localhost:8080.
I created a new release with this set of dummy hooks, the last one to deliberately make it quit before installing the chart:
hooks:
  - events:
      - presync
    showlogs: true
    command: sh
    args:
      - -c
      - "env | grep KUBECONFIG | sort;echo [$KUBECONFIG]; test -n \"$KUBECONFIG\" && ls -l $KUBECONFIG; cat $KUBECONFIG"
  - events:
      - presync
    showlogs: true
    command: kubectl
    args:
      - config
      - current-context
  - events:
      - presync
    showlogs: true
    command: sh
    args:
      - -c
      - "exit 1"
When run using helm2, the output of the hooks is:
helmfile.yaml: basePath=.
hook[presync] logs | []
hook[presync] logs |
helmfile.yaml: basePath=.
hook[presync] logs | test2.he0.io
hook[presync] logs |
helmfile.yaml: basePath=.
hook[presync] logs |
in ./helmfile.yaml: failed processing release cert-manager: hook[sh]: command `sh` failed: command "/usr/bin/sh" exited with non-zero status:
PATH:
/usr/bin/sh
ARGS:
0: sh (2 bytes)
1: -c (2 bytes)
2: exit 1 (6 bytes)
ERROR:
exit status 1
EXIT STATUS
1
That’s fine, it shows my context, then deliberately exits with an error in the next hook as I wanted.
When I run it with helm3 however the output is:
helmfile.yaml: basePath=.
hook[presync] logs | KUBECONFIG=/tmp/tmp.018hsO8A7s
hook[presync] logs | [/tmp/tmp.018hsO8A7s]
hook[presync] logs | -rw------- 1 kube-hetest kube-hetest 0 Dec 10 13:18 /tmp/tmp.018hsO8A7s
hook[presync] logs |
helmfile.yaml: basePath=.
hook[presync] logs |
in ./helmfile.yaml: failed processing release cert-manager: hook[kubectl]: command `kubectl` failed: command "/usr/local/bin/kubectl" exited with non-zero status:
PATH:
/usr/local/bin/kubectl
ARGS:
0: kubectl (7 bytes)
1: config (6 bytes)
2: current-context (15 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
error: current-context is not set
COMBINED OUTPUT:
error: current-context is not set
For some reason it has set the KUBECONFIG
environment variable and pointed it at an empty temporary file.
Then the next command to show the context dies because there is no selected context.
Regular chart hooks have been working fine with Helm3, it’s only this helmfile hook that no longer works when I want to run kubectl
command because the kubecontext has been overridden.
Note that if I set --kubeconfig
to point to my config file and also set --context
in the call to kubectl
in the hook, then it works with helm3, but that isn’t a viable option.
I don’t know why it is setting the KUBECONFIG
environment variable within the hook when helm3 is used, nor how to stop it from doing so.
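For reference, the workaround shape described above - the kubeconfig path and context name are placeholders, and as noted it isn't a viable option here:
hooks:
  - events:
      - presync
    showlogs: true
    command: kubectl
    args:
      - --kubeconfig=/home/me/.kube/config   # placeholder path
      - --context=my-context                 # placeholder context name
      - config
      - current-context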
The versions of software I am using are
$ helmfile version
helmfile version v0.135.0
$ helm3 version
version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"clean", GoVersion:"go1.14.13"}
I still don’t have a solution to this. Does anyone have an idea of why I’m seeing this different behaviour with helmfile hooks depending on if I’m using Helm 2 vs Helm 3?
2020-12-10
Anyone know why the incubator chart is deprecated?
Or just the previous locations only?
incubator/raw 0.2.5 0.2.3 DEPRECATED A place for all the Kubernetes resou..
2020-12-11
@Paul Catinean yes, see https://github.com/helm/charts
(OBSOLETE) Curated applications for Kubernetes. Contribute to helm/charts development by creating an account on GitHub.
Oh damn that’s unfortunate, what are we supposed to use for the same purpose?
I think we need to wait for @mumoshu to host this chart in some separate helm repo. It's still possible to use the “deprecated” chart, as the chart itself is fine; the problem is with the incubator repo.
Yeah the chart is marked as deprecated due to the incubator repo being abandoned. But it still works as before
I’m using chartcenter’s mirror of the incubator repo these days https://chartcenter.io/incubator/raw
version 0.2.5 of Helm chart incubator/raw. DEPRECATED A place for all the Kubernetes resources which don’t already have a home. Discover Helm charts with ChartCenter!
But it’s still with “deprecated” label
Most, but not all, charts have migrated to their own repos/chart locations.
ohhhh
Yeah I am still using it was just worried they might drop it one day, thanks for the info everyone
A lot of charts are being abandoned unfortunately.
2020-12-12
2020-12-13
Any idea how to escape the {{ $labels.target }}
string?
It's Prometheus-specific:
values:
  - prometheusRule:
      enabled: true
      rules:
        - alert: UrlDown
          annotations:
            message: The status code of {{ $labels.target }} is 4xx or 5xx, or some other failure occurred such as a timeout (60 seconds) for the last 5 minutes.
          expr: probe_http_status_code >= 400 or probe_http_status_code == 0
I tried
{{`{{ $labels.target }}`}}
, but still getting undefined variable "$labels"
Any idea @mumoshu?
Thanks
I have the same problem as yours, here is my solution. Create a file called prometheusrule.yaml
# prometheusrule.yaml
prometheusRule:
  enabled: true
  rules:
    - alert: UrlDown
      annotations:
        message: The status code of {{ $labels.target }} ...
And just reference it
values:
  - prometheusrule.yaml
You can choose the better one for your use case
2020-12-14
Any idea how to render a multiline string in values?
environments:
  default:
    values:
      - foo: |
          hello world
          bye
to
foo: |
  hello world
  bye
Thanks
2020-12-15
I haven't touched helmfile in a while, and I was wondering if there are any good examples out there for using:
• environments in helm/helmfile
• kustomize in helmfile
We’ve updated all of our helmfiles to use environments: https://github.com/cloudposse/helmfiles/tree/master/releases/
Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles
I am also looking into what I can do, and how, with all the chartify features in helmfile. I am currently using hooks with shell scripts, but if there is a smoother way to do kustomize
in helmfile, that would be better.
2020-12-16
Hi, all. When using vals
for value resolution, is it possible to disable the actual resolution when calling
$ helmfile template
via helmfile.yaml or a CLI argument?
for instance, I don't have access to the production HashiCorp Vault, but I want to render templates locally for testing
We are wrapping this in if ne .Environment.Name "default"
. I must admit the overall readability suffers:)
- Is it possible to introduce a CLI flag or a parameter in helmfile.yaml?
- Unfortunately I didn't understand the idea of that if, could you provide an example? In fact I want to render templates locally for different environments, I just don't want to perform the resolution at all (speed of rendering + permission issues).
The issue is that I can't even override those values via --set, since helmfile still tries to perform the resolution, and that's logical.
The thing is that we use the “default” environment only to test things; for real environments we have corresponding environment names and settings in helmfile. With that being said, it's just a matter of the following simple if:
loadBalancerIP: {{- if ne .Environment.Name "default" }} ref+tfstate:///{{ env "PATH_TO_TF_STATE" }}/output.external_ip.value {{- end }}
Got it, thanks
Hi, all!
Is it possible to do something like that in helmfile:
templates:
  default: &default
    namespace: test
    chart: repo/{{ .Release.Name }}
    {{- range $service, $version := .Values.release }}
    {{- if eq $service .Release.Name }}
    version: "{{ $version }}"
    {{- end }}
    {{- end }}
where the .Values would be
release:
  my-service:
    version: 1.0.0
?
I’m getting a failed to read helmfile.yaml: reading document at index 1: yaml: line 11: could not find expected ‘:’
I’m not exactly sure if or how it’s possible to refer to values in the template, but if you want to do it a bit differently, you can set a version under the releases
in the helmfile.yaml like so:
- name: "test"
  version: {{ .Values | default (env "TEST_VERSION") | default "1.0.0" }}
  <<: *default
@Jonathan the problem is that I have around 100 releases )
So, I need to template it somehow
Have you tried specifying the version value like this?
{{ .Values | get "Release.Name" "" }}
just to try getting the value in a slightly different way?
The reason I'm not sure it works is that it might try to access the .Release.Name before the values.yaml have been properly rendered, so you might have to do some {{ ` {{ .Values.Release.Name }} `}} magic
@Jonathan Hmmm, good catch! Will try
This worked
templates:
  default: &default
    namespace: test
    chart: repo/{{ .Release.Name }}
    version: 0.0.0-{{ .Values.release | get .Release.Name "develop" }}
where values are
release:
  service: 1.0.0
How can I set values in values.yaml file?
e.g.
releases:
  - name: prometheus
    namespace: default
    chart: prometheus-community/kube-prometheus-stack
    values:
      - ./alertmanager.yaml
    disableValidation: true
So the alertmanager.yaml
would have a value in there, like {{ requiredEnv "PAGERDUTY_INTEGRATION_KEY" }}
ah, .gotmpl
extension
yep
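A minimal sketch of what that could look like - the exact chart value paths (alertmanager.config.receivers, routing_key) are assumptions about the kube-prometheus-stack values layout:
# alertmanager.yaml.gotmpl
alertmanager:
  config:
    receivers:
      - name: pagerduty
        pagerduty_configs:
          - routing_key: {{ requiredEnv "PAGERDUTY_INTEGRATION_KEY" | quote }}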
Despite specifying:
disableValidation: true
I am getting:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Prometheus.spec): unknown field "probeNamespaceSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "probeSelector" in com.coreos.monitoring.v1.Prometheus.spec]
What can I do?
If the disableValidation: true
is specified as above, I don’t think that it’s specified in a way that shows that the field is supposed to be part of the .Values. You’d have to do something like this:
releases:
  - name: prometheus
    namespace: default
    chart: prometheus-community/kube-prometheus-stack
    values:
      - ./alertmanager.yaml
      - disableValidation: true
disableValidation
is a key in helmfile
, not the prometheus-community helm chart.
The solution was to delete the existing CRDs. An earlier version installed CRDs that have since then been updated. Once CRDs were purged, helm chart would now work, as it installs new CRDs.
2020-12-17
So, one more interesting question from me.
Suppose I have a release that installs CRDs, and I have a release that depends on those CRDs. How can I make the dependency between them work without running helmfile 2 times?
you can define the releases in the helmfile in the order you want them installed, and then use helmfile --concurrency=1
to run them sequentially
It may be better to use deps in helmfile
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
Yeah, so, the sync command would work, but the diff will not work if the CRD is not installed
is it because helm diff cannot complete without the required CRDs being installed?
@voron Yes
Hey All, wondering if anyone has seen any issues around helmfile and its storage of helm meta-data? We have been testing our helmfile from our local workstations, but our ci/cd pipeline executes helmfile from within a k8s cluster, and I seem to be finding that when doing the latter the helm release meta-data is not going to the release namespace but instead to the default
namespace, because that is where the job is executing
In Helm 3, the release metadata is stored in the same namespace as the release itself. See https://helm.sh/docs/faq/#release-names-are-now-scoped-to-the-namespace
Helm - The Kubernetes Package Manager.
@Christian yea I mean that appears to be demonstrably false
Hey everyone, also have a query about having dependencies between charts.
I know you can have a needs
keyword as part of the helmfile release to maintain dependencies. However, I can’t seem to figure out the syntax as to how to reference a release when they are in different folders.
- cert-manager/
    helmfile.yaml
- cert-issuer/
    helmfile.yaml
In cert-issuer, having a needs: ['cert-manager/cert-manager']
errors out. Also tried various combinations, but none seem to work.
Were you able to get needs
to work inside a single helmfile.yaml? needs
may need the k8s context specified too, just before the namespace.
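For reference, a sketch of the needs forms within a single helmfile (the local chart path and the kube-context name are placeholders):
releases:
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
  - name: cert-issuer
    namespace: cert-manager
    chart: ./charts/cert-issuer                  # placeholder local chart
    needs:
      - cert-manager/cert-manager                # namespace/release
      # - my-context/cert-manager/cert-manager   # kube-context/namespace/release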
Yup, got it working in a single helmfile. I'll try specifying the k8s context, thanks!
2020-12-18
2020-12-19
2020-12-21
Hi everyone,
Before creating my own argocd+helmfile docker image, I would like to know if someone has already done it and published it publicly?
I found the chatwork
image, but it's not really up to date with the helmfile version.
And does it work like a charm (helmfile with argocd)?
Thx a lot
hello all,
I am writing an HPA file for my cluster and would like to template it. If I have the following metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 70
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization
      averageUtilization: 70
can I template it into the hpa.yaml using range or something else? I would like to get all the metrics from values, so that if I want to use the pod or object metrics I only need to update the values
well, use hpa.yaml.gotmpl
and templating features
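A minimal sketch of that - it assumes the metrics list lives under a helmfile value named hpaMetrics and that hpa.yaml.gotmpl is rendered by helmfile as a values file for the chart:
# hpa.yaml.gotmpl
metrics:
{{ toYaml .Values.hpaMetrics | indent 2 }}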
2020-12-22
2020-12-24
I cannot get environments to work as documented. From the docs (README.md), I tried:
environments:
  ## `default' environment uses Alpha with TLS
  ## set tls_client_auth with env var TLS_CLIENT_AUTH
  default:
    values:
      - ./examples/alpha_tls.yaml.gotmpl
      - ./examples/{{ env "TLS_DIR" | default "dgraph_tls" }}//secrets.yaml
  ## 'zero_tls' environment uses Zero with TLS
  zero_tls:
    values:
      - ./examples/zero_tls.yaml.gotmpl
      - ./examples/{{ env "TLS_DIR" | default "dgraph_tls" }}//secrets.yaml
releases:
  - name: {{ env "RELEASE" | default "my-release" }}
    namespace: {{ env "NAMESPACE" | default "default" }}
    chart: {{ env "PWD" }}/../../charts/dgraph
But the helmfile apply
doesn’t pick up default environment.
I guess these are values you can inject and pick up via .Values.key
, but they're not actual helm values, rather helmfile values
Did you specify environment via helmfile -e
?
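For reference, a minimal sketch of passing a helmfile environment value through to the chart explicitly (the chart value path tls.client_auth is an assumption; the --- separator ensures the environment is loaded before the releases section is rendered):
environments:
  default:
    values:
      - ./examples/alpha_tls.yaml.gotmpl

---
releases:
  - name: {{ env "RELEASE" | default "my-release" }}
    namespace: {{ env "NAMESPACE" | default "default" }}
    chart: {{ env "PWD" }}/../../charts/dgraph
    values:
      - tls:
          client_auth: {{ .Values.tls_client_auth | quote }}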
2020-12-26
2020-12-27
2020-12-28
Hi everyone,
i'm trying to see if we can use a url to get values
for a release:
(like with helmfiles:
keyword)
releases:
  - name: app
    chart: ../chart/test
    values:
      - git::https://github.com/<something>.yaml # << like this ?
helmfiles:
  - path: git::https://github.com/<something>/helmfile.yaml # similar to this
I think it's possible if you use it under:
environments:
But it would indeed be cool to have it under releases or at least under helmfiles
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
helmfiles is already possible, but values would make things easier for my use case with a different repo for each app; I wanted to just have params for the app, not helm config, in those repos
each release could get its own file, which can differ even within the same environment
also I could do something like
git::https://github.com/me/app/config-{{ .Environment.Name }}.yaml?ref=master
and in another release have
git::https://github.com/me/app2/config-{{ .Environment.Name }}.yaml?ref=master
a similar thing is possible with secrets, and you still have exec
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
in the end what I did was to use multiple helmfiles which each have their own environment and url.
I managed to make it work with git without problems, but I believe that http is rather troublesome as you need specific content/headers to be pulled