#helmfile (2020-01)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-01-01
So, how are we all handling the helm 3 upgrades without automatic namespace creation anyway?
updating all helmfiles to include a presync like this?
- events: ["presync"]
  showlogs: true
  command: "/bin/sh"
  args:
  - "-c"
  - >-
    kubectl get namespace "{{`{{ .Release.Namespace }}`}}" >/dev/null 2>&1 ||
    kubectl create namespace "{{`{{ .Release.Namespace }}`}}";
Helm 3 doesn't automatically create namespaces - see https://v3.helm.sh/docs/faq/#automatically-creating-namespaces. How can we solve this with helmfile, so that we don't have to manually create them?
Best option I think is to use raw chart
I’ve had trouble with the raw chart, multiple applies fail because the namespace already exists
Hrmm… but it works for other resource types?
I think if this were the case, then it would fail just as well for Deployments
as it would for Namespace
Yes, we faced this issue and we had to delete the whole deployment
and also all k8s objects for the chart. atomic
release option didn’t help much :(
I decided to just whip up a point solution for the hell of it.
Slack robocop told me not to swear. It took me a good long moment to realize I had done so… sorry I guess
lol, yes, it’s a little bit strict
The helm chart I put together is so simple it's not even worth publishing, but hey, it does allow one to at least change the helm resource policy from 'keep' to whatever else it needs to be to allow for redeployments (if you need to do that)
I was going to do v2 for helm3 but it should work for both helm 2 and 3 I think so I left it at v1
@Erik Osterman (Cloud Posse), you ever not working/geeking out?
haha, not enough…
Btw, try this? https://github.com/thomastaylor312/helm-namespace
Namespace auto-creation for Helm 3.
Overview Helm2 provided support for the Release namespace {{ .Release.Namespace }} via –namespace option if the release namespace did not exist. This functionality was considered rudimentary, and …
It works fine if you are all helm 3 and willing to change your base helm commands. Honestly, it's probably a better solution in general
A generic helm namespace chart (zloeber/helm-namespace).
read somewhere that 3.1 will add namespace creation back anyway, so its likely a moot point
Yea, not worth investing in
I’m looking at a pretty large stack of helm 2 charts all deployed with tillerless and helmfile that have gobs and gobs of secrets polluting the tiller namespace that makes me itchy to move to helm 3
too many clusters for a single devops guy to look to migrate ATM so I’m using both helm3 and 2 in the same clusters like a fool
anyone using helmfile in a gitops style deployment?
with flux or argocd or something?
2020-01-02
Would like to automate our helmfile-centric workflow a bit more. Developing an Operator to handle the watching as well as the bits that helm/helmfile aren't able to perform. Would still like to leverage our helmfile effort, at least initially, to quickly prototype. Anyone else gone on a similar adventure? Recommendations on Operator sdk/framework (I'm currently looking at Metacontroller)?
In addition to helmfile-operator, I’ve built a POC of a GitOps + operator for helmfile deployments for that. It’s based on Brigade, Helmfile, and Flux and available at https://github.com/mumoshu/brigade-helmfile-demo
Demo for building an enhanced GitOps pipeline with Flux, Brigade and Helmfile - mumoshu/brigade-helmfile-demo
Looks like operator-sdk new --type=helm ...
is designed to address this use case.
@erik-stephens Have you seen the Helmfile operator by @mumoshu ?
Kubernetes operator that continuously syncs any set of Chart/Kustomize/Manifest fetched from S3/Git/GCS to your cluster - mumoshu/helmfile-operator
I have not, but it’s on the short list of things to evaluate. Thanks!
I’ve looked at it but was unable to get it to work at the time
@Erik Osterman (Cloud Posse) hello, do you have any example of using helmfile to patch a chart's service spec without touching the chart?
You can’t “monkeypatch” with helmfile
as it just wraps helm
So if helm provided someway to do that, then helmfile could.
@deftunix - describe instead what you want to accomplish, and perhaps we can think of a way to do it.
2020-01-03
I just want to add a rule to the ingress controller without changing the chart to support it
And by changing the rule, you hope to accomplish what? …what is the business objective
Ssl redirect with alb ingress
He needs a rule and an annotation
It needs sorry
Does the chart support disabling the ingress?
But the chart doesn’t support it
Yes
The chart supports disabling the ingress
Perfect. Then you can use Helmfile
I will disable and just add a kustomize or manifest?
Disable the ingress. Then define a new one using the raw chart and Helmfile
We have used this pattern in the past
Do you have same repo?
You mean example?
Yes
I couldn’t point you directly (on my phone), but you have seen our massive repo?
Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles
Go to the releases folder
Thanks
Maybe search for the ingress keyword or raw
Ok
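For reference, a rough sketch of that pattern (the chart, host, and annotation below are placeholders, not taken from the cloudposse repo):
releases:
- name: myapp
  chart: stable/myapp
  values:
  - ingress:
      enabled: false            # turn off the chart's built-in ingress
- name: myapp-ingress
  chart: incubator/raw
  values:
  - resources:
    - apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: myapp
        annotations:
          kubernetes.io/ingress.class: alb
          # hypothetical redirect annotation; the exact key depends on your ALB ingress controller version
          alb.ingress.kubernetes.io/ssl-redirect: "443"
      spec:
        rules:
        - host: myapp.example.com
          http:
            paths:
            - backend:
                serviceName: myapp
                servicePort: 80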
2020-01-05
Is there any tool like kube-applier
to manage deployments for helm
?
for helm? or Helm deployments?
There is: https://keel.sh/
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
2020-01-06
2 questions:
• What are the intended use cases for helmfile apply vs helmfile sync? If I understand correctly, sync would also remove releases when installed: false. OTOH, apply has the diff output, which is nice feedback.
• Shouldn’t helmfile have a cool logo? (Running into that every time I need to create presentations for team or business)
Anyone out there setup the operator lifecycle manager as a helmfile?
heck, anyone deploy/use it in any way?
hi all, quick question about helmfile. I would like to apply a gotmpl kubernetes manifest during the helmfile release apply phase. do you have any idea?
I need to render the template and then apply it
maybe use the raw chart with a values.yaml.gotmpl file?
or delve into kustomize (something I’ve yet to do honestly)
@Zachary Loeber I just want to patch the ingress controller of a community helm chart without changing it
from the helmfile run
I guess that’s what you use helmfile for in general if I’m reading your statement correctly.
I don’t know about others, but I tend to completely disable ingress on all public charts then float ingress up to a custom chart of my own so I can more quickly make lateral ingress moves if so required.
That way you aren’t trying to patch 20 different ingress chart implementations which may or may not be charted out the same.
You could do such a thing without a custom chart as well I suppose. again, maybe use the incubator/raw chart with a values.yaml.gotmpl file
other (smarter) people on this channel may know better ways though
I create a custom chart
for the ingress
I’m trying to use ref+vault integration for secrets and I’m getting vault: get string: key “foo” does not exist in secrets foo yet it does
Has anyone used the vault integration successfully? I assume people have
I figured it out :)
2020-01-07
This might not be the right place for this question, if not, sorry! I’m trying to deploy grafana, and import dashboards from a git repo. The issue is that the repository I want to import from is private, so even though the dashboards.default.local-dashboard.url
is correct, I can’t reach it for obvious reasons, but I cannot find anywhere in the documentation how to pass secrets/username+pass etc. to authorize myself so I can read it. Does anyone have any pointers?
put username:password@ in the URL; the user and password you can usually pass from env vars or even vault.
{{ requiredEnv "PASSWORD" }}
I’ll try that out, thanks a bunch!
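A hedged sketch of how that can look in the grafana chart values (the env var names and repo path are made up):
dashboards:
  default:
    local-dashboard:
      url: https://{{ requiredEnv "GIT_USER" }}:{{ requiredEnv "GIT_TOKEN" }}@raw.githubusercontent.com/myorg/dashboards/master/grafana.json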
2020-01-09
Morning helmfile folks.
I have been struggling the past couple of days on a problem related to helmfile and maybe someone in here can help. Basically I am trying to get weave flux to use helmfile instead of going straight to helm. Here is what I have done so far. I have setup flux to use manifest generation which allows me to run helmfile. Using that I can actually get helmfile to run and build things, but that isn't a very good use of flux. Basically I am just using flux to clone git. What would be better is if I could get helmfile to write to stdout like kustomize build does. I am told by the folks that develop flux that it should work. It would allow us to benefit from the templating and secrets of helmfile while getting the gitops benefits of flux. Plus we could possibly use helmfile on our local systems if we wanted to.
So the question is how do I get helmfile to output to stdout like kustomize? I am currently running the following thinking it might do the trick, but it doesn't seem to do anything in flux:
helmfile -e dev -q -f ./helmfile.d/helmfile.yaml build
Any help or suggestions would be great!
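One hedged aside: helmfile build prints the merged helmfile state itself, not rendered manifests. If the goal is kustomize-build-style Kubernetes YAML on stdout, helmfile template is the closer fit, e.g.:
helmfile -e dev -f ./helmfile.d/helmfile.yaml template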
@Matt McLane have you seen the helmfile operator?
Kubernetes operator that continuously syncs any set of Chart/Kustomize/Manifest fetched from S3/Git/GCS to your cluster - mumoshu/helmfile-operator
I think that might make it a simpler integration with weave flux because you can just use CRDs
I have seen it but it didn’t look all that functional and I didn’t know how to set it up. I was also concerned that there is a standing issue titled How to run helmfile-operator?
Hey, I had a look at this project and tried to set it up on our cluster. But I struggle with that. All the single pieces of this operator are described but there is on example or docs about how to …
haha “howto” docs are a nice-to-have on open source projects
but yea, it’s more in the incubator stage
@mumoshu is around, he can probably answer questions if they come up.
Have you used that operator yet? I’ve not been able to get it to compile
That is what I was worried about. it didn’t look complete to me.
But I am willing to be wrong.
@mumoshu
hi, is there a predefined helmfile for a redis native cluster (not using sentinel)? Or has anyone worked on creating one? Please let me know
2020-01-10
I could create one in about 3 minutes based on the default redis chart. Why don’t you give it a whirl first as you will almost always need to customize whatever anyone else precreated anyway.
So what is the “industry standard” for pipelines to run helmfile? We are trying to move toward a gitops approach, which is why I have been looking at flux so much. We also like some of the functionality helmfile brings us. We could build something from scratch but I much rather be in line with what others are doing.
@Matt McLane Good question, I've been trying to figure out the same. I'm looking into ArgoCD for this because it seems to have easier plugin capabilities to support helmfile. But it also seems that argo 'apps' are synonymous with helmfiles (generically) https://github.com/argoproj/argo-cd/issues/2143.
Is your feature request related to a problem? Please describe. Similar to helm, helmfile support would be great. Describe the solution you’d like Support for helmfile.
Have you found any documentation on how to plug helmfile into it? I am kinda figuring out that Flux isn’t going to work.
I am wondering if I can use a postsync hook within Argo CD.
Nothing done with it yet, I’m still on the research/interest stage, sorry
It’s all good
@Matt McLane we are using atlantis
; i would not argue it’s an industry standard, but it works well enough.
atlantis lets one define custom workflows with a plan and apply phase which we map to diff
and apply
in helmfile
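Roughly, that mapping can look like this in a server-side Atlantis repos.yaml (the repo filter and the hard-coded environment are placeholders, not Cloud Posse's actual config):
workflows:
  helmfile:
    plan:
      steps:
      - run: helmfile -e dev diff
    apply:
      steps:
      - run: helmfile -e dev apply
repos:
- id: /.*/
  workflow: helmfile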
Interesting. We run atlantis too for our terraform modules. In those cases we have created custom workflows to run terragrunt instead of terraform.
I can see where using it for helmfile will work.
Yup, very similar….
How do you handle different environments?
helmfile -e dev apply vs helmfile -e qa apply?
How do you promote things?
We have one repo per AWS account.
We use remote helmfiles pinned to a github release
so to promote, we open a PR for that account environment and pin it to a new release
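A sketch of that pinning in an account repo's helmfile (the path and ref are placeholders):
helmfiles:
- path: git::https://github.com/cloudposse/helmfiles.git@releases/nginx-ingress.yaml?ref=0.100.0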
Gotcha
2020-01-13
hello, i’ve been trying to do something that probably shouldn’t be difficult but i’m struggling.
i want to do an {{ if }}
block in my helmfile which checks for the existence of a file. ultimately, i only want a particular release to be deployed if a specific file exists locally. is there a standard way of doing that? (i have seen nothing of the sort in the docs/examples and have fought with my own approaches for a while now)
Hrmmmm that should be possible if Sprig supports a function for that
Useful template functions for Go templates.
I don’t see a function for that
@James Huffman what underlying business logic are you trying to implement? maybe there’s an alternative way that doesn’t depend on the existence of files.
I'm generating a list of additional values files for a release driven by some other dynamic configuration. I'd like to be able to detect existence of those files prior to declaring them in …
the issue above has a workaround you can use for now
{{ if eq (exec "./fileexists.sh" (list $valueFile)) "true" }}
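The fileexists.sh helper isn't shown in the issue, but a minimal guess at it would be a script that prints exactly "true" or "false" for the comparison above:
#!/bin/sh
# print "true" if the path passed as $1 exists, "false" otherwise (no trailing newline)
if [ -e "$1" ]; then printf "true"; else printf "false"; fi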
2020-01-14
OK, that’s what i was wondering, if i needed to write a shell script to do it instead. thank you!!
Hi folks, I am just wondering if anyone has tried to add roles/clusterroles and their bindings via helmfile. I am looking to apply some security policies for multiple namespaces via helm. Is this possible?
try rbac-manager along with some incubator/raw charts
that’s what I’ve been using and it works well enough
@Zachary Loeber looks good, I am wondering if in incubator/raw, under resources and templates, I can specify both role/clusterrole and bindings
my goal is to be able to specify/create only get, list,watch policies in namespaces
- name: inv-ingest-rbac
  chart: incubator/raw
  namespace: inv-ingest
  {{- if eq (env "HELM_VERSION" | default "2") "3" }}
  needs:
  - kube-system/namespace-inv-ingest
  {{- end }}
  values:
  - resources:
    - kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: inv-ingest-cluster-role
        labels:
          app.kubernetes.io/name: inv-ingest
      rules:
      - apiGroups: [""]
        resources: ["pods", "services", "configmaps"]
        verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "watch", "list"]
    - kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: inv-ingest-role-binding
        labels:
          app.kubernetes.io/name: inv-ingest
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: inv-ingest-cluster-role
      subjects:
      - name: default
        namespace: inv-ingest
        kind: ServiceAccount
my bad (my personal social skills are just as awkward as my online ones unfortunately….)
this is great , thanks @Zachary Loeber I am on the right path then. many thanks for telling me about this raw chart
gladly!
The raw chart is great
We are doing kinda the same.
That’s a fairly full example of a helmfile chart for a spark application that uses the default service account and requires more rights than I’m comfortable with
you can pare it back to just what you’d need I’d think.
@Zachary Loeber you’re doing some interesting things over there. would love a demo sometime.
It all feels like hacks layered on hacks to me
Haha that’s the reality though… it’s why I hate these demo videos that deploy hello word apps and proclaim that victory! “Deployments made easy”
The reality is that it’s difficult. Especially when you don’t control the tool chain from top to bottom. Integration is all about hacking bits and pieces together.
I'm digging in to helmfile for the first time and trying to do something I think should be straightforward. I want to set a value that can be used "globally" in chart value files (via gotmpl). I'm not sure where to put this. If I put it in my main helmfile.yaml I get an error `line 4: field foo not found in type state.HelmState`; if I put it in a file that is listed under bases, I get an error `line 1: field foo not found in type state.HelmState`. So where does it go?
basically I want to use {{ .Value.foo }} in several chart values.yaml.gotmpl files
This is what is in my defaults.yaml base file foo: '{{ coalesce (env "FOO") (env "LOCAL_FOO") }}'
2020-01-15
Interested in hearing the community's thoughts on the above puzzles ^^
@DanB wouldn’t you use an environment variable for that?
that is what i am trying to do
but set it once to reduce duplication
so what you want is for an environment variable to be assigned to a key in your helmfile but propagate up into the base chart?
Yes but multiple charts, not just one chart
so our solution to this, which may or may not work for you, is to make a values file section called global
in each of our charts.
within global
are all of the common values of this nature we’d like to set, initially with placeholders (since they will get overwritten).
then we have a macro file which handles setting all of the fields to their real values at run time, using env
calls to collect them.
in each helmfile, we include this macro file as another values file:
values:
- macros/deploy.yaml.gotmpl
having to put the placeholders into each chart is the most annoying bit, but helm template helpers might let you avoid that
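Purely as an illustration (the keys are made up), such a macro file might look like:
# macros/deploy.yaml.gotmpl
global:
  clusterName: {{ env "CLUSTER_NAME" | default "k8s-dev-01" }}
  region: {{ env "AWS_REGION" | default "us-east-1" }}
  imageTag: {{ env "IMAGE_TAG" | default "latest" }}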
Ah hmm. So in our case the value is the same in each chart but each chart may use a different key in their values. I guess we'd need the global section to duplicate the value for each possible key?
to be fair, you only have to put in each chart the specific keys you care about for that chart. any keys in the macro file which don’t exist in the underlying chart just won’t do anything.
To put what I want in another way: I want to keep all usage of env vars out of individual chart values. I want to centralize all use of env vars to one place (environment? Base? Whatever works). Pain point I am trying to avoid is if an env var name changes or we introduce a new override env var we only have to change it in one place instead of n where n = # of charts. In our case n is 50ish
you could make a single .yaml.gotmpl file containing a single parent key, under which are all of the keys you want and how to obtain their values (e.g. through env
calls), then add that file to the values:
array for each helmfile you’re using. to make it fully work you would then update your charts to pull in that whole section. this would be a one-time deal so any time you updated this master file, all charts would see it when they render.
imagine you called the section in your master values file global:
and put all your keys below it. you can grab them in each of your charts with $root.Values.global
i believe.
hmm, some of these charts I do not control
To me this seems like it’d be a common use case, I wonder if I am missing something or over complicating things
it’s mostly a consequence of how helm works. a particular value is only meaningful if it lands in the correct place within a chart. so with different charts from different sources, some of which you cannot easily modify, there’s no one-size-fits-all solution. everybody writes charts their own way
ignoring env vars, is there a way for me to set a static global value I can reuse in individual chart .yaml.gotmpl via mykey: {{ .Value.someStaticGlobal }}
to give a concrete example my charts want to know the “name” of the cluster in the values file. When I run helmfile I set an envvar CLUSTER_NAME=k8s-dev-01
now I want to set some variable once in one location and then use that value in various chart values, trick is chart A may expect cluster name in a variable named clusterName
chart B might define it as cluster
chart C might define it as cluster-name
etc
i can sprinkle `env "CLUSTER_NAME"` through my chart .yaml.gotmpls, but I'd really like to avoid that
only way to handle that is chart by chart, unfortunately. you can’t do a generic solution since the charts themselves vary so much. we’ve run into the same thing
Oh I think I figured it out: this was the key: https://github.com/roboll/helmfile/issues/640
This is a copy-paste of #361 (comment) for visibility. We're going to introduce State Values, that should be the foundation for various useful features. (Note that this isn't a breaking cha…
In my helmfile I specify values: that set values based on envvars. I simply use {{ .Values.key }}
in my chart value gotmpl file. I can override these as well in my environments.yaml.gotmpl which is set as bases:
in my helmfile
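A rough sketch of that layout (file names and the clusterName key are illustrative only; details vary by helmfile version):
# helmfile.yaml
bases:
- environments.yaml.gotmpl

# environments.yaml.gotmpl
environments:
  default:
    values:
    - clusterName: {{ env "CLUSTER_NAME" | default "k8s-dev-01" }}

# myapp/values.yaml.gotmpl, listed under a release's values:
clusterName: {{ .Values.clusterName }}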
2020-01-16
Code examples of how Adobe Experience Platform uses helmfile in Kubernetes to streamline large-scale application management.
Hi All, apologies in advance if this is not the right place to ask this, but yesterday I faced an issue with helmfile diff --suppress-secrets. I was simply applying some RBAC policies but tiller is now giving lots of errors with this.
I can confirm that tiller has cluster-admin
privileges in kube-system
namespace so I am not sure why this diff is failing.
2020-01-17
so what rbac policies did you apply?
another vote for tillerless
2020-01-19
can anyone give me pointers on how to use this: https://github.com/roboll/helmfile/pull/906 but for aws secret manager? I have this in my helmfile but it just uses the literal as is, debugging doesn’t indicate it tried to resolve the secret
trying to piece together info from the README and this repo: https://github.com/variantdev/vals
Helm-like configuration values loader with support for various sources - variantdev/vals
helmfile version v0.98.2
doc is wrong, should be ref+awssecrets://...
, and depending on your secret name format it may not work at all: https://github.com/variantdev/vals/issues/18 @mumoshu
This works: $ ~/.local/bin/aws secretsmanager get-secret-value --secret-id DanTest/ { "Name": "DanTest/", "VersionId": "4853e4d6-d7e8-4a30-9099-89cb8c522099"…
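For reference, with the corrected prefix a chart values entry would look roughly like this (the secret path is made up):
dbPassword: ref+awssecrets://myteam/myapp/db_password?region=us-east-1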
2020-01-20
@mumoshu It would be interesting for helmfile diff
to have a “hardcore” mode that compares against the k8s state instead of the helm state. Embarrassed to say, i hit cases where there’s manual changes to k8s resources that aren’t reflected in helm state. (If i’m misunderstanding helmfile diff
-> helm diff
, my bad ) Anyways, been using something along the lines of:
# Render all the k8s yaml
helmfile -f hello.yml template > ~/Desktop/hello.helmfile.yml
# Diff the new yaml with what's actually deployed
tail -n +2 ~/Desktop/hello.helmfile.yml | kubectl diff -f - > ~/Desktop/hello.helmfile.diff
# If diff is acceptable, run helmfile
helmfile -f hello.yml apply
@rms1000watt I also share your concerns about helm diff not comparing to the actual state of the cluster. I suggested a workaround similar to yours in this issue: https://github.com/databus23/helm-diff/issues/176#issuecomment-576291610
Hi, At the moment, if you make any manual changes to resources (not via helm) helm diff will not reflect these changes. I suggest that the output should reflect the desired vs actual state of the r…
@rms1000watt btw, would your workaround actually work in a scenario where kubectl diff finds a diff but helmfile apply doesn’t find a diff? Wouldn’t it just exit in that case?
@timduhenchanter for visibility. And kudos to your always helmfile template | kubectl apply -f-
methodology
@stobiewankenobi for visibility too. Rofl. Afterthought.
@rms1000watt out of curiosity, are you using helm3? was wondering if it would do a better job.
What are the key differences between Helm 2 and Helm 3? Visit the FAQs for insights.
Solid
I need to upgrade
I’m not sure if this impacts the helm-diff
plugin or not.
but this is a great lead
let me know what you find out!
2020-01-21
helm 3
is very buggy, we face a lot of issues, eg. https://github.com/helm/helm/issues/7426
Hi, I am trying to install a release using the --atomic flag but it seems that it hangs forever: helm3 install bar stable/mariadb -n default --atomic Error: release bar failed, and has been unins…
helm-diff w/ helm 3 is unaffected, as it still shows the diff between the release stored in the cluster(!= the current state of k8s resources originally created for the release) and the manifests rendered by helm template
.
but yeah the diff and the install/upgrade result can be much more reliable than in helm 2, as helm3 tries its best not to accidentally “revert” manual changes
@Mahesh does it still hang when you fix your deployment? (at a glance it can happen when the k8s resources created by the chart are stuck in error or not-ready states, which isn't an issue in helm itself)
you can just rerun it without --atomic
and see if it reveals the underlying issue = cause of the hang.
helm
should do better error reporting on user errors if it's actually a user error, though.
we just do helm delete
and delete k8s objects created by the helm package (it's very picky even for secrets)
to fix the hang?
yeah, to redeploy
Oh crap… good point @Dudi Cohen you’re right. Yeah.. I think I would have to reconcile with kubectl apply
with the helmfile template
output
@rms1000watt then you won’t have a release in helm
Random helm/helmfile tip: For what its worth, if you are upgrading to helm 3 ensure you sync more than once for each new chart you deploy for the first time. The three way merge in helm 3 means certain helm constructs will be problematic (such as the autogenerated ClusterIP: “” of a service for instance). Ran into this issue a few times now without realizing it until after the fact.
ClusterIP was a PITA. Had to set force
to false
to deal with it. So far so good.
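For context, one place to set that is helmfile's helmDefaults (force is also available per release); a minimal sketch:
helmDefaults:
  force: false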
Disabling force
still doesn’t solve some of the edge cases with Helm 3’s three-way merge unfortunately: https://github.com/helm/helm/issues/6378#issuecomment-556212320 They closed the issue but you can see many people are still reporting the issue.
We’re still blocked from upgrading util the Helm team 1) seriously acknowledges the issue and 2) resolves it
I use the following to install / upgrade a chart: ./helm upgrade --install --set rbac.create=false --set controller.replicaCount=2 --set controller.service.loadBalancerIP=$ip --wait main-ingress st…
2020-01-22
Has anyone faced an issue where a kubernetes job created with helm for db migrations always succeeds, although when we deploy the job manually it shows the actual error?
$ node_modules/node-pg-migrate/bin/pg-migrate -m ./migrations-app -v up
No migrations to run!
Migrations complete!
Done in 0.62s
If I manually deploy the job it shows the db migration error, which is the actual output. What could cause the job created by helm to pass in every case?
Sounds like maybe the exit code from node_modules/node-pg-migrate/bin/pg-migrate
is not getting returned
can you share how you call it in your docker image? for example, if you’re running it in a bash script, you’ll want to have set -e
to ensure you exit non-zero on all errors
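i.e. something along these lines in the image's migration entrypoint (the script and yarn command names are hypothetical):
#!/usr/bin/env bash
# exit non-zero if the migration command fails, so the Job pod fails too
set -euo pipefail
yarn run migrate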
@Erik Osterman (Cloud Posse) it is called by yarn run <command> in package.json
ok, but what then calls yarn run
also, can you share the snippet from package.json
where it’s called?
1 min
ok, so the good news is your package.json
looks good. That should pass through the exit codes.
So how do you call this? We need to ensure that everywhere exit codes are preserved.
hmmm makes sense I am rebuilding images, seems I have found something but let me test it
2020-01-23
2020-01-24
It would be helpful if you would all add your commentary to that issue if you experienced it. Hopefully they will reopen or at least point to a new issue with resolution at some point.
@mumoshu Any ideas how I might accomplish value key removals in Helmfile during the values merge? I’m looking for behavior similar to this: https://github.com/helm/helm/issues/1966 Using null
however does not seem to make it past the Helmfile value merge operation (that is, merging a discrete/defined value with null
does not appear to be subtractive; the original value key remains)
Specifically, we have some global value keys like resources, probes, etc. that we want to remove for the default environment only (we use default for local development). Does that make sense?
Since the introduction of deep merging (#1620), it's now not possible to remove keys from values.yml entirely. For example, the telegraf values has a default entry for single.config.inputs.infl…
It’s not impossible but I’d say you shouldn’t do it.
Probably you can achieve it with readFile | fromYaml
in combination with merge
and unset
template functions
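A hedged sketch of that idea in a chart values gotmpl (the file name and the "resources" key are placeholders):
{{- /* load shared defaults, then drop keys we don't want in the local/default environment */}}
{{- $defaults := readFile "values/defaults.yaml" | fromYaml }}
{{- if eq .Environment.Name "default" }}
{{- $_ := unset $defaults "resources" }}
{{- end }}
{{ $defaults | toYaml }}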
More feasible way would be not using something that must be removed afterwards as defaults
Thanks @mumoshu
2020-01-28
I am trying to install helmfile in a custom Docker file. This is what it looks like:
RUN apk add --update --no-cache curl ca-certificates bash && \
curl -L ${BASE_URL}/${TAR_FILE} |tar xvz && \
mv linux-amd64/helm /usr/bin/helm && \
chmod +x /usr/bin/helm && \
curl https://github.com/roboll/helmfile/releases/download/v0.98.2/helmfile_linux_amd64 -O && \
mv helmfile_linux_amd64 /usr/bin/helmfile && \
chmod +x /usr/bin/helmfile && \
helmfile --version
However the build fails with the following out put
/usr/bin/helmfile: helmfile: line 1: syntax error near unexpected token `<'
/usr/bin/helmfile: helmfile: line 1: `<html><body>You are being <a href="<https://github-production-release-asset-2e65be.s3.amazonaws.com/74499101/19b32580-317f-11ea-9dc4-79b9457abdad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200128T124544Z&X-Amz-Expires=300&X-Amz-Signature=4504b341305adc17e28b24ce7d340e0c395b662bc7e991a6d2c856a991c17fd2&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dhelmfile_linux_amd64&response-content-type=application%2Foctet-stream>">redirected</a>.</body></html>'
Any ideas?
Your first curl command has the -L
switch, the second doesn't. That's why your downloaded file contains html stating you are being redirected; curl only follows the redirect when you add the -L flag.
@TBeijen is correct - you need to follow redirects.
My “go to” for curl
arguments is -fsSL
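i.e. the helmfile download line above could be collapsed to something like:
RUN curl -fsSL -o /usr/bin/helmfile \
      https://github.com/roboll/helmfile/releases/download/v0.98.2/helmfile_linux_amd64 && \
    chmod +x /usr/bin/helmfile && \
    helmfile --version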
also, I see you’re using alpine. We distribute helmfile
for alpine here: https://github.com/cloudposse/packages
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages
# Install the cloudposse alpine repository
ADD https://apk.cloudposse.com/[email protected] /etc/apk/keys/
RUN echo "@cloudposse https://apk.cloudposse.com/3.11/vendor" >> /etc/apk/repositories
then apk add helmfile@cloudposse
Thanks all super helpful. Will give it a go and report back
Hi! I am attempting to use the exec functionality in gotmpl as part of helmfile, to try and pull a value from an external source. I am doing the following in a values.gotmpl file
external_pass: {{ toJson (exec "./vault-show" (list "secrets" "stage")) }}
global:
appConfig:
incomingEmail:
password:
secret: {{ .external_pass }}
However I get an error saying
executing "stringTemplate" at <.external_pass>: can't evaluate field external_pass in type state.EnvironmentTemplateData
you’ve a chicken-and-egg problem here
where you need .external_pass
defined in the go template in order to render the yaml, but to load the yaml you need to evaluate the go template
you could try this
{{ $external_pass := toJson (exec "./vault-show" (list "secrets" "stage")) }}
global:
appConfig:
incomingEmail:
password:
secret: {{ $external_pass }}
err
I have also tried
.Values.external_pass
for the secret field
2020-01-29
I just started looking at the cloudposse helmfiles repo, but I keep getting errors. Is there a recommended combination of versions of the repo/helm/helmfile/etc that is known to work well? I had bizarre helmfile/helm version mismatch issues which required me to downgrade helm from 3.0.2 to 3.0.0. Then I discovered that the config in the repo is trying to pull content from github which no longer exists; it looks like coreos renamed files in https://github.com/coreos/prometheus-operator/tree/master/example/prometheus-operator-crd . I updated those URLs, but now I'm hitting this: error: error validating "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml": error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "preserveUnknownFields" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionSpec; if you choose to ignore these errors, turn validation off with --validate=false
Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes - coreos/prometheus-operator
@David Nolan - your best bet is to fork and use our helmfiles as a starting off point.
I'm trying, but I can't seem to get the Rube Goldberg machine that is helm/helmfile to work. I was hoping this would be a good place to start on diving into the world of helm.
haha
A friend and I were musing that there just isn’t a really good corpus of k8s example deployments, and I was saying I wanted the equivalent of the cloudposse terraform repos… and then I found your helmfiles repo and hope sprung eternal…
This is the “goal” but I would say it’s far from it right now. The problem is deploying many of these will be totally different if using EKS, Azure, or Digital Ocean.
(we used kops)
Also, if you use Kiam or not
Also, very frequently, helm releases need backing services; those are deployed with terraform (our modules). But those backing services will differ by cloud.
sorry - the helmfiles are not as portable as our modules
the fact we pinned to master is wrong - we shouldn’t have done that
we haven't redeployed prometheus since they moved things around
hence we haven’t been bitten by it
I'll probably send you a PR with some updates based on that coreos repo structure change. Right now I'm trying to figure out what's throwing the error about validation. Might be because I'm trying to use an EKS cluster, maybe it's missing some extensions.
Yes, so most of our client engagements have been on kops. We’re working right now some for EKS. I would expect some updates to our helmfiles for better EKS support in the coming months. That said, you probably can’t wait that long!
I’m just messing around in free time, the job I’m starting in two weeks uses helm so I figure its something I should learn.
the CP modules for EKS made getting that up trivial
thanks! glad that worked well.
agree that now we just need the samething for helm services on top of it.
I sort of hope that we can get the terraform-helmfile-provider
to a place where it can help us bridge the gap. To date, we’ve just not had enough time to get back to it.
what [prometheus-operator] fix change in prometheus-operator crd yaml locations [prometheus-operator] add podmonitors crd why the url to install crd yamls have changed (currently a 404 Not Found…
Ah, my testing had missed the addition of another CRD file. Nice. Sadly I still hit the validation error, but I think I have a lead on that…
It's an incompatibility with k8s 1.14 (which is what my EKS cluster is running)
I think I’ll need to grab an older version. They recently merged a commit that should fix the backwards compatibility, but haven’t regenerated the CRD files it appears.
Pinning to v0.34.0 seems to be working so far… at least helmfile is still running
ah im on k8s 1.15
EKS only supports 1.14 unfortunately
Do helmfile environments not support having a
set:
stanza? Just a
values:
stanza only?
set is supported as well:
set:
# single value loaded from a local file, translates to --set-file foo.config=path/to/file
- name: foo.config
  file: path/to/file
# set a single array value in an array, translates to --set bar[0]={1,2}
- name: bar[0]
  values:
  - 1
  - 2
please see https://github.com/roboll/helmfile#configuration for more info
Deploy Kubernetes Helm Charts - roboll/helmfile.
hmm ok I am seeing an error like
in ./helmfile.yaml: failed to read helmfile.yaml: reading document at index 1: yaml: unmarshal errors:
line 48: field set not found in type state.EnvironmentSpec
it isn’t supported under environments
. environment values are a completely different concept from helm chart values
ok I think that makes sense. So I could just put everything I was going to set into an environment specific values.yaml and then load that in the particular environment?
then you should either render helm chart values according to the env, or select the appropriate helm chart values file according to the env name
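A minimal sketch of the second option (the chart and file layout are placeholders):
releases:
- name: myapp
  chart: stable/myapp
  values:
  - values/{{ .Environment.Name }}.yaml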
Hey all, I’ve got a list of ip addresses stored as a comma separated list in aws ssm parameter store. I’m trying to get the value and split it into a list:
{{ $extIps := "secretref+awsssm://path/to/VAR?region=us-west-1" }}
externalIPs:
{{ range splitList "," $extIps }}
- {{ . }}
{{ end }}
But this is giving me:
externalIPs:
- 127.0.0.1,0.0.0.0
Does anyone know if it’s possible to get it to render as a list of ip addresses like this?
externalIPs:
- 127.0.0.1
- 0.0.0.0
Comprehensive Distribution of Helmfiles for Kubernetes - cloudposse/helmfiles
here’s how we used it
That makes sense, but it appears my issue is with the secretref+awsssm part. I can get what I want if I set $extIps
to the string "127.0.0.1,0.0.0.0"
but when I try to do something with the result of the call to aws ssm it doesn’t seem to work. Maybe a rendering order thing?
you can’t use it within go template variables as secrets are retrieved and replaced with references after the values file is loaded as yaml
and the go template rendering happens before it’s loaded as yaml
maybe you can make it work with combining environment values and chart values, like having a env values file like this:
extIps: secretref+awsssm://path/to/VAR?region=us-west-1
And in chart values gotmpl:
externalIPs:
{{ range splitList "," .Environment.Values.extIps }}
- {{ . }}
{{ end }}
Unfortunately that’s how I started out, I’ve tried various combinations of rendering in a values.gotmpl file and in the helmfile. Looks like I’ll need to take another approach.
Thanks for confirming how it works. That is very helpful.