#helmfile (2019-07)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2019-07-11
One more question about {{'{{}}'}} syntax.
Say, I set something like this in helmfile.yaml
values:
  - hostname: {{`{{.Release.Namespace}}`}}.my-domain.com
But this causes an error during helmfile lint: reading document at index 1: yaml: line 241: did not find expected key. Is this expected behaviour?
Btw, I’m on quite old version: v0.69.0
Answering my own question. Probably somebody will find this useful. That was explained previously here: https://sweetops.slack.com/archives/CE5NGCB9Q/p1560168577052300. I wasn’t very attentive at first. But got the answer right after sending the question.
chart: {{` {{ .Environment.Values | get (printf "%sVersion" "tpsvc-config") "" | eq "" | ternary "../.." "talend" }} `}}
evaluates to
chart: {{ .Environment.Values | get (printf "%sVersion" "tpsvc-config") "" | eq "" | ternary "../.." "talend" }}
which looks like chart: {{ whatever }} to the yaml parser
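For reference, the usual workaround (as far as I can tell) is to quote the escaped expression so that the text left behind after helmfile's first template pass is still valid YAML, e.g.:
values:
  # renders to: hostname: "{{ .Release.Namespace }}.my-domain.com", which the YAML parser accepts
  - hostname: {{`"{{ .Release.Namespace }}.my-domain.com"`}}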
does helmfile support creation of standalone kubernetes resources? for example if i add istio as a release and want to install a gateway, but without it being contained in a helmchart
@Mical Helmfile just calls helm. But you’re in luck…
There is a “raw” chart that does exactly what you want
Here is a example: https://github.com/cloudposse/helmfiles/blob/master/releases/external-storage.yaml
Comprehensive Distribution of Helmfiles. Works with helmfile.d - cloudposse/helmfiles
And we used it also to install an istio gateway
Here is an example of that: https://github.com/cloudposse/example-app/blob/master/deploy/releases/istio.yaml
Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app
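For illustration, a minimal sketch of such a release using the incubator "raw" chart (chart repo URL and resource spec are only an example here, not taken from the linked files):
repositories:
  - name: incubator
    url: https://kubernetes-charts-incubator.storage.googleapis.com

releases:
  # the "raw" chart simply renders whatever manifests you pass in under values.resources
  - name: istio-gateway
    namespace: istio-system
    chart: incubator/raw
    values:
      - resources:
          - apiVersion: networking.istio.io/v1alpha3
            kind: Gateway
            metadata:
              name: default-gateway
            spec:
              selector:
                istio: ingressgateway
              servers:
                - port:
                    number: 80
                    name: http
                    protocol: HTTP
                  hosts:
                    - "*"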
@Erik Osterman (Cloud Posse) thanks I’ll check it out!
What’s the road map for 1.0.0? We’re looking at different tools to use for decentralizing helm chart development while having a centralized way to test, package and continuously deploy systems containing a variety of charts. So far we’ve been looking at flux and helmfile. Flux is great but very opinionated while helmfile gives us the freedom of control since it is stateless. The fact that helmfile is still <1.0 is an issue for us for obvious reasons.
I think it’s unfair to judge based on the release version being pre 1.0. Terraform 0.12 is in use in production by enterprises/banks/etc. Obviously the uptick in traction is a wee bit less for helmfile. From my POV, better indicators to look at are how often the software is released (aka maintained), how responsive the maintainers are to issues, and the overall traction of the project/community.
when you look at these characteristics, then helmfile shines.
btw, @mumoshu has now written a HelmfileOperator
so you can do flux-like things with helmfile (that is a proof of concept though).
Kubernetes operator that continuously syncs any set of Chart/Kustomize/Manifest fetched from S3/Git/GCS to your cluster - mumoshu/helmfile-operator
2019-07-12
@Erik Osterman (Cloud Posse) that’s a valid point.
2019-07-15
Thanks Erik!
@Mical If I have anything to add, there would be only two things:
(1) Many orgs including my company use helmfile in production
https://github.com/roboll/helmfile/blob/master/USERS.md
(2) Even though it is pre-1.0, Helmfile has never introduced breaking changes to existing features for a year or so.
We do introduce breaking changes to experimental features but even those happen only after prior discussions with the known users of the feature.
Deploy Kubernetes Helm Charts. Contribute to roboll/helmfile development by creating an account on GitHub.
2019-07-18
can i get the exit code from hooks to propagate so that helmfile sync doesn’t exit with 0 on failure?
hey! generally helmfile exits with 1 when one of your hooks failed. for example, running helmfile sync against this helmfile.yaml exits with 1:
releases:
  - name: mysql1
    chart: stable/mysql
    namespace: mysql
    hooks:
      - name: myhook
        events: ["presync"]
        command: "sh"
        args:
          - -c
          - echo whatever; exit 1
does this help?
hm, it doesn’t exit with 1 if my helm tests fail. will try to get something that can be reproduced
oh. which hook events are you using?
postsync
ah, i’ve reproduced it. perhaps i’ve made it ignore non-zero exit codes for postsync hooks, thinking that was nicer
i’ll take this as a chance to redesign it
helmfile processes your releases in parallel. do you want all the ongoing releases to be immediately canceled if one of them failed in postsync? or complete all the releases anyway and fail helmfile itself only when one or more releases failed in postsync?
for me the latter would suffice, since there might be cases where you would want other releases to be processed but as long as we can tell if all passed or not i’m happy
but in the long run it might be nice to have that control yourself via configuration
Is there an easy way to patch this manually until there’s a new release available? It’s a deal breaker since I want to use helmfile in our ci/cd pipeline and I only have 2 more weeks until vacation
since you said that you made it ignore non-zero i was thinking there might be an easy way to fork helmfile and make it not ignore it
yeah probably. i’ll take a look now!
thank you
Just change this line to results <- syncResult{errors: []*ReleaseError{err}}
thanks man
ah wait we need a bit more work
ok
Try changing these lines to:
relErrs := []*ReleaseError{}
if relErr == nil {
    relErrs = append(relErrs, relErr)
}
if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
    st.logger.Warnf("warn: %v\n", err)
    relErrs = append(relErrs, &ReleaseError{err})
}
if len(relErrs) > 0 {
    results <- syncResult{errors: relErrs}
} else {
    results <- syncResult{}
}
thanks i’ll give it a try
make install is handy if you want to install the helmfile binary built from your local source to go/bin (usually ~/go/bin)
i’m on the hello world level of go so that’s helpful
helmfile -v to see the version number to verify you’re running the correct binary
then ensuring existence of go 1.12.x+ on your machine will also help. for me it’s like:
$ go version
go version go1.12.5 darwin/amd64
yeah i had go 1.11.x so i’m updating
great
if you’re using make install, ensure your $PATH contains the go/bin in it
s/updating/upgrading
pkg/state/state.go:425:46: cannot use err (type error) as type *ReleaseSpec in field value
pkg/state/state.go:425:46: too few values in &ReleaseError literal
try changing relErrs = append(relErrs, &ReleaseError{err}) to relErrs = append(relErrs, newReleaseError(release, err))
// updated
newReleaseError is not a type
argh! it should be newReleaseError(release, err)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb926e7]
would you mind giving me the rest of the message, especially the first several lines of stack trace?
full stack trace is too long to post but i’ll paste it in chunks
goroutine 1 [running]:
github.com/urfave/cli.HandleAction.func1(0xc00023ce28)
/home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:474 +0x287
panic(0xca5980, 0x1570ff0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/roboll/helmfile/pkg/app.context.wrapErrs(0xc00038c120, 0xc00000b0e0, 0xc000388a00, 0x3, 0x4, 0x15a2d70, 0x0)
/home/zmiccar/src/helmfile/pkg/app/app.go:627 +0x1c7
github.com/roboll/helmfile/pkg/app.context.clean(0xc00038c120, 0xc00000b0e0, 0xc000388a00, 0x3, 0x4, 0x3, 0x4)
/home/zmiccar/src/helmfile/pkg/app/app.go:614 +0x13e
github.com/roboll/helmfile/pkg/app.(*App).visitStates.func1(0xc0000d5da9, 0xa, 0xc000038700, 0x35, 0x0, 0x0)
/home/zmiccar/src/helmfile/pkg/app/app.go:334 +0x878
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles.func1(0xc00003c2a0, 0x2c)
/home/zmiccar/src/helmfile/pkg/app/app.go:220 +0x9d
github.com/roboll/helmfile/pkg/app.(*App).within(0xc00038c120, 0xc0000d5da0, 0x8, 0xc00004c080, 0xc000258148, 0x2)
/home/zmiccar/src/helmfile/pkg/app/app.go:181 +0x3f6
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles(0xc00038c120, 0xc0000d5da0, 0x13, 0xc000090180, 0x0, 0xdbb2f4)
/home/zmiccar/src/helmfile/pkg/app/app.go:214 +0x29f
github.com/roboll/helmfile/pkg/app.(*App).visitStates(0xc00038c120, 0xc0000d5da0, 0x13, 0x15a2d70, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0000386c0, ...)
/home/zmiccar/src/helmfile/pkg/app/app.go:255 +0xd4
github.com/roboll/helmfile/pkg/app.(*App).visitStates.func1(0xdad353, 0xd, 0xc0000be3f0, 0x23, 0x0, 0x0)
/home/zmiccar/src/helmfile/pkg/app/app.go:314 +0x5ad
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles.func1(0x0, 0xc0002585b0)
/home/zmiccar/src/helmfile/pkg/app/app.go:220 +0x9d
github.com/roboll/helmfile/pkg/app.(*App).within(0xc00038c120, 0xda48d4, 0x1, 0xc000388780, 0xc0002587a8, 0x2)
/home/zmiccar/src/helmfile/pkg/app/app.go:162 +0x725
github.com/roboll/helmfile/pkg/app.(*App).visitStateFiles(0xc00038c120, 0x0, 0x0, 0xc0000d15c0, 0x20, 0xcf6da0)
/home/zmiccar/src/helmfile/pkg/app/app.go:214 +0x29f
github.com/roboll/helmfile/pkg/app.(*App).visitStates(0xc00038c120, 0x0, 0x0, 0x15a2d70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/zmiccar/src/helmfile/pkg/app/app.go:255 +0xd4
github.com/roboll/helmfile/pkg/app.(*App).VisitDesiredStatesWithReleasesFiltered(0xc00038c120, 0x0, 0x0, 0xc0001c5f60, 0xc000258901, 0xc00038a600)
/home/zmiccar/src/helmfile/pkg/app/app.go:403 +0x426
github.com/roboll/helmfile/pkg/app.(*App).ForEachState(0xc00038c120, 0xc00038a600, 0xc0001c5f50, 0xd89580)
/home/zmiccar/src/helmfile/pkg/app/app.go:349 +0x81
github.com/roboll/helmfile/pkg/app.(*App).Sync(0xc00038c120, 0xf50940, 0xc0001c5f50, 0xc0001c5f50, 0xcd1560)
/home/zmiccar/src/helmfile/pkg/app/app.go:125 +0x6e
main.main.func7(0xc00038c120, 0xc0002fd400, 0x0, 0xc0001c5f30, 0x0)
/home/zmiccar/src/helmfile/main.go:272 +0x6d
main.action.func1(0xc0002fd400, 0x0, 0x0)
/home/zmiccar/src/helmfile/main.go:548 +0x121
reflect.Value.call(0xc58980, 0xc0001c5b00, 0x13, 0xda52b3, 0x4, 0xc000258dc8, 0x1, 0x1, 0xc0000d6000, 0x411d03, ...)
/usr/local/go/src/reflect/value.go:447 +0x461
reflect.Value.Call(0xc58980, 0xc0001c5b00, 0x13, 0xc000258dc8, 0x1, 0x1, 0xc0002efa00, 0xc0002efa48, 0x140)
/usr/local/go/src/reflect/value.go:308 +0xa4
github.com/urfave/cli.HandleAction(0xc58980, 0xc0001c5b00, 0xc0002fd400, 0x0, 0x0)
/home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:483 +0x1ff
github.com/urfave/cli.Command.Run(0xda5963, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0xdd089d, 0x43, 0x0, ...)
/home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/command.go:186 +0x8d1
github.com/urfave/cli.(*App).Run(0xc00009ca80, 0xc0000bc020, 0x2, 0x2, 0x0, 0x0)
/home/zmiccar/go/pkg/mod/github.com/urfave/[email protected]/app.go:237 +0x601
main.main()
/home/zmiccar/src/helmfile/main.go:397 +0x2835
ahh if relErr == nil { must be if relErr != nil {
to wrap up, you should change these lines to:
relErrs := []*ReleaseError{}
if relErr != nil {
    relErrs = append(relErrs, relErr)
}
if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
    st.logger.Warnf("warn: %v\n", err)
    relErrs = append(relErrs, newReleaseError(release, err))
}
if len(relErrs) > 0 {
    results <- syncResult{errors: relErrs}
} else {
    results <- syncResult{}
}
patch in case someone is interested in the tmp fix:
diff --git a/pkg/state/state.go b/pkg/state/state.go
index 99818cd..b0b0849 100644
--- a/pkg/state/state.go
+++ b/pkg/state/state.go
@@ -415,14 +415,20 @@ func (st *HelmState) SyncReleases(affectedReleases *AffectedReleases, helm helme
}
}
- if relErr == nil {
- results <- syncResult{}
- } else {
- results <- syncResult{errors: []*ReleaseError{relErr}}
+ relErrs := []*ReleaseError{}
+ if relErr != nil {
+ relErrs = append(relErrs, relErr)
}
if _, err := st.triggerPostsyncEvent(release, "sync"); err != nil {
st.logger.Warnf("warn: %v\n", err)
+ relErrs = append(relErrs, newReleaseError(release, err))
+ }
+
+ if len(relErrs) > 0 {
+ results <- syncResult{errors: relErrs}
+ } else {
+ results <- syncResult{}
}
if _, err := st.triggerCleanupEvent(release, "sync"); err != nil {
2019-07-22
hey, just joined the slack, but been using helmfile for 6+ months; thanks a lot for making it
i’ve got a quick question: are environments: [] propagated to children helmfiles: []? looks like they aren’t from my experiments, and not sure if it’s by design
hey! you can selectively inherit values
https://github.com/roboll/helmfile/issues/725#issuecomment-506101418
We are trying to use helmfile in our pipeline. For this we hoped to use a parent helmfile (with repository configuration and helm-defaults) and subhelmfiles that INHERIT from this master helmfile. …
if you want an easier way to inherit all the values, it isn’t implemented yet, but here’s the feature request https://github.com/roboll/helmfile/issues/762
Helmfile doesn't inherit values to sub-helmfiles by default today. It does support selectively inheriting some values(#725), but there's no easy way to inherit all the values. Perhaps it wo…
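As a rough sketch of the selective inheritance mentioned above (per #725; file names and keys here are made up), a parent state file can pass values down to each sub-helmfile entry:
# parent helmfile.yaml
helmfiles:
  - path: releases/istio.yaml
    values:
      # inline values and value files are both accepted here
      - domain: example.org
      - environments/{{ .Environment.Name }}.yaml

# releases/istio.yaml can then read the inherited value, e.g.
# host: ingress.{{ .Environment.Values.domain }}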
also trying stuff out with bases: [], but i’m probably getting something wrong
bases implicitly inherit all the values from the parent and sub-helmfiles. but there’s a plan to make it explicit
Extracted from #347 (comment) We've introduced bases a month ago via #587. I'd like to make this breaking change(perhaps the first, intended breaking change in helmfile) before too many peo…
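If it helps, this is roughly what the bases pattern under discussion looks like (paths and contents are illustrative): a shared environments file that each sub-helmfile pulls in.
# environments.yaml (shared base)
environments:
  default:
    values:
      - defaults.yaml
  staging:
    values:
      - staging.yaml

# sub-helmfile.yaml
bases:
  - ../environments.yaml

releases:
  - name: myapp
    chart: stable/mysql
    namespace: myapp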
Hi, Hoping someone knows off the top of their head why this wouldn’t work
templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.1
    values:
      - app
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml
releases:
  - name: my-dp
    <<: *dataproduct
  - name: another-dp
    <<: *dataproduct
mind sharing the error message you’re seeing?
nm im reading
Specifically
- app:
    name: {{`"{{.Release.Name}}"`}}
It seems to be an issue with nesting values because if I change it to
values:
  - appName: {{`"{{.Release.Name}}"`}}
it doesn’t fail
The error is
YAML parse error on cdp-chart/templates/rbac.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Release.Name":interface {}(nil)}
@Ben could you copy-paste your actual template once again? I see a trailing : is missing after app in https://sweetops.slack.com/archives/CE5NGCB9Q/p1563858986015700
anyways this seems to work without emitting such error on my machine:
templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.1
    values:
      - app:
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml
releases:
  - name: myapp
    chart: stable/mysql
    namespace: myapp
    <<: *dataproduct
@mumoshu Sorry, yes the original has the colon
environments:
  default:
    values:
      - default.yaml
  production:
    values:
      - production.yaml
templates:
  dataproduct: &dataproduct
    namespace: dataproduct
    chart: chartmuseum/cdp-chart
    version: 1.0.0
    values:
      - app:
          name: {{`"{{.Release.Name}}"`}}
      - {{ .Environment.Name }}.yaml
releases:
  - name: california-sos
    <<: *dataproduct
  - name: rdc
    <<: *dataproduct
helm version
Client: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
@Ben Which helmfile version are you using?
> helmfile -v
helmfile version v0.80.1
thx
I’m doubting something is wrong with your cdp-chart/templates/rbac.yaml. Could you share it?
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "{{ .Values.app.name }}-config"
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - secrets
    resourceNames:
      - "{{ .Values.app.name }}-secret"
      - "gitlab-registry"
      - "{{ .Values.app.name }}-tls"
    verbs:
      - get
{{- end -}}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ .Values.app.name }}
subjects:
  - kind: ServiceAccount
    name: {{ .Values.app.name }}
    namespace: {{ .Values.app.namespace }}
Also - enabling debug logs like helmfile --log-level=debug allows you to see the yaml after rendering the template, which would help debugging
Actually the error states it occurs in serviceaccount.yaml, which looks like
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
  labels:
    app: {{ .Values.app.name }}
{{- end -}}
ah sry I missed https://sweetops.slack.com/archives/CE5NGCB9Q/p1563859105017700
indeed this can be a bug in helmfile!
ahh it isn’t rendering nested strings.. i’ll open an issue for that
@Ben Is this a blocker to you? (I’ll prioritize this accordingly if so)
Thanks @mumoshu It’s not a blocker but would definitely remove a lot of boilerplate config for us
Thanks! I’ll try to fix it asap. Here’s the issue https://github.com/roboll/helmfile/issues/769
This is reported in our official Slack channel: https://sweetops.slack.com/archives/CE5NGCB9Q/p1563863513024900 This doesn't work, leaving "{{.Release.Name}}" not rendered: releases: …
Much appreciated @mumoshu
2019-07-23
Do you have any knowledge of how people are doing continuous deployment (or nearly CD with some manual gates before running sync) using helmfile? On the surface it seems simple; build, test, publish docker image, update helmfile with new image tag (or chart version if publishing a new chart) and run sync. However, it gets much more complicated once you have to deploy to multiple environments/tenants and also stage helmfile changes. Just wondering if you’ve seen any good approaches?
We run Helmfile under atlantis
We centralize our Helmfiles in a repo
Then use remote Helmfiles to pull them in pinned to a release (kind of like terraform)
Thanks @Erik Osterman (Cloud Posse) The thing I’m struggling with is how to update centralised helmfiles in an automated way. At the moment I can only see one option; Each microservice git project’s build pipeline creates/publishes docker image, clones the helmfile repo, creates a branch, using a script (sed, etc) updates the chart version or values, commits changes. Then another pipeline checks for helmfile repo changes and deploys to an environment based on branch name (feature/nnn, release/nnn, master). Once the helmfile sync is finished, a tester tests the system in a test env and if happy merges the helmfile repo branch into master. The same pipeline detects the change in git and deploys to prod because changes are now on master. Does that seem reasonable? How does your process differ from this?
in our model, we run one repo per AWS account.
we build one container per account based on geodesic
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
we have a /conf/helmfiles/helmfile.yaml that looks like this
# Ordered list of releases.
helmfiles:
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/reloader.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/cert-manager.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/prometheus-operator.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/kiam.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/external-dns.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/aws-alb-ingress-controller.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/kube-lego.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/nginx-ingress.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/heapster.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/dashboard.yaml?ref=0.48.0"
  - path: "git::https://github.com/cloudposse/helmfiles.git@releases/codefresh-account.yaml?ref=0.48.0"
this allows us to surgically version pin individual accounts
our docker images inherit from geodesic like this: https://github.com/cloudposse/testing.cloudposse.co/blob/master/Dockerfile
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
testing.cloudposse.co represents one AWS account.
we do the same sort of thing for prod.cloudposse.co, staging.cloudposse.co
in our case, we do this for our customers so our *.cloudposse.co repos are out of date
Thanks @Erik Osterman (Cloud Posse), Makes sense. Like the idea of specifying sub-helmfile versions. Do you manually update the helmfiles whenever a chart or docker image changes?
Yes, everything is deliberate
Pinning to subhelmfiles has been awesome. We don’t need to worry about breaking changes or forcing updates across environments.
@Ben We’ve got a single helmfile with all the releases for all the environments defined. It’s placed in a separate repo, not within the main codebase. Since we’ve got a set of stages (dev, several stagings, prod) and per-client installations we are leveraging the environments: feature extensively to make it DRY. The workflow is pretty manual indeed: one has to set a new version of the docker image and/or helm chart for a certain environment. To control things, a pull-request/merge-request workflow could be considered. Not the best approach for sure, but it kinda works for us right now.
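A minimal sketch of that kind of layout (names and values are hypothetical), where each environment's values file pins the image tag and releases read it back:
environments:
  dev:
    values:
      - envs/dev.yaml    # e.g. contains: myserviceTag: 1.2.3-rc1
  prod:
    values:
      - envs/prod.yaml   # e.g. contains: myserviceTag: 1.2.2

releases:
  - name: myservice
    chart: charts/myservice
    values:
      - image:
          tag: {{ .Environment.Values.myserviceTag }}
Promoting a version then comes down to editing the relevant env values file and running e.g. helmfile -e prod apply.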
Thanks @Andrew Nazarov Sounds similar to what I was envisioning. I think we’ll go down this path and if the manual updates become cumbersome we’ll try and automate them.
@Erik Osterman (Cloud Posse) Out of curiosity, what is this Atlantis thing?
Atlantis: Terraform Pull Request Automation
it lets us run plan (diff) and apply using github comments.
Thanks. It looks quite interesting. Am I right that you use TF to execute helmfile?
No, Atlantis can run freestyle steps
Cool, thank you.
2019-07-24
Would be nice to have condition: boolean on hooks, because sometimes you might want to run a hook only if installed: boolean is true on another release.. of course it can be wrapped by other logic but would be cleaner to have a flag for it
bugs me that helm test --cleanup does not output the logs before removal on error
Hiya. Currently I’m doing this:
releases:
  - name: abc-namespace
    chart: stable/magic-namespace
    set:
      - name: tiller.image.tag
        value: v{{ .Values.tillerVersion }}
  - name: xyz-namespace
    chart: stable/magic-namespace
    set:
      - name: tiller.image.tag
        value: v{{ .Values.tillerVersion }}
Is there a way to set tiller.image.tag as a default for all releases so that I don’t have to specify that set every time? I’ve seen you can use release templates, but I wonder if there’s an even more general way.
@Emanuel Hey! Unfortunately there’s only a marginally better way:
templates:
  setTillerImage: &setTillerImage
    name: tiller.image.tag
    value: v{{ .Values.tillerVersion }}

releases:
  - name: abc-namespace
    chart: stable/magic-namespace
    set:
      - <<: *setTillerImage
  - name: xyz-namespace
    chart: stable/magic-namespace
    set:
      - <<: *setTillerImage
would you mind opening a feature request if you need something better? thx!
can someone tell me if i’m not understanding environments: correctly and how it applies to releases. i have 2 files in helmfile.d:
istio.yaml
environments:
  staging:
releases:
  ...
app.yaml
environments:
  default:
  staging:
releases:
  ...
- if i run helmfile sync i expect only app.yaml releases to be synced
- if i run helmfile -e staging sync i expect both to be synced.
@Mical Your assumption is legit but I’m skeptical if I implemented helmfile as such
The general rule of Helmfile is that it has an empty default environment by default. And any reference to an undefined environment results in a failure.
That said, helmfile --env default diff or helmfile diff should process both yamls because they all refer to the default env. helmfile --env staging diff should also process both yamls as it refers to the staging env defined in both yamls
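A concrete illustration of that rule, assuming the two files from the question both declare the same environments (contents hypothetical):
# helmfile.d/istio.yaml
environments:
  default:
  staging:
releases:
  - name: istio
    chart: istio.io/istio

# helmfile.d/app.yaml
environments:
  default:
  staging:
releases:
  - name: app
    chart: ./charts/app
With both files declaring both environments, helmfile sync and helmfile -e staging sync each process both files; a file that doesn’t declare the requested environment fails instead of being skipped.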
What I dont understand yet is this:
https://sweetops.slack.com/archives/CE5NGCB9Q/p1563977112071400
because if i replace default with dev in my case and run helmfile -e dev sync then it only applies to app.yaml releases (edited)
If you have dev defined in both app.yaml and istio.yaml, they should work on the dev environment.
What happened to istio.yaml in your case? You encountered no error and got default loaded into istio.yaml?
My initial problem was that I did not want all releases to be part of the default environment. I only managed to do this with a conditional wrapped around everything which I didn’t really like. Now I’m only using explicit environments, trying to avoid default.
In the message you linked to I was talking about replacing default with an explicit dev environment.
Ah! So you wanted Helmfile to ignore all the releases defined in istio.yaml when in the default environment
Yes, but I realized that’s not how it works
Yep. Then replacing default with anything else should work
Good
Btw I see emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates these days
We use the pre-release version for dev/stg and release versions for other environments. Currently, helmfile deps only supports a single lockfile per helmfile. Something like this will not work with…
cc/ @Shane
So I’m considering to add first-class support for toggling releases per env(or even helmfile/state values)
That would be nice
releases:
  - name: istio
    chart: istio.io/istio
    if:
      environment:
        - dev
        - staging
@Mical If you had an idea or suggestion on the syntax, I’d appreciate it if you could share it
Sounds cool. What about just environment: like
releases:
  - name: istio
    chart: istio.io/istio
    environment:
      - dev
      - staging
?
Yeah, I would vote for a list of environments like @Andrew Nazarov proposed.
Our layout is essentially a global helmfile as our ops people are lazy(me). And a helmfile per team per environment. 90% of our global helmfile things are installed in all environments with the exception of jenkins being installed in only prod and a test service being installed in only dev. Of the two options above I like the second one better, but I still imagine a better method for all of this logic has to exist.
Possibly having a global helmfile, but where it includes sub helmfiles?
environments:
  prod:
    values:
      - prod/_environment/values.yaml
    secrets:
      - prod/_environment/secrets.yaml
helmfiles:
  - helmfiles/jenkins.yaml
That way you define the helmfile snippets and include them. Since we already have a list of environments in the helmfile itself, I would rather see the environments be first-class citizens.
Join us here: https://github.com/roboll/helmfile/issues/781
I'm seeing emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates for making releases optional for a subset of environments. To me, helmfile templates are n…
don’t want to wrap istio releases with {{ if eq .Environment.Name "staging" }}. or is it that default applies to all implicitly?
Basically you want to set values under envs and use it like:
environments:
  dev:
    values:
      - foo: aaa
  prod:
    values:
      - foo: bbb
releases:
  something: {{ .Environment.Values.foo }}
for helmfile -e dev apply something would be aaa
for helmfile -e prod apply something would be bbb
yeah that i know, was wondering about the default environment if that is applied to all releases
because if i replace default with dev in my case and run helmfile -e dev sync then it only applies to app.yaml releases
i expect i was abusing the default but thanks for input @Andrew Nazarov (chose some bad names in my example btw.. updated)
Actually, I thought environments don’t affect releases until you define something like the mentioned {{ if eq .Environment.Name "app" }}. What you are talking about is something new to me. I mean, that helmfile -e dev syncs only releases in the app.yaml. Is it documented somewhere?
Not sure if it is, but it works that way at least
I hope @mumoshu will comment.
but i need {{ if ne .Environment.Name "default" }} around my istio releases to exclude them from the default env
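For anyone following along, the wrapping pattern being referred to looks roughly like this (chart paths are illustrative):
releases:
  - name: app
    chart: ./charts/app
{{ if ne .Environment.Name "default" }}
  # only rendered for non-default environments
  - name: istio
    chart: istio.io/istio
    namespace: istio-system
{{ end }}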
Ah, then it’s expected)). At first I thought you somehow managed to do it without if .Environment.Name since you wrote “don’t want to wrap istio releases”
Thanks for the clarification.
I meant that i don’t want to wrap it with {{ if eq .Environment.Name "xxx" }} not {{ if ne .... }}
would prefer not having to add the not equals conditional though
2019-07-25
Hey, just started to experiment with Helmfile and besides some hiccups with helm-diff, I was wondering, is it possible to capture stdout from a command directly and use it in a template? E.g. something like {{ requiredEnv "PLATFORM_ENV" }}, maybe {{ getOutput "echo Hello" }}?
2019-07-30
@Yannis is that not what {{ exec }} does? (I’m also new to Helmfile)
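Roughly, the exec template function (the same one used in the bases globbing snippet later in this archive) runs a command and returns its stdout, so something along these lines should work — the key name is just an example:
releases:
  - name: myapp
    chart: ./charts/myapp
    values:
      # capture a command’s stdout into a chart value (hypothetical key)
      - gitSha: {{ exec "git" (list "rev-parse" "--short" "HEAD") | trim }}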
2019-07-31
I think I don’t fully understand how environments work. If I have
helmfiles:
  - ./*/helmfile.yaml
And I want to use helmfile -e production apply then does each of those subhelmfiles have to define the environment? Else you’d get err: no releases found that matches specified selector() and environment(production), in any helmfile
So what would that look like? Do I put this code at the top of each subhelmfile? There has to be a better way!
bases:
{{- range $_, $file := ( exec "sh" (list "-c" "echo ../environments/*yaml") | splitList " " ) }}
  - {{ trim $file }}
{{ end }}
does each of those subhelmfiles have to define the environment
Yep
There has to be a better way!
Definitely
If environments are global, I’d expect to define them in the parent helmfile only.
The point is that they aren’t global. To make each sub-helmfile modular, they are intentionally not global.
Btw, did I implement globbing in bases:? That would reduce the boilerplate to:
bases:
  - ../environments/*yaml
If it doesn’t work, it would be worth a feature request
My belief is that sub-helmfiles shouldn’t rely on environments so that they are modular and reusable.
Once you’ve removed all the environments from sub-helmfiles, https://github.com/roboll/helmfile/issues/762 will allow you to pass necessary helmfile values as template params of sub-helmfiles
Helmfile doesn't inherit values to sub-helmfiles by default today. It does support selectively inheriting some values(#725), but there's no easy way to inherit all the values. Perhaps it wo…
A --no-hooks option to helmfile would be nice.
I’m hearing - I’d appreciate it if you could write up a feature request with your use-case
Alternatively, would it make sense to extend our helmfile.yaml syntax to make hooks conditional depending on environment names or helmfile values?
That’s possible today with helmfile templates, but we’re having a discussion to make it possible without templates. For toggling releases without templates, we have https://github.com/roboll/helmfile/issues/781
I'm seeing emerging use of {{ if eq .Environment.Name "theenv" }} in helmfile.yaml templates for making releases optional for a subset of environments. To me, helmfile templates are n…
That would be nice, but I would like the option to disable hooks globally, for which I think a --no-hooks flag would make most sense. I can file an issue for it.
Thanks! Looking forward to reading it