#helmfile (2019-03)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2019-03-21

FYI: Use raw/incubator to add adhoc templated resources to your helm release
https://github.com/roboll/helmfile/issues/494#issuecomment-474697430
Let me start with describing the issue I am trying to solve. Many (official) Helm charts allow you to specify annotations on object descriptions (e.g. ingress descriptions). Ingress controllers sup…

Yes love that chart!


Comprehensive Distribution of Helmfiles. Works with helmfile.d
- cloudposse/helmfiles

I think I first discovered the chart while reading your commits in some cloudposse-related repo. Thanks for that!

i’ve just added a templates setting to the chart, which helps create an adhoc chart that creates a k8s secret with helm-secrets https://github.com/mumoshu/charts/blob/56d5b63d2998bcd519b52639b586d896b9016633/incubator/raw/README.md#templated-resources
Curated applications for Kubernetes. - mumoshu/charts
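A rough sketch of what that templates setting enables from a helmfile release (the chart reference, file names, and the secret are made up for illustration; see the raw chart’s README for the exact semantics):

releases:
- name: adhoc-resources
  chart: incubator/raw
  secrets:
  - secrets.yaml        # decrypted via helm-secrets and merged into the chart’s .Values
  values:
  - raw-values.yaml

where raw-values.yaml (a plain values file rendered by the raw chart, not by helmfile) contains:

templates:
- |
  apiVersion: v1
  kind: Secret
  metadata:
    name: adhoc-secret
  stringData:
    password: {{ .Values.password }}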

Brilliant!

@Igor Rodionov


So this came up again this week. We are now running Helmfile under Atlantis for a customer

For now, we are downloading a tarball from GitHub, but would love to do remote sources

Thinking maybe to have @Andriy Knysh (Cloud Posse) submit a PR

Any suggestions for him?

I think we have a potential need to support a lot of sources

Our objective is to version pin Helmfiles like we do terraform modules

Sounds great

The risk is of course it is incompatible with some capabilities of Helmfile

Is that acceptable?

Do you have any concrete example(s) of the incompatibility? (I had none)



This was what I was thinking of…

Let me recall..

Ah, so we usually have a companion values.yaml(.gotmpl) file that is intended to be stored and used along with the referring helmfile.yaml

in certain cases where you want to read a remote helmfile.yaml, you also want its companion values.yaml files to be fetched and used along with it

Could we generalize what it means to “open and read” a file?

So that a file could be really just anywhere

E.g. use curl

Which supports every scheme imaginable

libcurl

libcurl go binding

If no scheme is specified, we default to file://

sounds good

@Andriy Knysh (Cloud Posse) project for you

so if you specify <https://example.com/foo/bar/helmfile.yaml>, what might happen?

downloading only helmfile.yaml means you’re unable to read any dependent values.yaml(.gotmpl) from the remote helmfile.yaml

Then it is up to the remote Helmfile to use a well-formed URL if it should be a companion

Or if not, then it would use the local file

so do we need to download <https://example.com/foo/bar>, extract everything into a local dir, then read helmfile.yaml in it?

Thanks Erik :)

So for example: referencing “values.yaml.gotmpl” maps to “file://./values.yaml.gotmpl”
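Purely to illustrate the idea being kicked around here (none of this syntax existed at the time): a path without a scheme would keep today’s behavior, while a path with an explicit scheme would be fetched remotely.

helmfiles:
- helmfile.d/istio.yaml                          # no scheme, defaults to file:// (today's behavior)
- https://example.com/foo/bar/helmfile.yaml      # hypothetical: fetched via the URL's scheme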

(On my phone)

Then it is up to the remote Helmfile to use a well-formed URL if it should be a companion
does that mean you need to, in your remote helmfile.yaml, turn every path to values.yaml into something like <https://path/to/your/values.yaml>?

Yes, if the implication is that all deps are remote

But for values, that’s a perfect example of what we would usually customize per customer

We don’t want that to be remote

That’s their flavor

hmm.. i’m still unsure

Comprehensive Distribution of Helmfiles. Works with helmfile.d
- cloudposse/helmfiles

So here is how we use it

would your customer customize the remote helmfile.yaml fetched by helmfile?

We provide these sample overrides

or just values.yaml?

But we would never want these defined by us

These are what end users ultimately set to customize

if it’s the latter, how does your customer pass the custom values.yaml to the remote helmfile.yaml without forking it?

So the remote Helmfile loads a local values file

(I don’t fully understand the code implications of what I am saying)

helmfile finds values.yaml relative to the helmfile.yaml

as of today.
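To spell that out with a tiny illustrative layout (paths are made up): each helmfile.yaml resolves its values files relative to itself, not relative to the root helmfile or the working directory.

.
├── helmfile.yaml            # root; pulls in apps/helmfile.yaml via helmfiles:
└── apps/
    ├── helmfile.yaml        # references values.yaml.gotmpl
    └── values.yaml.gotmpl   # resolved relative to apps/helmfile.yaml, not the root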

Oh

Yes yes

for reproducibility of helmfile deployment

But the root Helmfile or each included Helmfile?

I see what you are getting at.

I just always assumed it was relative to the root Helmfile or pwd

I look at the values.yaml like terraform.tfvars

So I guess I see it all relative to my current working directory

But the root Helmfile or each included Helmfile?
each included helmfile.yaml

Oh

Yea not sure how to reconcile that

That gets messy

I mean, I get why it works that way today for consistency

that way, you can even helmfile sync the sub-helmfile alone

Yes

Maybe a new keyword

Like base_url

but i do understand it isn’t intuitive to some people, and it doesn’t work straightforwardly when you want a library helmfile.yaml

(Thinking like the base tag in html)

yep, a new keyword may work
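For flavor, the hypothetical keyword might look something like this (base_url is only what was floated above; nothing like it existed in helmfile at the time):

base_url: https://example.com/helmfiles/
releases:
- name: nginx-ingress
  chart: stable/nginx-ingress
  values:
  - values/nginx-ingress.yaml   # would resolve against base_url instead of the local directory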

@Andriy Knysh (Cloud Posse) sounds like we have some more thinking to do on this


do we probably need a way to override the helmfile env..?
helmfiles:
- source: <https://path/to/helmfile.yaml>
  overrides:
    environments:
      customprod:
      - myownprod.yaml
so that helmfile --environment customprod helmfile.yaml works against the remote helmfile.yaml, even though customprod isn’t defined in the remote helmfile.yaml?

this defeats the reproducibility of sub-helmfiles. but does allow us to use the remote helmfile.yaml as a reusable, customizable package

like a helm chart

@Igor Rodionov

Created an issue to track it before I forget https://github.com/roboll/helmfile/issues/523
This is similar to #361, but for helmfile.yaml. You can write helmfile.yaml containing one or more sub-helmfiles that is processed before releases, in helmfiles: environments: default: values: - en…

Anyway, what I’ve been thinking recently is: we generally need the ability to (1) pin and (2) update any dependency. A semver constraint in the source spec, and a command to fetch the latest version that satisfies the constraint, would be useful

after that, we can add any datasource users want…

as long as it supports version pinning and updating based on semver
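As a rough sketch of what a pinnable source spec could look like (the version field and the companion update command are hypothetical, not existing helmfile features):

helmfiles:
- source: https://example.com/foo/bar/helmfile.yaml
  version: "~> 1.2"   # hypothetical semver constraint; an update command would refresh the resolved version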

anydep was my attempt to achieve that https://github.com/mumoshu/anydep
General-purpose project/application dependency manager - mumoshu/anydep

Oh yes, I recall you mentioning this to me
2019-03-22

@Erik Osterman (Cloud Posse) Can I add a link to this channel in the helmfile readme?

@mumoshu please do!

We also have an archive of the channel publicly available here: https://archive.sweetops.com/helmfile/
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

@mumoshu or anyone else I’m getting more interested in this or something similar due to some requests from my teams - https://github.com/roboll/helmfile/issues/483
Summary In order to maintain predictable deployments, as developer I want to generate and use lock file for all chart versions retrieved from a helmfile. Rationale We have an environment git repo f…

I’ve made a quick proposal for it!
https://github.com/roboll/helmfile/issues/483#issuecomment-476030604
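To make that concrete, a lock file along these lines could record the exact chart versions resolved from looser constraints in helmfile.yaml (this format is purely illustrative, not the proposal’s actual format):

# helmfile.lock (illustrative only)
releases:
- name: nginx-ingress
  chart: stable/nginx-ingress
  version: 1.3.1   # exact version resolved from a looser constraint like "~> 1.3"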

Perhaps we can import the relevant helm pkg from upstream, so that we don’t need to shell out to helm search..? But it may not be worth the effort

If we could decide on a design that also achieves what I need to do I can probably do the PR.

I think a locking mechanism with helm is desperately needed

i think a locking mechanism in helmfile would help, but where we need it most is when we deploy our own apps, which can happen dozens of times a day

(E.g. unlimited staging environments)

the problem is that if helm is executed twice in short succession to “upgrade” an environment, the outcome is unpredictable

we use helmfile for those deployments as well, however the pipeline is different

and the pipeline isn’t always executed by atlantis; sometimes codefresh

so I see the actual fix as being some kind of locking mechanism that is in k8s, not on the filesystem

@Shane

a helm upgrade is predictable if you don’t run repo update

but ya in codefresh you would need to get a new index

so concurrent helm upgrades on the same release are predictable?

where is the locking happening?

Whenever we run helm standalone we always specify a version so we don’t have your issues.

e.g. helm upgrade --wait, i see that affected

interesting, never had a use case for a concurrent upgrade against the same helm release

well, we do not want concurrent upgrades

just if we are deploying an app on every commit to a branch, those can happen very close to each other

one upgrade starts before the other one finishes and they both are in --wait state

ahh ya, that’s an entirely different issue than what a lockfile would accomplish

You want helm to block.

(or helmfile)

but in a distributed manner


I do agree a change lock should be available.


we do all our app deploys also with helmfile; we call it the “helm cartridge” strategy. basically with every app, we ship a helmfile.yaml inside of it that describes how to deploy the app.

“inside of it” = “inside of the docker image”
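A minimal sketch of that “cartridge” idea, assuming the pipeline passes the image tag via an environment variable (the release name, chart path, and variable name are illustrative; requiredEnv is a helmfile template function):

# helmfile.yaml baked into the app's docker image
releases:
- name: my-app
  chart: ./chart
  values:
  - image:
      tag: {{ requiredEnv "IMAGE_TAG" }}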


that said, i don’t personally have any complaints about adding a lock file

…and we’ll probably use the flag if it exists

just it doesn’t solve all our headaches


I think what would solve our problems is terraform-helmfile-provider

then we can piggy back on all the capabilities of terraform

and all the power/flexibility of helmfile

and we’d get locking to boot (since terraform already supports state locking)

@mumoshu first planted this seed in my head

@Shane how’s that for a weekend project?

haha

terraform solves your locking problem, but I personally don’t know if I see the value in other things terraform can do.

soooooo what about creating that IAM role for your service so it can write to S3

what about creating that DNS zone for your external-dns

what about creating that ElasticSearch cluster for your kibana UI

it’s interesting if we can provision the backing services needed to deploy things in k8s

k8s handles a very narrow slice of automation, the rest is handled by tf





there are a few examples of where we wrote the terraform modules for provisioning the backing services needed by apps deployed with helmfile

Sure, but no specific reason to couple terraform infrastructure and terraform for services. You certainly can, but does not mean they need to be in one location.

Essentially we have a process that runs terraform and a second process that runs helmfile.

yea, we do something similar

certainly coupling them is another option.

just orchestrating the relationships between them is not “solved”

But from our design I don’t view it as preferred to couple them.

If anything I would rather helmfile pull from a terraform state output.
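One hedged way to wire that up without coupling the two tools: export the terraform output in the pipeline and read it from a values template (requiredEnv is a helmfile template function; the variable and key names are made up):

# in the pipeline:
#   export APP_BUCKET=$(terraform output bucket_name)
# then in values.yaml.gotmpl:
bucketName: {{ requiredEnv "APP_BUCKET" }}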

what do you think about Terraform Operators?

Use K8s to Run Terraform. - rancher/terraform-operator

(experimental)

I think they are a massive ton of work to actually get functional

due to all of the migration edge cases where it would destroy your infrastructure.

lol

yes

I essentially wrote one for provisioning kubernetes, rds, s3 and a few other resources at my last job.

It was essentially a operator to provision cloud resources for our platform

It was …..painful

We essentially used terraform in the background, but we had to have specific migration plans when terraform was not backwards compatible between 2 versions.

overall we decided that using terraform to do that type of work was a disaster for us.

wow, interesting you already went down that path

it works fine if you have well-worn modules, but it’s not very good for adhoc “oh, I need to change X” situations

not like you can just rename resources and apply the terraform

you will break everything at least for the time being

i was concerned about those points. so basically, having purpose built operators in k8s might work, but a generalized operator for tf might be impractical for the reasons you mentioned

purpose built operator = etcd operator, mysql operator, etc

I took an interview with these guys recently and it sounds like they are doing exactly what you are talking about, just not in terraform - https://crossplane.io/
The open source multicloud control plane.

that looks interesting

I’m torn. I think what we want to achieve is hard to achieve e2e in terraform

ya, they are very very early. I hope they can accomplish what they are aiming for it would be great.

a higher order language is probably required.

ya, which is why they are not doing it in terraform, as the evolution is too hard.

Kubernetes Expert! Stakater offers companies a highway to Kubernetes adoption for their DevSecOps automation - [✩Star] if you’re using it! - Stakater

have you seen them?

nope

they are also like cloudposse

more k8s focused

some interesting projects, all open source
2019-03-24
2019-03-25

Is there any performance difference between helmfile sync & helmfile apply? Like if I am using helmfile in my CI, running on every commit, which is preferable?

apply is preferable, as it syncs only when there are changes to be applied
also see https://github.com/roboll/helmfile/issues/205#issuecomment-411811549
Any possibility of adding a feature to run the sync for releases in the helmfile that have differences. With the feature of helm diff I would imagine this would be something that would fit?

Agreed, but is helmfile apply any different from just helmfile diff && helmfile sync?

yep, the latter creates new helm releases even for unchanged ones, afaik

@mumoshu got it, thanks

in a CI/CD context, helmfile apply coupled with the installed flag is the way to go
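For reference, a tiny sketch of that combination (release name and chart path are illustrative): while installed is true, helmfile apply installs or upgrades the release; flip it to false and the same apply removes it.

releases:
- name: my-app
  chart: ./chart
  installed: true   # set to false and re-run helmfile apply to have the release removed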
2019-03-26
2019-03-27
2019-03-28

shouldn’t helmfile diff and helm diff hide diffs of secrets by default? please chime in on this issue if you agree
https://github.com/databus23/helm-diff/issues/128
helm-diff as of today show diffs of secrets, which results in exposing your secret values to stdout. This seems problematic as when used in CI, the user ends up unexpectedly leaves the secret value…

considering adding helmfile destroy while deprecating helmfile delete. wdyt?
https://github.com/roboll/helmfile/issues/511
I never understood why you would not want to purge everything upon delete. it seems that is may be used for troubleshooting but if you want to delete a release why would you want to troubleshoot an…

agree about --purge
2019-03-30

thx! i’ll add helmfile destroy in my next sprint on helmfile

….which would operate only on (installed: true) ?

no, i wasn’t considering doing so! i’m interested in what the benefit would be, though

was thinking that was maybe the closest to terraform destroy?

trying to understand

so “that with a state of installed should be destroyed”

ah thats an interesting idea! and sounds feasible

i’m wondering whether we need an ignored: true that is different than installed: true to express it

hm, maybe an overkill…

since the state in helmfile is not external (e.g. helmfile.state) but instead defined directly inside of the helmfile.yaml, the installed state is determined by installed: true

but yea, it gets confusing

just for flavor….


terraform plan has an argument, -destroy, which generates a plan to destroy



or another way of thinking about it:

helmfile apply + installed: false == helmfile destroy + installed: true

that equation does help me get the idea

now, i don’t know if that’s what the community wants

just logically that makes sense to me

so am i. and i think there’s no downside to making it as such

the nice thing is then terraform destroy undoes exactly what was created from terraform apply

… which is a nice way to iterate

agreed

you realize what you’re doing…. you’re making it so we can use something like https://github.com/gosuri/terraform-exec-provider

execute arbitrary commands on Terraform create and destroy - samsung-cnct/terraform-provider-execute

@Shane ever look into something like this?


I’m familiar with it, but never caught on in my designs.

we could have a generic provider in terraform that calls a command with apply or destroy, and maybe we can then use helmfile with terraform

I thought installed: true might better be ignored: false or enabled: true, but it turned out neither of them fits my mental model