#helmfile (2024-08)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2024-08-01
Helmfile release 0.167.0: congratulations to everyone who helped out with this release!
2024-08-13
Maybe I’m just having trouble finding the setup docs, but where should helmfiles go in an atmos project? Are they a component?
Atmos natively supports Helmfile components the same way as Terraform/OpenTofu components.
We used to use Helmfile components a lot.
See some examples here https://github.com/cloudposse/atmos/tree/main/examples/tests/components/helmfile
https://github.com/cloudposse/atmos/tree/main/examples/tests/stacks/catalog/helmfile
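For orientation, here is a minimal sketch of how a Helmfile component can be wired into an Atmos stack manifest (the component name, file path, and variables are hypothetical, not taken from the linked examples):

```yaml
# stacks/orgs/acme/plat/dev/us-east-2.yaml  (hypothetical stack manifest)
import:
  - catalog/helmfile/echo-server        # hypothetical catalog entry with defaults

components:
  helmfile:
    echo-server:                        # maps to components/helmfile/echo-server
      vars:
        ingress_host: echo.dev.example.com
```

Atmos then renders these vars into a values file and passes it to Helmfile when you run `atmos helmfile apply echo-server -s <stack>`.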
Currently our preferred way to provision Helm charts with Terraform is to use the https://github.com/cloudposse/terraform-aws-helm-release module.
Many EKS components that provision Helm charts use the helm-release module, e.g.
https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks
module "cert_manager" {
@Andriy Knysh (Cloud Posse) thanks! that example is great!
this allows us to have everything in Terraform - the core infrastructure components (VPC, EKS, etc.), and the components that are deployed on EKS clusters
The helm-release component allows provisioning external Helm Charts (from 3rd parties) as well as internal/custom charts, e.g. https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/actions-runner-controller
It's a good question. We will write more docs about how to set up helmfile for a new project.
I can help with the docs, @yxxhero. We should discuss where they will go.
2024-08-14
2024-08-15
Hi, follow-up question to ^. My understanding is that Atmos can run an EKS login command before running the helm commands. My question is: how do I specify which cluster and region in particular? Didn’t see this info in the examples or docs, but maybe I just missed it. Thanks!
@Andriy Knysh (Cloud Posse)
we have a few different scenarios here:
• Atmos does have this command: https://atmos.tools/cli/commands/aws/eks-update-kubeconfig. It does not run it automatically, but you can run it manually to update the kubeconfig from the cluster (“login” to the cluster): “Use this command to download kubeconfig from an EKS cluster and save it to a file.”
• If you are using Helmfile, then Atmos will execute the command aws --profile <profile> eks update-kubeconfig --name <cluster-name>, where <profile> is the AWS profile defined in atmos.yaml in the helmfile.helm_aws_profile_pattern section (Atmos processes all the tokens in the profile pattern and replaces them with the context vars like tenant, environment, stage, etc.), and <cluster-name> is the name of the EKS cluster defined in atmos.yaml in the helmfile.cluster_name_pattern section (again, Atmos replaces all the tokens with the context variables). See the helmfile settings in https://github.com/cloudposse/atmos/blob/main/examples/tests/atmos.yaml#L43 as an example.
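As a reference, here is a minimal sketch of what that helmfile section in atmos.yaml can look like (the key names are the Atmos helmfile settings referenced above; the pattern and path values are illustrative, not copied from the linked file):

```yaml
components:
  helmfile:
    # Where the Helmfile components live, relative to the repo base path
    base_path: "components/helmfile"
    # Run `aws eks update-kubeconfig` before executing helmfile commands
    use_eks: true
    # Where Atmos writes the downloaded kubeconfig
    kubeconfig_path: "/dev/shm"
    # Tokens like {namespace}, {tenant}, {environment}, {stage} are replaced with context vars
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
```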
• If you are using the https://github.com/cloudposse/terraform-aws-helm-release Terraform module to provision Helm Charts with Terraform (not using Helmfile), see the examples of EKS components that use the module: https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks and https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/external-dns/main.tf#L21. In this case Atmos does not access the EKS cluster and does not download the kubeconfig from it; the Terraform code does that. It uses the EKS cluster remote state (https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/external-dns/remote-state.tf#L1) to get the cluster eks_cluster_oidc_issuer_url (https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/external-dns/main.tf#L40), and uses the Terraform helm provider, which logs in to the cluster (https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/external-dns/provider-helm.tf, https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/external-dns/provider-helm.tf#L154) and provisions the Helm Chart.
@cricketsc
Thanks @Andriy Knysh (Cloud Posse), still going through your response but it’s quite helpful
If you are using Helmfile and the atmos helmfile diff/apply commands, look at the second bullet above.
2024-08-17
Hi community. Here is a little contribution to promote helmfile. I like it more every day, so I have written two posts. Let me know if you like them:
• https://medium.com/@pela2silveira/diving-deeper-into-helmfile-a3f77ba10d78
Exploring advanced concepts with this tool
Quick analysis of Terraform integration with Helm, and a superior approach using Helmfile.
2024-08-20
Hi, trying to figure out the data flow for Atmos + helmfile. How is the generated ….helmfile.vars.yaml file supposed to be consumed, if it is consumed at all? If it’s not supposed to be consumed, what does it do?
it’s exactly the same as for Terraform
Atmos generates a varfile for the component in the stack from the stack manifests, then executes the helmfile command and uses the --state-values-file argument to point to the generated varfile:
--state-values-file stringArray specify state values in a YAML file. Used to override .Values within the helmfile template (not values template).
You have a Helm Chart with some values (defined in values.yaml) and/or helmfile environment files with some values, and Atmos will generate the additional varfile and instruct Helmfile to use it.
so the final values for the Helm release will be combined from at least three sources:
• Atmos generated varfile (values file)
• Helmfile environments and value files
• Helm Chart values file
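To make the flow concrete, here is a minimal sketch of a Helmfile template that consumes values coming in through the Atmos-generated state values file (the release, chart, and key names are hypothetical, and this assumes the file is rendered as a Helmfile template, e.g. helmfile.yaml.gotmpl in newer Helmfile versions):

```yaml
releases:
  - name: my-app                                  # hypothetical release
    namespace: {{ .Values.namespace | default "default" }}
    chart: my-repo/my-app                         # hypothetical chart
    values:
      # .Values here are Helmfile state values, i.e. whatever Atmos passes
      # via --state-values-file, merged with any Helmfile environment values
      - replicaCount: {{ .Values.replica_count | default 1 }}
        image:
          tag: {{ .Values.image_tag | default "latest" }}
```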
@cricketsc
Hmm okay, that’s making sense. Thanks! I think the var vs. values terminology threw me off a bit.
Hey folks. Is there an appetite for this feature? https://github.com/helmfile/helmfile/discussions/178
In roboll/helmfile#591, there was discussion of allowing sub-helmfile releases to be installed concurrently. @mumoshu proposed the following design:
Given all that, I guess our best bet would be to give up controlling the order of files under the directory, and only allow controlling the order and the parallelism of helmfiles: entries, extending the DAG feature (releases[].needs) to sub-helmfiles:
helmfiles:
- name: infra
  path: helmfile.infra.yaml
- name: apps1
  path: helmfile.apps1.yaml
  needs: infra
- name: apps2
  path: helmfile.apps2.yaml
  needs: infra
This results in Helmfile computing a DAG of helmfiles: [infra] <- [apps1, apps2] (apps1 and apps2 depend on infra). As apps1 and apps2 are independent of each other once infra is given, helmfile will automatically install infra first, and then concurrently install apps1 and apps2.
This seems like an excellent idea that could be very helpful if you have a setup involving sub-helmfiles because it allows cleaner organization and separation of concerns, but, once all the releases and values are calculated, you still want the actual installation to be done in parallel for performance.
Obviously, I have no idea how much work this would be to implement, but, the fact that @mumoshu is the one who proposed it leads me to believe it’d be at least vaguely plausible to implement. I figured I’d re-post it here so it didn’t get lost in the repository transition.
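For contrast, the existing release-level DAG feature that this proposal would extend to sub-helmfiles looks roughly like this (release and chart names are hypothetical):

```yaml
releases:
  - name: infra-crds
    chart: my-repo/infra-crds       # hypothetical chart providing CRDs
  - name: app
    chart: my-repo/app              # hypothetical application chart
    needs:
      - infra-crds                  # install infra-crds before app
```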
Hi again, trying to debug an issue: when I run atmos helmfile apply, what seems to be a correct kubeconfig gets generated as desired, but then helm diff fails trying to connect via localhost.
It seems to be home-directory related, actually. How do you specify in atmos.yaml that the kubeconfig info should be located under the home directory? Had trouble with $HOME as well.
Do you want to set the kubeconfig flag in helmfile?
Trying to set the home directory in the atmos.yaml file.
More specifically, it seems like this example can use env vars, but if I try "{{ .Env.HOME }}/" it seems to be taken literally.
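One thing worth trying (a sketch only, not a confirmed fix; whether template expressions like the one above are expanded in atmos.yaml depends on the Atmos version) is to point the kubeconfig setting at an explicit absolute path:

```yaml
components:
  helmfile:
    use_eks: true
    # Absolute path written out explicitly instead of relying on env-var templating
    kubeconfig_path: "/home/myuser/.kube/atmos"   # hypothetical path
```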
2024-08-21
Hello, I had an inquiry about the demo-helmfile example:
• Helmfile: My understanding is that the stack file pulls in the catalog file and then changes it to be of the “real” type. Then I believe that the helmfile gets included via the key/value pair component: nginx. Some of the previous terminology may be off, but I think that’s the general idea.
My inquiry is: are the vars of the catalog entry supposed to be injected into the helmfile’s nginx release? How does the mapping to the nginx release work, and how are the vars picked up? Does it use the state-values-file? I noticed this empty values section in the helmfile. Is that related?
Best to ask in atmos
@Erik Osterman (Cloud Posse) done, thanks!