#helmfile (2020-05)

https://github.com/helmfile/helmfile

Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles

Archive: https://archive.sweetops.com/helmfile/

2020-05-01

adefemi171 avatar
adefemi171

Hi all, I'm trying to deploy my values into a Jenkins environment using helm install jenkins stable/jenkins --values helm/jenkins-values.yaml, but I keep getting a deprecation error. Any help would be highly appreciated

s_slack avatar
s_slack

What is the error? I just deployed this successfully

2020-05-04

Ufou avatar

Hi, is it possible for a helmfile environment to have multiple values files that get merged? e.g.:

 environments:
   default:
     secrets:
       - ../environments/default/secrets.yaml
     values:
       - ../environments/common.yaml
       - ../environments/default/values.yaml
Ufou avatar

so in that example a value specified in environments/common.yaml would be overridden by a value set in environments/default/values.yaml ?

Mashail Almuzaini avatar
Mashail Almuzaini

yes
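
For reference, helmfile deep-merges environment values files in the order listed, so later files win. A minimal sketch (file names from the example above; the keys are hypothetical):

# ../environments/common.yaml
replicas: 1
logLevel: info

# ../environments/default/values.yaml
logLevel: debug

# Rendered result for the default environment:
#   replicas: 1
#   logLevel: debug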

2020-05-08

Milosb avatar

Hi all, I installed some resources using helm3 in the default namespace. What would be the proper way to move everything into another namespace without disruption?

Zachary Loeber avatar
Zachary Loeber

define ‘disruption’? As far as I know there is no way to move workloads across namespaces without recreating them. As such, the way to ensure both internal and external services remain available would be a blue/green deployment using some external load balancer/service (external to the helmfile deployment, that is).

Zachary Loeber avatar
Zachary Loeber

But what does it matter the namespace? did you deploy multiple workloads into one namespace and need to split them out now?

Milosb avatar

I deployed something in the default namespace which I don't like

Milosb avatar

I will probably end up creating a new release in another namespace and deleting the old one

Zachary Loeber avatar
Zachary Loeber

oh, whoops. Well, I can't tell you how to achieve zero disruption, but if you used helmfile, switching namespaces should not be a huge undertaking at least.

Zachary Loeber avatar
Zachary Loeber

helm3 means that the deployment info is held per namespace

Zachary Loeber avatar
Zachary Loeber

so you can simply target the new namespace to have both deployments running simultaneously

Milosb avatar

and it's not for an application, it's for a controller which handles other objects, so it can be tricky

Zachary Loeber avatar
Zachary Loeber

then remove the old one.

Zachary Loeber avatar
Zachary Loeber

ah, it can never be straightforward can it?

Milosb avatar

but at least it's in a lower environment, so I can give it a shot

Milosb avatar

well, all would be fine if I had created it in a non-default namespace at the beginning, but now I'll have valuable experience with the outcome of two parallel releases in separate namespaces

Milosb avatar

thanks for the advice, helpful as always

Zachary Loeber avatar
Zachary Loeber

lol, it is only a failure if nothing is learned

Zachary Loeber avatar
Zachary Loeber

sorry I have nothing better off the top of my head

Milosb avatar

don't be, you helped me, and I really appreciate it

Andrew Nazarov avatar
Andrew Nazarov

With disruption, via helmfile it could be done like:

- installed: false for the old release (beware of possible data loss; make dumps and such)
- fix the release definition, set installed back to true and apply
- deal with data migration (restore dumps or reuse volumes if possible)

Or:

- create a definition for the new release
- migrate the data
- installed: false for the old one

That's the only way that comes to my mind for doing this with helmfile. I think I've seen an issue related to the same problem
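
A minimal sketch of that flow, assuming helmfile's documented installed: flag (release, chart and namespace names are hypothetical):

releases:
  # new release in the target namespace
  - name: my-controller
    namespace: platform
    chart: stable/my-controller
    installed: true
  # old release left in the default namespace; with installed: false,
  # the next helmfile apply deletes it
  - name: my-controller-old
    namespace: default
    chart: stable/my-controller
    installed: false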

Andrew Nazarov avatar
Andrew Nazarov
changing namespace of a deployment has no effect · Issue #1058 · roboll/helmfile

Editing the namespace field in the helmfile does not redeploy the chart to the new namespace. e.g changing this: - name: kubernetes-dashboard namespace: kube-system chart: stable/kubernetes-dashboa…

btai avatar

could you deploy a new release into the different namespace and point traffic to that service instead? The service endpoint would be different because of the namespace. If it's ingress, I assume you could technically have 2 ingresses in different namespaces w/ the same endpoint and remove the old one once the new one is up?

mumoshu avatar
mumoshu

Yes I believe @Andrew Nazarov and @btai have summarized the state very well! Thanks.

So it highly depends on the type of your service and how you’re exposing it to other in-cluster and inter-cluster services, via ClusterIP service or service loadbalancers or ingresses.

For example, for in-cluster services, you’d need to add a new release adjacent to your release in the old namespace. Switch the dependent apps to use the new service. Remove the old release by setting installed: false and running helmfile apply.

Milosb avatar

Thanks all, I was able to switch the release/objects to the new namespace. Objects (deployment/pod) that I installed in the new namespace took precedence over the old ones. I realized the serviceaccount didn't have permission on some AWS resources, since it showed a permission error. Once I fixed the permission issues in the IAM roles everything started to work, and I safely removed the old release/objects from the default namespace


2020-05-11

Rameez Iqbal avatar
Rameez Iqbal

Hi guys. This is probably more related to helm-secrets than helmfile itself, but I'm wondering if I'm missing a flag or something when using helmfile. I have a SOPS-encrypted secret, e.g. bitbucket.key, which does get decrypted by helmfile into bitbucket.key.dec, and the original file gets deleted. But the problem is that helmfile still tries to load the original bitbucket.key, which obviously doesn't exist.

failed to read jenkins.yaml: environment values file matching "../secrets/bitbucket.key" does not exist in "."

It should load bitbucket.key.dec, or decrypt it to bitbucket.key in the first place. Does anybody know what I'm doing wrong here? Thanks in advance for your help.

Vincent Behar avatar
Vincent Behar

you should name your original file with the .yaml extension
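
In other words, a sketch (assuming the secret is re-encrypted under a .yaml name; helm-secrets/helmfile only treat YAML files as environment values):

environments:
  default:
    secrets:
      - ../secrets/bitbucket.yaml   # was bitbucket.key; decrypts to bitbucket.yaml.dec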

Vincent Behar avatar
Vincent Behar
roboll/helmfile


Rameez Iqbal avatar
Rameez Iqbal

Thanks @Vincent Behar, will give it a try now

Rameez Iqbal avatar
Rameez Iqbal

It worked. Thanks a lot

1
Paul Catinean avatar
Paul Catinean

Hi guys, helmfile diff shows a change in the deployment, but helmfile sync does not re-create the pods. Why is that?

Paul Catinean avatar
Paul Catinean

Oh, my bad, this is more of a helm question

bradym avatar

Can anyone help me understand the difference between values and valuesTemplate? The only place I see valuesTemplate mentioned in the docs is https://github.com/roboll/helmfile/blob/master/docs/writing-helmfile.md - but it's still not clear to me how they differ. I tried reading through issue 428 as mentioned in the doc, but unfortunately it didn't clear anything up for me.


2020-05-12

Ufou avatar

Hi! Does anyone know how to get past this error when running helmfile against GKE (helm2/tiller installed with TLS enabled):

  Error: transport is closing
Ufou avatar

I had not enabled TLS in the helmfile

Ufou avatar

a better question is, how can I enable helm tls for a specific helmfile environment?

Ufou avatar
helmfile -i apply --args helmDefaults.tls=true
Ufou avatar

does not work

Ufou avatar

ok this works:

helmfile -i apply --args --tls
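
For a declarative per-environment setup, helmDefaults also carries TLS settings for helm 2 (a sketch; I believe these fields exist, but verify against the helmfile docs for your version):

helmDefaults:
  tls: true
  tlsCACert: ca.pem       # hypothetical cert paths
  tlsCert: cert.pem
  tlsKey: key.pem
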
bradym avatar

I've got some values that I need to set for all of my releases based on values from the AWS SSM Parameter Store. Anyone know how to make something like this work?

templates:
  default: &default
    valuesTemplate:
      secret: secretref+awsssm://{{ .Values.repo }}/{{ .Environment.Name }}/secret?region=us-west-1

releases:

  - name: app-{{ .Values.branchSlug }}
    version: 1.0
    values:
      - repo: app
    <<: *default
Andrew Nazarov avatar
Andrew Nazarov

What if you try

{{`{{ .Values.repo }}`}}

?

bradym avatar

I had the same thought, but I still get executing "stringTemplate" at <.Values.repo>: map has no entry for key "repo"

Andrew Nazarov avatar
Andrew Nazarov

Oh, my bad, I wasn't careful enough at first. I might be wrong, but I think it won't work at all, as it's impossible to use .Values to reference anything from the values: of the release.

bradym avatar

I was afraid that might be the case.

bradym avatar

It makes sense why it wouldn’t work, but it sure would be nice.

Andrew Nazarov avatar
Andrew Nazarov

So basically this .Values reference is a reference to so-called "environment values", not to the values of a release (which are chart values) :)
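
A sketch of the distinction (names are hypothetical): .Values in helmfile templates reads environment values, while the values: list under a release feeds the chart:

environments:
  default:
    values:
      - repo: app        # environment value: readable as {{ .Values.repo }}

releases:
  - name: app
    chart: ./charts/app
    values:
      - replicas: 2      # chart value: passed to helm, not visible to {{ .Values }}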

bradym avatar

That's helpful, thanks. I know you can use the release name in the templates section, for example:

secret: secretref+awsssm://{{`{{ .Release.Name }}`}}/{{ .Environment.Name }}/secret?region=us-west-1

I wonder if I could use that or another value from .Release to do what I’m attempting. I’ll try it out.

bradym avatar

I’m still frequently confused by where things come from and where they’re available in helmfiles.

bradym avatar

As is, I'm getting: error during apps.yaml.part.0 parsing: template: stringTemplate:28:48: executing "stringTemplate" at <.Values.repo>: map has no entry for key "repo"

2020-05-13

Paul Catinean avatar
Paul Catinean

Does anyone know how one can execute a command inside the deployed pod of a specific release?

Zachary Loeber avatar
Zachary Loeber

That is not something you would do as part of a helm release, unless it were passed in as part of the starting arguments for a container.

Zachary Loeber avatar
Zachary Loeber

you can use init containers to run initialization commands against a shared volume. Otherwise, using pre-sync hooks you can spin up containers to run commands (https://github.com/roboll/helmfile/issues/538)

[doc] Recommended way of handling non conventional install such as cert-manager · Issue #538 · roboll/helmfile

HI, When reading the process to install the cert-manager chart (https://hub.helm.sh/charts/jetstack/cert-manager), you can see two steps before installing the chart: installing some CRD prior to th…
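
A sketch of the hook approach (events and showlogs are from helmfile's hook docs; the command itself is hypothetical):

releases:
  - name: myapp
    chart: ./charts/myapp
    hooks:
      - events: ["presync"]
        showlogs: true
        command: "kubectl"
        args: ["run", "myapp-init", "--image=busybox", "--restart=Never", "--", "sh", "-c", "echo init"]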

Paul Catinean avatar
Paul Catinean

@Zachary Loeber thanks for the reply. This would be part of a CI/CD pipeline, triggering a server update after deployment. But this job can fail and might need to be re-triggered. I'm not sure an initContainer would be best here, especially in cases where you might want to run the update command optionally

bradym avatar

What is it you’re trying to do? It is possible to run a command inside a specific existing pod, but I don’t recommend it. Using a job (https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) is usually a better option.

Zachary Loeber avatar
Zachary Loeber

I’ve done something similar in the past for one-shot operations via a single pod (for the most part the only reason you’d ever want to run just a single pod). The helmfile looked something like this:

Zachary Loeber avatar
Zachary Loeber

- name: kafka-init
  chart: incubator/raw
  namespace: database
  values:
    - resources:
        - kind: Pod
          apiVersion: v1
          metadata:
            name: kafka-init
          spec:
            restartPolicy: Never
            containers:
              - name: kafka-init
                image: {{ requiredEnv "CONTAINERREPOSITORY" }}/{{ requiredEnv "STACK_KAFKA_INIT_IMAGE" }}:{{ requiredEnv "STACK_KAFKA_INIT_IMAGE_TAG" }}
                # command:
                #   - /usr/bin/init_connectors.sh
                imagePullPolicy: Always
                env:
                  - name: 'JDBCPASSWORD'
                    value: '{{ env "JDBCPASSWORD" | default "secretjdbcpassword@azurekeyvault" }}'
                  - name: 'JDBCURL'
                    value: '{{ env "JDBCURL" | default "secretjdbcurl@azurekeyvault" }}'
                  - name: 'STORAGEACCOUNTNAME'
                    value: '{{ env "STORAGEACCOUNTNAME" }}'
                  - name: 'STORAGEACCOUNTKEY'
                    value: '{{ env "STORAGEACCOUNTKEY" | default "secretstorageaccountkey@azurekeyvault" }}'
                  - name: 'JDBCDATABASE'
                    value: '{{ env "JDBCDATABASE" }}'
                  - name: 'JDBCUSER'
                    value: '{{ env "JDBCUSER" }}'
                  - name: 'JDBCSERVER'
                    value: '{{ env "JDBCSERVER" }}'
                  - name: 'JDBCSCHEMA'
                    value: '{{ env "JDBCSCHEMA" }}'
                  - name: SCHEMAREGISTRYHOST
                    value: '{{ env "STACK_KAFKA_SCHEMA_REGISTRY" }}'
                  - name: 'KAFKACONNECTHOST'
                    value: 'confluent-kafka-cp-kafka-connect.database.svc'
                  - name: 'ZOOKEEPERHOST'
                    value: 'STACK_ZOOKEEPER_HOST'
                  - name: 'CONNECT_PLUGIN_PATH'
                    value: '/usr/share/java'
                  - name: 'CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE'
                    value: 'false'
                  - name: 'CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE'
                    value: 'false'
                  - name: 'CONNECT_INTERNAL_VALUE_CONVERTER'
                    value: 'org.apache.kafka.connect.json.JsonConverter'
                  - name: 'CONNECT_INTERNAL_KEY_CONVERTER'
                    value: 'org.apache.kafka.connect.storage.StringConverter'
                  - name: 'KAFKA_BOOTSTRAP_SERVERS'
                    value: '{{ env "STACK_KAFKA_BOOTSTRAP_SERVERS" }}'
                  - name: 'KAFKA_BROKERS'
                    value: '{{ env "STACK_KAFKA_DEFAULT_REPLICA_COUNT" | default "3" }}'

Zachary Loeber avatar
Zachary Loeber

That is overly complex, and you can see where I later had the image itself run a default command (hence commenting out the command)

Zachary Loeber avatar
Zachary Loeber

but it was for initializing a kafka instance after the fact in a CI/CD pipeline

Paul Catinean avatar
Paul Catinean

Interesting

Paul Catinean avatar
Paul Catinean

Well for my case it’s pretty straightforward I think

Paul Catinean avatar
Paul Catinean

I have a helmfile which updates 2-3 releases or so at once when running helmfile sync

Zachary Loeber avatar
Zachary Loeber

yeah, same deal though. you could use a raw chart to run a pod

Paul Catinean avatar
Paul Catinean

On one specific release of the 3 after it has properly rolled out

Zachary Loeber avatar
Zachary Loeber

but also, if the commands are simple enough, you could simply use kubectl run

Paul Catinean avatar
Paul Catinean

It should exec -it update-modules in any of the replica pods

Paul Catinean avatar
Paul Catinean

yeah, it's just a single command, but it should be run on that specific deployment after it has rolled out properly

Paul Catinean avatar
Paul Catinean

and I don’t want to maintain separate variables in my gitlab-ci where I hardcode the deployment names or so

Zachary Loeber avatar
Zachary Loeber

gotcha, maybe join the office hours chat happening right now and ask if anyone else has better ideas

Paul Catinean avatar
Paul Catinean

ah nice

Paul Catinean avatar
Paul Catinean

I can just bust in and ask questions?

Zachary Loeber avatar
Zachary Loeber

yup, Erik will ask multiple times usually

2020-05-14

Ufou avatar

Do recent versions of helmfile still support helm 2?

Zachary Loeber avatar
Zachary Loeber

I believe so but I’d start migrating to 3

Andrea Maruccia avatar
Andrea Maruccia

hello, I'm having some issues with helmfile; I can't figure out if I'm doing something wrong or if it's intended. I have this helmfile, which calls another helmfile. You can see that I'm trying to override 1 value.

helmfiles:
  - path: base-opt-in/kube-janitor.yaml
    values:
      - kubejanitor:
          dryRun: false

This is the other helmfile, where I have all my default values (that I do not want to repeat):

repositories:
  - name: hjacobs
    url: <https://raw.githubusercontent.com/hjacobs/kube-janitor/master/unsupported/helm>

releases:
  - name: kube-janitor
    chart: hjacobs/kube-janitor
    namespace: kube-system
    values:
      - image:
          repository: hjacobs/kube-janitor
          tag: '19.12.0'
          pullPolicy: IfNotPresent
        kubejanitor:
          dryRun: true
          debug: true
          once: true
Andrea Maruccia avatar
Andrea Maruccia

Problem is, though, that when I template this, the dryRun override isn't applied

am I doing something wrong?

Andrea Maruccia avatar
Andrea Maruccia

opened an issue in case it's a better way to communicate: https://github.com/roboll/helmfile/issues/1262

Values do not merge like expected · Issue #1262 · roboll/helmfile

I&#39;m having some issue with helmfile, i can&#39;t figure out if I&#39;m doing something wrong or if it&#39;s intended: I have this helmfile which will call another helmfile. You can see that I&#…

Andrea Maruccia avatar
Andrea Maruccia

we solved this issue! thx for the support

2020-05-15

hari avatar

Hi Team, I have a requirement to combine 3 files into a configmap, like this, and it works: {{- range list “dev.properties” “properties.conf” “properties.json” }}. However, I want to pass these names from the values file, and I can't get the syntax right. Can someone help? {{- range list .Values.propertiesFileEnv, .Values.propertiesFileCommon, .Values.propertiesFileJson }} <<< there is a syntax error here… “,” is not supported, but what should the separator be?
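
For what it's worth, arguments to Go template functions such as list are space-separated, not comma-separated, so the call would presumably be:

{{- range list .Values.propertiesFileEnv .Values.propertiesFileCommon .Values.propertiesFileJson }}
  ...
{{- end }}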

jason800 avatar
jason800

Hey all, can helmfile set a kubernetes auth context per release or only per helmfile? I have a bunch of releases I want to go out in parallel, but they are across different clusters

Andrew Nazarov avatar
Andrew Nazarov

It should be possible to do it per release. But I think I've recently seen an issue saying this might not work

Andrew Nazarov avatar
Andrew Nazarov
kubeContext override is broken · Issue #1244 · roboll/helmfile

Hello, I use kubeContext override per release, and it&#39;s broken starting from v.0.106.0. Simple helmfile.yaml is following: helmDefaults: kubeContext: default releases: - name: cert-manager name…

jason800 avatar
jason800

Thank you!!

jason800 avatar
jason800

Looks like I'm able to do it just fine; I just can't set it on both the helmfile and the release

Marjan Jordanovski avatar
Marjan Jordanovski

Hello all, I have a question: how would you propose pulling/referencing a helm chart from Azure Container Registry in helmfile? If, for example, I have an ACR with a login server named acrX.azurecr.io, and inside of it there is a repository named repoX/chartX with its versions, how would I need to update my helmfile (or possibly some other files) to pull/reference that chart?

repositories:
  - name: ?
    url: ?
    username: ?
    password: ?
    . . .
Andrew Nazarov avatar
Andrew Nazarov

I've seen that they have this /helm/v1 appended to the URL, but I don't know the real thing actually

Andrew Nazarov avatar
Andrew Nazarov

Like

<https://acrX.azurecr.io/helm/v1/repo>
Andrew Nazarov avatar
Andrew Nazarov
roboll/helmfile

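
Putting that together, a sketch of the repositories entry (the URL shape and credential names are assumptions; requiredEnv is a helmfile template function):

repositories:
  - name: acrX
    url: https://acrX.azurecr.io/helm/v1/repo
    username: {{ requiredEnv "ACR_USERNAME" }}
    password: {{ requiredEnv "ACR_PASSWORD" }}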

Zachary Loeber avatar
Zachary Loeber

I’ll only add that the authentication token for ACR is abysmally short-lived

Zachary Loeber avatar
Zachary Loeber

if you're having issues running locally, ensure you auth to the ACR just before syncing your charts

Marjan Jordanovski avatar
Marjan Jordanovski

thanks for your answers, but this is happening in gitlab ci, and lint job is failing since

<https://acrX.azurecr.io/helm/v1/repo>

is not reachable publicly

Zachary Loeber avatar
Zachary Loeber

The same principle applies: the runner needs to be authenticated to the ACR prior to linting

Zachary Loeber avatar
Zachary Loeber

this is true of helm in general, not just helmfile

2020-05-17

deftunix avatar
deftunix

Hi all, is anyone using helmfile and kustomize together?

Paul Catinean avatar
Paul Catinean

Hi guys, I suppose I am doing something wrong if I use one helmfile for staging and production and separate them by environment?

Paul Catinean avatar
Paul Catinean

Because if I specify the environment, the releases not belonging to it are deleted…

jason800 avatar
jason800

Kind of going through this myself. Currently I have them separated, which obviously makes the whole environment thing pointless. Best I can tell, the other option is to add conditionals to every release saying “if env then…”

jason800 avatar
jason800

It would be much more desirable if you could set those sorts of conditionals at the helmfile import level

Paul Catinean avatar
Paul Catinean

Yes I ended up using conditionals in the end

jason800 avatar
jason800

I think I’ll probably do the same in the end

Paul Catinean avatar
Paul Catinean

Would you happen to know if one can set values: in each specific environment and have the release inherit it?

jason800 avatar
jason800

It can be confusing. Only values set at the release level apply to the actual helm deploy

jason800 avatar
jason800

All the other values, set everywhere else, only apply to helmfile itself

Paul Catinean avatar
Paul Catinean

My issue now (after just using conditionals) would be to use the prod.yaml or staging.yaml depending on the environment

jason800 avatar
jason800

What you can do is set values in the environment for helmfile to pick up on, and then dynamically load values files into the helm chart based on those values from helmfile

Paul Catinean avatar
Paul Catinean

I assume

production:
  values:
    - x.yaml

is not how it works?

Paul Catinean avatar
Paul Catinean

hmm not sure I got that

Paul Catinean avatar
Paul Catinean
Paul Catinean avatar
Paul Catinean

How would I use the respective environment values in the release?

jason800 avatar
jason800

So in prod.yaml you might have security: production, and then in the release you might have:

values:
  - vars/security/{{ .Environment.Values.security }}-security.yaml
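
Spelled out a little more, a sketch (file names and the security key are hypothetical):

environments:
  production:
    values:
      - environments/prod.yaml      # contains: security: production
  staging:
    values:
      - environments/staging.yaml   # contains: security: staging

releases:
  - name: myapp
    chart: ./charts/myapp
    values:
      - vars/security/{{ .Environment.Values.security }}-security.yaml
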
jason800 avatar
jason800

Forgive typos and pseudo code. Typing from my phone

Paul Catinean avatar
Paul Catinean

Hmm, I just want to point the release to the respective .yaml file, that's all

jason800 avatar
jason800

I believe I just demonstrated how one could do that in the above example?

Paul Catinean avatar
Paul Catinean

ahhh, so you set an arbitrary variable and just use it in the values on the release

Paul Catinean avatar
Paul Catinean

my bad I thought that values: in environment == values in release

Paul Catinean avatar
Paul Catinean

Got confused

Paul Catinean avatar
Paul Catinean

Thanks for the explanation

jason800 avatar
jason800

Yeah that’s what I was trying to explain, albeit poorly, above

Paul Catinean avatar
Paul Catinean

I think the naming here is a bit confusing in general

Paul Catinean avatar
Paul Catinean

I just make a variable valuesFile

Paul Catinean avatar
Paul Catinean

And I don’t need this anymore

Paul Catinean avatar
Paul Catinean

using the same logic

jason800 avatar
jason800

Exactly. And I totally agree, it's confusing. I think they should be named helmfile_values and helm_release_values, or something to that effect

Paul Catinean avatar
Paul Catinean

Yeah, or anything to distinguish them, especially since the example for environment is still loading an external yaml file

jason800 avatar
jason800

I get the fun of having a multi-regional environment, still not sure how I’m going to work regions into this whole thing

Paul Catinean avatar
Paul Catinean

I'm just learning it; before this I had my own python script to do the same thing, and there was a lot of redundancy I think

Paul Catinean avatar
Paul Catinean

Also I saw this note

Paul Catinean avatar
Paul Catinean

Prior to this pull request, environment values were made available through the {{ .Environment.Values.foo }} syntax. This is still working but is deprecated and the new {{ .Values.foo }} syntax should be used instead.

feat: state values by mumoshu · Pull Request #647 · roboll/helmfile

This adds values to state files as proposed in #640. values: - key1: val1 - defaults.yaml environments: default: - values: - environments/default.yaml production: - values: - envir…

Paul Catinean avatar
Paul Catinean

neither form is working for me…

Paul Catinean avatar
Paul Catinean

I don’t get it

Paul Catinean avatar
Paul Catinean

It's just too confusing for me

Paul Catinean avatar
Paul Catinean
Helmfile - How to deal with different versions in different environment?

I’ve been reading the docs at https://github.com/roboll/helmfile. But as the title indicates I’m looking for a way to deal with different versions…

Paul Catinean avatar
Paul Catinean

I tried every possible combination on earth to get a value from environments: x:

Paul Catinean avatar
Paul Catinean

But nothing works, this is terribly confusing

Anirudh Srinivasan avatar
Anirudh Srinivasan

hello everyone, I just got introduced to helmfile and it's turning out to be a great tool for our use case. I appreciate you all working on this and making it better day by day.

I got this working with a bunch of addons, but I want to have a time lag between 2 of my addons, i.e. I want to introduce some kind of a delay

2020-05-18

Michael Seiwald avatar
Michael Seiwald

Hi, I was wondering if it's possible to modify a value from a hook. Use case: getting data from an existing secret with kubectl and passing it to another release as a value. Or is there another way to achieve that?

Paul Catinean avatar
Paul Catinean

@Michael Seiwald I'm trying to achieve something similar. I'm building a pipeline where I deploy the application and then need to run a command inside a running pod as many times as needed until I get a good exit code

Paul Catinean avatar
Paul Catinean

still fiddling with it

Anirudh Srinivasan avatar
Anirudh Srinivasan
# Ordered list of releases.
helmfiles:
  - "releases/coredns.yaml"
  - "releases/external-dns.yaml"
  - "releases/ingress.yaml"
  - "releases/certmgr.yaml"
  - "releases/certissuer.yaml"
  - "releases/dashboard.yaml"

I have the above charts to be installed in multiple clusters. To achieve this, I organized my directory to use an environment per cluster:

environments:
  cluster1:
    values:
      - ../cluster1.yaml

So I can do helmfile -e cluster1 sync. But this does not seem to be working that way. Any suggestions?

Anirudh Srinivasan avatar
Anirudh Srinivasan

Getting the following error:

err: no releases found that matches specified selector() and environment(cluster1), in any helmfile
jason800 avatar
jason800

there is currently a bug/feature where you have to define environments: all the way down the pipe

jason800 avatar
jason800

are you doing that?

Anirudh Srinivasan avatar
Anirudh Srinivasan

not sure if i understand what you are saying ?

jason800 avatar
jason800

quite literally, you have to put your environments: block in every helmfile

Anirudh Srinivasan avatar
Anirudh Srinivasan

oh gotcha. let me try that

jason800 avatar
jason800

only the bottom level directories seem to be required in order to actually have the full fleshed out definition

jason800 avatar
jason800

but every file along the way needs at least

environments:
  prod:
jason800 avatar
jason800

that level of definition

Zachary Loeber avatar
Zachary Loeber

or just define it once in yaml and include it in your bases: block.
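
The idea being something like this sketch (note the --- separators mentioned just below, which matter because of the double-render):

# bases/environments.yaml
environments:
  prod:
  staging:

# releases/external-dns.yaml
---
bases:
  - ../bases/environments.yaml
---
releases:
  - name: external-dns
    chart: stable/external-dns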

jason800 avatar
jason800

I tried that, it did not work

Zachary Loeber avatar
Zachary Loeber

bah

Zachary Loeber avatar
Zachary Loeber

copy/paste idiocy

Zachary Loeber avatar
Zachary Loeber

here, a partially working example

Zachary Loeber avatar
Zachary Loeber
zloeber/CICDHelper

Fairly large set of scripts for crafting and working with devops tools - zloeber/CICDHelper

jason800 avatar
jason800

I think that works because you're doing the bases in the same file as the release

Zachary Loeber avatar
Zachary Loeber

I think the block needs the --- dividers around it

jason800 avatar
jason800

the bug/issue is with helmfile includes

Zachary Loeber avatar
Zachary Loeber

right, and in the umbrella helmfile as well

Zachary Loeber avatar
Zachary Loeber
zloeber/CICDHelper


jason800 avatar
jason800

unrelated: I am really enjoying your examples!

Zachary Loeber avatar
Zachary Loeber

I wouldn't hold up my crappy repo as the best example, but I know that it does work

jason800 avatar
jason800

it’s so hard to find people showing their examples

Zachary Loeber avatar
Zachary Loeber

ha, I’m working on my own kubernetes craftsman framework using helmfiles to vet out solutions quickly (for better or worse)

Zachary Loeber avatar
Zachary Loeber

I'm somewhat embarrassed to be using so many makefiles, but most of the stuff we do is just scripts anyway, and Makefiles are generally easy to rip apart for other purposes. But thanks, good sir, glad it is helping a little

Zachary Loeber avatar
Zachary Loeber

I'm still struggling to fully wrap my head around the best way to use helmfile so things are DRY and multipurpose across various deployments

jason800 avatar
jason800

yea I’ve gone through about 3 iterations on my repo so far

jason800 avatar
jason800

I wanted to keep the command line simple, so I setup some simple go templating to do everything by env

jason800 avatar
jason800

so my top level or 2nd top level, i forget, does conditional includes based on EnvName

Zachary Loeber avatar
Zachary Loeber

you have one public to share so I can rip off your ideas?

jason800 avatar
jason800

I can put up what I have somewhere

Zachary Loeber avatar
Zachary Loeber

you see all the helmfiles/wip/*.yaml files? I actually deployed them all at one point, all using env vars and a huge env file

Zachary Loeber avatar
Zachary Loeber

it felt…. wrong

jason800 avatar
jason800

I know I can do better and generalize more, I just have to learn more first. This is after 2-3 passes.

Zachary Loeber avatar
Zachary Loeber

seems like you are on the right path though

jason800 avatar
jason800

one YAML line at a time

Zachary Loeber avatar
Zachary Loeber

I dig the use of the preprod_common.yaml insertion for the default release block elements.

Zachary Loeber avatar
Zachary Loeber

I’m going to do that too I think

Zachary Loeber avatar
Zachary Loeber

(if im reading it right)

Zachary Loeber avatar
Zachary Loeber

kubeContext is probably a wise idea as well I suppose

Zachary Loeber avatar
Zachary Loeber

gee, thanks for the huge insertion of self doubt on a Monday….

jason800 avatar
jason800

rofl

jason800 avatar
jason800

I just call that Monday

Zachary Loeber avatar
Zachary Loeber

for real though, thanks for sharing. It is helpful

jason800 avatar
jason800

the kube context is invaluable for deploying multi-region multi-cluster

jason800 avatar
jason800

but thats where my pre_hook_bootstrap.sh comes in

jason800 avatar
jason800

it runs terraform with a -target to generate kubeconfigs for the terraform-created cloud-managed K8s clusters

Zachary Loeber avatar
Zachary Loeber

I keep wanting to force the thing to use its own config file

jason800 avatar
jason800

I’m going to move that logic for kube contexts completely out of helmfile

jason800 avatar
jason800

and just have helmfile expect that these things exist

jason800 avatar
jason800

Sorry, to be specific: the logic for generating them dynamically so they're available in CI/CD is what I'm going to move out of helmfile (which is just running a bash script in a presync hook)

jason800 avatar
jason800

the kubecontext parameters will be staying

Zachary Loeber avatar
Zachary Loeber

clever use of the presync hook

jason800 avatar
jason800

it makes the deployment really slow. As a band-aid, I separated what would have been a parallel deployment of 5 clusters into a bootstrap of 1 arbitrary cluster, with the other 4 going after

jason800 avatar
jason800

but the slowness is due to terraform, not helmfile

Zachary Loeber avatar
Zachary Loeber

that’s terraform for ya

2020-05-19

Paul Catinean avatar
Paul Catinean

Does anyone have a suggestion on how to execute a command in a specific release's pod? After the deployment I'd like to have a step in my CI pipeline that runs a module update, which can fail. It should retry 4-5 times before calling it quits, but for that I need to execute it in another step after deployment, several times. Any suggestions?

jason800 avatar
jason800

it's unclear to me specifically what you're trying to do, but couldn't you just run a script to do the necessary things in a post-sync hook?

Paul Catinean avatar
Paul Catinean

I also thought about that but I have two issues

Paul Catinean avatar
Paul Catinean
  1. I always do a rolling deployment meaning helmfile apply will always re-deploy the release (but let’s say I can turn that off)
Paul Catinean avatar
Paul Catinean

but 2. the stdout from the pod is not displayed

Paul Catinean avatar
Paul Catinean

even with showlogs: true

Paul Catinean avatar
Paul Catinean

so I do not see the output of the update procedure

Paul Catinean avatar
Paul Catinean

Any idea why that is?

jason800 avatar
jason800

in a bash script?

jason800 avatar
jason800

are you setting set -x ?

Paul Catinean avatar
Paul Catinean

bash script indeed

Paul Catinean avatar
Paul Catinean

set -x ?

Paul Catinean avatar
Paul Catinean

The fact that I don’t know what that does is a good indication I might be doing something wrong

jason800 avatar
jason800
#!/bin/bash
set -x
jason800 avatar
jason800

debug mode

jason800 avatar
jason800

although anything going to stdout should be showing up for you

jason800 avatar
jason800

I don't understand why a rolling deployment makes any difference

Paul Catinean avatar
Paul Catinean

I'm not deploying an actual script but running the command directly in the hook, but I assume the same principle applies

Paul Catinean avatar
Paul Catinean

I configured my helm chart to generate a random annotation so the pods are re-created

Paul Catinean avatar
Paul Catinean

But I can remove that, especially for the production instance

Paul Catinean avatar
Paul Catinean

If the update fails, I don't want to re-deploy the pods that many times

jason800 avatar
jason800

Sorry but I’ll need to understand your workflow/use-case better. I’m very confused at what you’re trying to accomplish

Paul Catinean avatar
Paul Catinean

I have a CI pipeline that builds an image, tests the code and deploys the built image to a kubernetes cluster through helmfile. After deployment is successful, an internal command (inside the pod) to execute an upgrade (kind of like a database migration) needs to be started and retried X times if it fails (because of potential db locks)

Paul Catinean avatar
Paul Catinean

Does that sound reasonable?

jason800 avatar
jason800

it sounds like you need an init container or maybe a side-car

jason800 avatar
jason800

but as a more temporary solution you could certainly execute a bash script with for loops and conditionals to do what you need

Paul Catinean avatar
Paul Catinean

an init container would run once; if it fails, then what?

jason800 avatar
jason800

the init container is running a script like anyone else when a pod starts

Paul Catinean avatar
Paul Catinean

Also I want to be able to execute it depending on the environment and also have the logs in my pipeline

jason800 avatar
jason800

it could have some redundancy built in, but you have some bigger questions: why is your DB migration always failing?

jason800 avatar
jason800

helmfile I believe can handle go templating to place hooks depending on environment

jason800 avatar
jason800

using conditional blocks

Paul Catinean avatar
Paul Catinean

It's not a database migration; it's a server upgrade which changes the database schema, but it can hit pg locks if someone is using that table at the time

jason800 avatar
jason800

I don’t mean to tell you how to do your business so please don’t take offense if I’m getting off topic here

jason800 avatar
jason800

but it seems to me like you'd benefit from abstracting the database-schema logic outside of the container init and supplying the result of which schema to use as an env var

Paul Catinean avatar
Paul Catinean

Ah no, it's constructive, and I'm still learning

jason800 avatar
jason800

for example, if you could run the same database check as a pre-hook in the bash script, you could have whatever simple redundancy you want built in: a do/until that never ends until it works

jason800 avatar
jason800

then the container deploys and is guaranteed to run

Paul Catinean avatar
Paul Catinean

it’s a running python server that updates the database on command from local files (that’s the core design) and it itself handles running the instance and subsequently upgrading the database (through a separate headless thread)

Paul Catinean avatar
Paul Catinean

So I need the same configuration and server pod essentially to carry out this operation

Paul Catinean avatar
Paul Catinean

Same env vars, same volumes etc etc

Paul Catinean avatar
Paul Catinean

So if I create a side-car of some sorts or a cron it would have to be an exact duplicate of the running deployment in order for things to work properly

Paul Catinean avatar
Paul Catinean

or a very large portion of it

Paul Catinean avatar
Paul Catinean

also, I should be able to trigger it from the gitlab ci pipeline, see the logs and even retry if needed

Paul Catinean avatar
Paul Catinean

Of course, I could also stop the main server, upgrade, and then start it, and it would only need to be carried out once, but that means downtime

Anirudh Srinivasan avatar
Anirudh Srinivasan
# Ordered list of releases.
helmfiles:
  - "releases/external-dns.yaml"
  - "releases/dashboard.yaml"

in helmfile, is there a way to introduce a time delay between 2 addons, like external-dns and the dashboard? I specify an interval of 5m for external-dns so I don't hit the rate limit (this is in AWS). So between external-dns and the dashboard I want a time delay. Is it possible? Any ideas?

jason800 avatar
jason800

Post sync hook with a pause?
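
Presumably something like this (a sketch; the sleep length is arbitrary):

releases:
  - name: external-dns
    chart: stable/external-dns
    hooks:
      - events: ["postsync"]
        command: "sh"
        args: ["-c", "sleep 120"]   # pause before the next sub-helmfile is applied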

Anirudh Srinivasan avatar
Anirudh Srinivasan

thanks, started looking into that. It's great that helmfile offers options like presync and postsync

Anirudh Srinivasan avatar
Anirudh Srinivasan

So we can run any bash script in these hooks?

Anirudh Srinivasan avatar
Anirudh Srinivasan

or, for that matter, any script, not just bash

jason800 avatar
jason800

I’m modifying some helm values files in a presync hook for a release. The same release loads those variable files as input to the helm chart. It would appear that the modifications are not making it into the helm chart

jason800 avatar
jason800

I'm not sure helmfile was ever actually supposed to work this way, because according to the docs it loads the vars before any hooks

Raymond Liu avatar
Raymond Liu

In what scenario would you modify a helm values file directly? helmfile supports values.yaml.gotmpl and can render it into a helm values file for you.

jason800 avatar
jason800

So, unfortunately, I'm working with a cloud provider that is less than optimal. One of the things they require with their cloud controller on kubernetes is that subnets are specified in the service annotations via their ID (think AWS ARN).

jason800 avatar
jason800

We use automation (terraform) to provision the cloud infrastructure, so although the subnet IDs are pretty unlikely to change, I would still prefer the annotation to be dynamic in that it gathers the fact directly

jason800 avatar
jason800

otherwise I would have to hard-code it as a var and change it any time our infra rebuilds a subnet

jason800 avatar
jason800

@Raymond Liu unless even with that explanation you still think templating could help here? I believe the template would still need to be fed the info on the cloud objects

Raymond Liu avatar
Raymond Liu

Ok, sure, I have done something like what you need. I use exec in values.yaml.gotmpl, for example:

{{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZaName") }}

jason800 avatar
jason800

ohhhh wow

jason800 avatar
jason800

I had not looked into exec at all

jason800 avatar
jason800

@Raymond Liu would you be willing to share with me an example of your values.yaml.gotmpl? sensitive bits removed of course

Raymond Liu avatar
Raymond Liu
{{- $script := "../../../../scripts/get_cf_output.py" }}
resources:
  # ENIConfig: AZa
  - apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZaName") }}
      annotations:
        forceupdate: "k8s-1.16-20200505"
    spec:
      subnet:  {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "EksSecondaryCidrsSubnetIdAZa") }}
      securityGroups:
        - {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "NodeSecurityGroupId") }}

  # ENIConfig: AZb
  - apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata:
      name: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "AZbName") }}
      annotations:
        forceupdate: "k8s-1.16-20200505"
    spec:
      subnet: {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "EksSecondaryCidrsSubnetIdAZb") }}
      securityGroups:
        - {{ exec "python" (list $script "--env" .Environment.Name "--region" .Values.awsRegion "--cf-output" "NodeSecurityGroupId") }}
Raymond Liu avatar
Raymond Liu

I use helm chart incubator/raw to deploy two resources

jason800 avatar
jason800

nice thank you

jason800 avatar
jason800

can I use environment values or other helmfile available values in the exec block ?

jason800 avatar
jason800

oh, i see you doing it

jason800 avatar
jason800

this is great, thank you.

2020-05-20

Michael Seiwald avatar
Michael Seiwald

Is it possible to specify the helm version to use per release with helmfile?

jason800 avatar
jason800

I think you might actually be able to use hooks for this?

jason800 avatar
jason800

you could basically have some directory with a bunch of helm binaries named by version:

helm/
  helm2.x
  helm3.0
  helm3.1
  helm3.2
jason800 avatar
jason800

and then in the prepare hook you could force-overwrite your symlink

jason800 avatar
jason800

ln -sf helm/helm3.2 /usr/local/bin/helm
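
As a helmfile hook, that might look like the following sketch (prepare is a documented hook event; the paths are hypothetical):

releases:
  - name: legacy-app
    chart: stable/legacy-app
    hooks:
      - events: ["prepare"]
        command: "ln"
        args: ["-sf", "/opt/helm/helm2.x", "/usr/local/bin/helm"]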

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(so long as parallelism is disabled; it's on by default)

jason800 avatar
jason800

yea good point

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think there's a helmBinary setting, but I'm not sure if it can be defined per release. If not, setting it per release would be my first thought. I feel like this is supported. (I vaguely recall opening a feature request for it, but I'm not sure)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add support for per release helmBinary · Issue #1180 · roboll/helmfile

Similar to what has been done in #1083, I think it would be useful to allow to specify the helmBinary for each release.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I think our workaround for this was to define each release in its own helmfile .yaml and then define the helm binary in there. That way each one could depend on a different version. Then we include them all together in the main helmfile.yaml
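
i.e. something like this sketch, since helmBinary is a state-level setting each sub-helmfile can pin for itself (paths are hypothetical):

# helmfile.yaml
helmfiles:
  - "releases/legacy.yaml"   # that file sets: helmBinary: /usr/local/bin/helm2
  - "releases/modern.yaml"   # that file sets: helmBinary: /usr/local/bin/helm3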

jason800 avatar
jason800

Do you set anything to make them run in parallel?

jason800 avatar
jason800

or just accept that you have a serial release

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

well, if you do this approach, you need no hooks and no symlinking

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so parallelism should be fine

jason800 avatar
jason800

yea, totally. thats great

jason800 avatar
jason800

I guess I'm confused, though, about how you tell helmfile that "these groups of helmfiles can all execute together instead of sequentially"?

jason800 avatar
jason800

do you use bases to merge them? Kind of wondering: if I have two helmfiles, each defining environment values included by region, and I merge them via bases, will those environment values get applied to each of the regions individually? Or does it merge it all, so each release would get vars from both environments?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/helmfiles


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the point is that each $foo.yaml can define its own helmBinary

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

helmfiles are not merged.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so, in that example, - "releases/external-dns.yaml" would define its own helmBinary inside it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and the same for the rest, and so on

jason800 avatar
jason800

sure, but also in this example every file in that helmfile list is executed sequentially, right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this assumes you have all the helm binaries installed on the system

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, you got me there - i can’t remember if this breaks parallelism or not.

jason800 avatar
jason800

https://github.com/roboll/helmfile/issues/591#issuecomment-492949771 – This is what I was referring to with bases and the merging of helmfiles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, that confirms it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one idea is that you could have a helm2.yaml and a helm3.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then all the releases under helm2 are parallelized, etc.

jason800 avatar
jason800

yea, thats what I’m doing now with my regions

jason800 avatar
jason800

was hoping to leverage bases: to make it all go at once

jason800 avatar
jason800

but I’m worried about what will happen to the region based vars on the merge

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, yea, i can’t picture the mental calculus right now without a reference example to play with

jason800 avatar
jason800

same, I think I’ll just have to find the time to test it

Mathieu Frenette avatar
Mathieu Frenette

Hey guys! :wave: I’m new to helmfile and I was wondering if there’s a way to pass context/arguments to templated values files that we reference from a release, similarly to the way we can pass context/arguments to templates when using include? In my case, I have a plain yaml file listing my releases (./releases.yaml) and I’m using the range operator to generate the release entries, with a reference to an external values file for each release, like so:

releases:
{{ range readFile "./releases.yaml" | fromYaml | get "releases" }}
- name: {{ .name }}
  values:
  - ./values/{{ .name }}.yaml.gotmpl
{{ end }}

What I would like to do is pass the current data item (the current iteration of range) to the external values go template, so that I may dynamically reference its properties from within that template. Is it at all possible?

jason800 avatar
jason800

Hey all, can someone help me understand why I can't use this value being set in my environment? I'm setting a simple key/value and pulling in the file with what seems to be no issue, but if I try to access {{ .Values.region }} I still get an error that it's not defined

Mathieu Frenette avatar
Mathieu Frenette

Not sure about your specific issue, but I do notice that your releases: key has a typo: eleases:

jason800 avatar
jason800

thankfully that was just a copy and paste error lol

Mathieu Frenette avatar
Mathieu Frenette

haha, ok, so that was not really the content of your file then!

jason800 avatar
jason800

just the relevant parts, minus a character

jason800 avatar
jason800

So, it appears if I wrap these values in {{{{ .Values.region }}}} it works

Mathieu Frenette avatar
Mathieu Frenette

So it was interpolating the template too early? What you did implies that there is more than one pass of interpolation?

Mathieu Frenette avatar
Mathieu Frenette

Even if it fixes the error you were getting at that stage, are you sure it's not actually passing the literal {{ .Values.region }} as the value for kubeContext? And that you therefore may hit another issue further down the road?

Mathieu Frenette avatar
Mathieu Frenette

I would really double check the actual value that gets rendered as value for kubeContext just to make sure it’s right.

jason800 avatar
jason800

yea, that's exactly what it was doing

jason800 avatar
jason800

this is mind-boggling; it seems like what should be a very basic and very obvious feature is simply not working

Mathieu Frenette avatar
Mathieu Frenette

So, basically, back to square one!

jason800 avatar
jason800

lol, I think it may be time for square 0. I cannot get even the most basic functionality to work, and I think it's just a sign of the tool's immaturity

jason800 avatar
jason800

{{ exec "pwd" (list "") }} testing this in a helm values file suffixed with .gotmpl for helmfile

jason800 avatar
jason800

Error: Failed to render chart: exit status 1: Error: failed to parse /tmp/values596195416: error converting YAML to JSON: yaml: line 9: could not find expected ':'

Graeme Gillies avatar
Graeme Gillies

I'm trying to figure out if there is a way to get it so that in each helmfile environment I define a kubeContext var, and then under helmDefaults set kubeContext to the value of {{ .Values.kubeContext }}

Graeme Gillies avatar
Graeme Gillies

at the moment I am getting the dreaded

in ./helmfile.yaml: error during ../../bases/helmDefaults.yaml.part.0 parsing: template: stringTemplate:12:25: executing "stringTemplate" at <.Values.kubeContext>: map has no entry for key "kubeContext"
Graeme Gillies avatar
Graeme Gillies

I figure because I have

---
bases:
  - "../../bases/environments.yaml"
  - "../../bases/helmDefaults.yaml"

I am getting hit by the double render problem

Graeme Gillies avatar
Graeme Gillies

if I change it to

kubeContext: "{{`{{ .Values.kubeContext }}`}}"

It comes out as a literal

jason800 avatar
jason800

Yea, I had literally the same issues above yesterday

jason800 avatar
jason800

it seems basically impossible to actually use environment values in helmfile at the moment

jason800 avatar
jason800

fyi it's the directory nesting

jason800 avatar
jason800

i just figured it out for myself

jason800 avatar
jason800

it's because you're pulling in files from another directory. Instead of using environments: to pass those values in, I just did it under each helmfile import

jason800 avatar
jason800
helmfiles:
   - path: helmfiles/preprod_us-phoenix-1.yaml
+    values:
+      - '../../vars/helmfile/realms/preprod.yaml'
+      - '../../vars/helmfile/regions/us-phoenix-1.yaml'

2020-05-21

Paul Catinean avatar
Paul Catinean

I can't seem to understand why running a kubectl exec pod command in a helmfile hook does not print the output to stdout

2020-05-23

muhaha avatar

guys? when can we expect a merge of https://github.com/roboll/helmfile/pull/1172?

feat: GA of Kustomize and K8s manifests support by mumoshu · Pull Request #1172 · roboll/helmfile

This is the GA version of the helm-x integration #673 developed last year. Benefits? You get all the followings without an extra helm plugin: Ability to add ad-hoc chart dependencies/aliases, with…

mumoshu avatar
mumoshu

Someday. I’m overwhelmed by all the user support tasks and reviews these days


mumoshu avatar
mumoshu

If anyone could fork/test/fix my PR, I'm more than happy to review/adopt/merge it

muhaha avatar

btw, it seems that the latest helm-x (helm-x_0.8.0_linux_amd64.tar.gz) does not work, not sure why. As a standalone binary, I am getting:

panic: exec: "": executable file not found in $PATH

goroutine 1 [running]:
github.com/mumoshu/helm-x/pkg/helmx.(*Runner).IsHelm3(0xbe16d90, 0x8)
        /home/circleci/project/pkg/helmx/helm3.go:21 +0x148
main.main()
        /home/circleci/project/main.go:31 +0x5e

same error for the helm plugin. Then I found this PR, which would be superb if implemented as built-in support for patches and kustomize in helmfile

mumoshu avatar
mumoshu

hey - https://github.com/roboll/helmfile/pull/1172/ is merged and available since 0.118.0.

I didn't re-do thorough testing before merging, so it may or may not work. Your testing and feedback would be much appreciated!


yuri avatar

the first issue I can see with strategicMergePatches is that it ignores my chart version and tries to upgrade to latest; when I remove strategicMergePatches, diff works fine

Comparing release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/8296726526650574646/nginx-ingress
default, nginx-ingress, ServiceAccount (v1) has changed:
- # Source: nginx-ingress/templates/serviceaccount.yaml
+ # Source: nginx-ingress/templates/helmx.all.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      app: nginx-ingress
-     chart: nginx-ingress-1.6.17
+     chart: nginx-ingress-1.39.0
      heritage: Tiller
      release: nginx-ingress
    name: nginx-ingress
mumoshu avatar
mumoshu

thx. could you provide me a reproducible example?

yuri avatar
---
environments:
  {{ .Environment.Name }}:
    values:
    - ../common/common.yaml
    - ../clusters/{{ .Environment.Name }}/defaults.yaml
    - ../clusters/{{ .Environment.Name }}/charts.yaml
---
bases:
- ../helmdefaults.yaml
- ../repos.yaml
---
releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    labels:
      app: nginx-ingress
      tier: network
    version: {{ .Values.charts.nginx.chartVersion }}
    installed: {{ .Values.charts.nginx | getOrNil "enabled" | default false }}
    namespace: {{ .Values.charts.nginx | getOrNil "namespace" | default "kube-system" }}
    values:
      - ../clusters/{{ .Environment.Name }}/values/nginx-ingress.yaml
    strategicMergePatches:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-ingress-controller
      spec:
        template:
          spec:
            dnsConfig:
              nameservers:
              - 169.254.20.10
              options:
              - name: attempts
                value: "3"
yuri avatar

do u need all the values or u can try a single release?

mumoshu avatar
mumoshu

Thanks! I can’t get to work on it until tomorrow but I believe we need to add

opts.ChartVersion = release.Version

to

https://github.com/roboll/helmfile/blob/16288dfa7da390bee208f46eaeb69061507c7ca8/pkg/state/helmx.go#L26


yuri avatar

thanks! i appreciate your amazing work and support

mumoshu avatar
mumoshu

The general rule is that we need to propagate everything necessary from ReleaseSpec read from your helmfile.yaml, to chartify options defined here https://github.com/variantdev/chartify/blob/3f73ddcc6682fddd4ad12eb2c5d6e7caa553df87/chartify.go#L16-L45

so mistakes there can be the cause of nasty bugs like you’ve encountered.

variantdev/chartify

Convert K8s manifests/Kustomization into Helm Chart - variantdev/chartify

mumoshu avatar
mumoshu

if you have some time to fix it yourself and you’re willing to do so, it would be useful to consider that rule. jfyi.

yuri avatar

thanks, I definitely need to get familiar with helmfile's code, but it will take time

mumoshu avatar
mumoshu

no worries! In the meantime, if you have some time, I'd appreciate it if you could queue up more potential bugs/issues for me so that I can fix them all tomorrow

yuri avatar

ok, I'll check more charts and options to see if anything breaks

mumoshu avatar
mumoshu

thank you so much for your help

mumoshu avatar
mumoshu
fix: Do not skip passing values files when releases[].adhocDependencies/jsonPatches/jsonPatches exist by mumoshu · Pull Request #1273 · roboll/helmfile

And also fix the bug that resulted in any such release to ignore the chart version number specified in helmfile.yaml. This is a follow-up for #1172

yuri avatar

hey, just tested v0.118.1, but it's still trying to apply the latest chart version

mumoshu avatar
mumoshu

thanks! I think I found another source of the issue. Will fix it soon

yuri avatar

@mumoshu thanks! I'm getting something else now; maybe the patch itself is wrong

I0528 15:07:27.950506   98087 patch.go:136] generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/role.yaml
- helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml
patchesStrategicMerge:
- strategicmergepatches/patch.0.yaml
I0528 15:07:27.950527   98087 patch.go:139] generating /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress/helmx.2.patched.yaml
in /Users/yurilevin/tapingo-github/k8s_cluster_helmfiles/releases/nginx-ingress.yaml: [exit status 1

COMMAND:
  kustomize build /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress --output /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/1559817118983394007/nginx-ingress/helmx.2.patched.yaml

OUTPUT:
  Error: no matches for OriginalId apps_v1_Deployment|~X|nginx-ingress-controller; no matches for CurrentId apps_v1_Deployment|~X|nginx-ingress-controller; failed to find unique target for patch apps_v1_Deployment|nginx-ingress-controller]
mumoshu avatar
mumoshu

@yuri maybe you’re missing metadata.namespace in your patch

yuri avatar

mmm just added it, but same error:

    strategicMergePatches:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-ingress-controller
        namespace: default
      spec:
        template:
          spec:
            dnsConfig:
              nameservers:
              - 169.254.20.10
              options:
              - name: attempts
                value: "3"
mumoshu avatar
mumoshu

if it doesn’t work, would you mind running helmfile template | grep -C 20 nginx-ingress-controller and sharing its result so that i can suggest how you should write the patch

mumoshu avatar
mumoshu

that error message does indicate that some part of the below is incorrect in your patch

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
yuri avatar
I0528 15:14:27.452485   99200 chartify.go:236] using requirements.yaml:
dependencies:
I0528 15:14:32.602784   99200 replace.go:45] options: {false [/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/values189422353] []  1.6.17}
I0528 15:14:32.673409   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/clusterrole.yaml
I0528 15:14:32.673559   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/clusterrolebinding.yaml
I0528 15:14:32.673658   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-configmap.yaml
I0528 15:14:32.673761   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-daemonset.yaml
I0528 15:14:32.673830   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-deployment.yaml
I0528 15:14:32.673889   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-hpa.yaml
I0528 15:14:32.673948   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-metrics-service.yaml
I0528 15:14:32.674013   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-poddisruptionbudget.yaml
I0528 15:14:32.674092   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-service.yaml
I0528 15:14:32.674169   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-servicemonitor.yaml
I0528 15:14:32.674237   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/controller-stats-service.yaml
I0528 15:14:32.674303   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-deployment.yaml
I0528 15:14:32.674362   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-poddisruptionbudget.yaml
I0528 15:14:32.674427   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/default-backend-service.yaml
I0528 15:14:32.674488   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/headers-configmap.yaml
I0528 15:14:32.674568   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/podsecuritypolicy.yaml
I0528 15:14:32.674647   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/role.yaml
I0528 15:14:32.674727   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/rolebinding.yaml
I0528 15:14:32.674820   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/serviceaccount.yaml
I0528 15:14:32.674887   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/tcp-configmap.yaml
I0528 15:14:32.674951   99200 replace.go:77] removing /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/templates/udp-configmap.yaml
I0528 15:14:32.675125   99200 patch.go:37] patching files: [/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/role.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml]
I0528 15:14:32.675675   99200 patch.go:136] generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- helmx.1.rendered/nginx-ingress/templates/serviceaccount.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrole.yaml
- helmx.1.rendered/nginx-ingress/templates/clusterrolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/role.yaml
- helmx.1.rendered/nginx-ingress/templates/rolebinding.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-metrics-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-stats-service.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-service.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/default-backend-deployment.yaml
- helmx.1.rendered/nginx-ingress/templates/controller-hpa.yaml
patchesStrategicMerge:
- strategicmergepatches/patch.0.yaml
I0528 15:14:32.675694   99200 patch.go:139] generating /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.2.patched.yaml
in /Users/yuri/k8s_cluster_helmfiles/releases/nginx-ingress.yaml: [exit status 1

COMMAND:
  kustomize build /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress --output /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/3999714363928625263/nginx-ingress/helmx.2.patched.yaml

OUTPUT:
  Error: no matches for OriginalId apps_v1_Deployment|default|nginx-ingress-controller; no matches for CurrentId apps_v1_Deployment|default|nginx-ingress-controller; failed to find unique target for patch apps_v1_Deployment|nginx-ingress-controller]
mumoshu avatar
mumoshu

ah sorry, but could you remove `strategicMergePatches:` from your helmfile.yaml before running that `helmfile template` command to obtain the result

yuri avatar
Building dependency release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress
No requirements found in /var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress/charts.

Templating release=nginx-ingress, chart=/var/folders/dl/gs8313s51_d7sshj34p74_f80000gn/T/096868974/nginx-ingress/1.6.17/stable/nginx-ingress/nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: default
---
# Source: nginx-ingress/templates/controller-metrics-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller-metrics
spec:
  clusterIP: ""
  ports:
    - name: metrics
      port: 9913
      targetPort: metrics
  selector:
    app: nginx-ingress
    component: "controller"
    release: nginx-ingress
  type: "ClusterIP"

---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "...."
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
--
--
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "..."
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "..."
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  clusterIP: ""
  externalTrafficPolicy: "Local"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress
    component: "controller"
    release: nginx-ingress
  type: "LoadBalancer"

---
# Source: nginx-ingress/templates/controller-stats-service.yaml
--
--
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress
    component: "controller"
    release: nginx-ingress
  type: "LoadBalancer"

---
# Source: nginx-ingress/templates/controller-stats-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller-stats
spec:
  clusterIP: ""
  ports:
    - name: stats
      port: 18080
      targetPort: stats
  selector:
    app: nginx-ingress
    component: "controller"
    release: nginx-ingress
  type: "ClusterIP"

---
# Source: nginx-ingress/templates/default-backend-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
--
--
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress
    component: "default-backend"
    release: nginx-ingress
  type: "ClusterIP"

---
# Source: nginx-ingress/templates/controller-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%

  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: nginx-ingress
    spec:
      dnsPolicy: ClusterFirst
      containers:
--
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%

  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: "controller"
        release: nginx-ingress
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: nginx-ingress-controller
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1"
          imagePullPolicy: "IfNotPresent"
          args:
--
          imagePullPolicy: "IfNotPresent"
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
--
            - --default-backend-service=default/nginx-ingress-default-backend
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=default/nginx-ingress-controller
          securityContext:
            capabilities:
                drop:
                - ALL
                add:
                - NET_BIND_SERVICE
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
--
--
              containerPort: 8080
              protocol: TCP
          resources:
            {}

      serviceAccountName: nginx-ingress
      terminationGracePeriodSeconds: 60

---
# Source: nginx-ingress/templates/controller-hpa.yaml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
--
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 5
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 50

---
# Source: nginx-ingress/templates/controller-configmap.yaml


---
# Source: nginx-ingress/templates/controller-daemonset.yaml
mumoshu avatar
mumoshu

@yuri thx! probably this one helps?

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.17
    component: "controller"
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
mumoshu avatar
mumoshu

you should use the below patch then

    strategicMergePatches:
    - apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: nginx-ingress-controller
      spec:
        template:
          spec:
            dnsConfig:
              nameservers:
              - 169.254.20.10
              options:
              - name: attempts
                value: "3"
mumoshu avatar
mumoshu

apparently apiVersion was wrong

yuri avatar

strange, the chart is already installed on the cluster with apps/v1

mumoshu avatar
mumoshu

i thought recent k8s versions automatically translated deprecated apiVersion to the newest variant

yuri avatar

not sure, anyway with the beta api it works! perhaps someone manually changed the deployment on the cluster, will have to check on a fresh cluster. thank u very much for your support

mumoshu avatar
mumoshu

awesome! thanks a lot for testing

muhaha avatar

@mumoshu ping ^

2020-05-24

2020-05-26

2020-05-27

jason800 avatar
jason800

Has anyone else had trouble with values file includes in helmfile requiring different relative paths depending on the directory you’re executing helmfile from?

Andrew Nazarov avatar
Andrew Nazarov

I think I have some strange case if I understand you correctly. I’ll try to find the example. It’s not in the Slack anymore

Andrew Nazarov avatar
Andrew Nazarov

Ok, I found it in the SweetOps archive. My problem was the following:
I’ve just noticed a little issue with paths. When I run the helmfile command against some custom path and this helmfile contains “helmfiles:”, I have to make the helmfiles’ values path relative to the cli, i.e.
I run helmfile -f environments/dev/helmfile.yaml

inside this helmfile there is a helmfiles: block:

helmfiles:
  - path: git::https://my_user:{{ requiredEnv "REPO_TOKEN" }}@my_domain.com/my_repo.git@deployment/helmfile.yaml?ref={{ env "INFRA_VERSION" }}
    values:
      - ../../values.yaml

The folder structure is the following

├── environments
│   └── dev
│       ├── helmfile.yaml
│       ├── values.yaml


Is this an expected behaviour? Docs say the path in the manifest should be relative to this manifest.

jason800 avatar
jason800

yea, it sucks

jason800 avatar
jason800

there is an explanation i found

jason800 avatar
jason800
roboll/helmfile


Zachary Loeber avatar
Zachary Loeber

something I ran across the other day, ‘jxl’ (jenkins-x labs) includes enhancements for supporting helm 3 and helmfiles for spinning up apps : https://jenkins-x.io/docs/labs/enhancements/proposals/2/readme/

Boot Apps with Helm 3 and Helmfile


Jacob Harter avatar
Jacob Harter

Hi there, I’m curious if there is a way (and RTFM is a perfect answer) to get helmfile to print out just the rendered/merged values files for a specified environment. From what I understand it does all of this before executing the selected operation for the defined releases. I’d like to run some preflight/sanity checks on the values that are about to be applied to releases.

bradym avatar

Take a look at helmfile template; it will render the templates but not apply anything.

Jacob Harter avatar
Jacob Harter

This is basically exactly what I need, but instead of the rendered kubernetes manifests, I’m looking for the rendered values before it goes to render the manifests.

bradym avatar

Maybe helmfile --log-level debug template will include what you’re looking for? I don’t know of a way to print only the values and not the rendered template

Jacob Harter avatar
Jacob Harter

Ah, that’s a bummer.

2020-05-28

Aaron Brewbaker avatar
Aaron Brewbaker

How do I use release secrets if secrets have to be in environment values?
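
A minimal sketch, assuming the helm-secrets plugin and hypothetical paths: each release accepts its own secrets: list alongside values:, separate from environment-level secrets.

releases:
  - name: myapp                # hypothetical release name
    chart: ./charts/myapp
    values:
      - values/myapp.yaml
    secrets:
      - secrets/myapp.yaml     # decrypted via helm-secrets at deploy time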

2020-05-29

Craig Dunford avatar
Craig Dunford

Hello - is anyone using the new jsonPatches feature?

Shikhar Goel avatar
Shikhar Goel

Is there a way in helmfile to stop it from upgrading Jobs and StatefulSets? What currently happens is that i have labels in my helm charts, but when i use helmfile to upgrade the deployed helm charts it fails because Jobs and StatefulSets cannot be updated (i.e. cannot add labels, in my case).

Shikhar Goel avatar
Shikhar Goel

@mumoshu Is there some way to do this?

mumoshu avatar
mumoshu

Unfortunately no

mumoshu avatar
mumoshu

And I don’t quite get your situation…? If you have certain resources that you can’t update, you shouldn’t try to update them.

mumoshu avatar
mumoshu

I guess you can instead create a brand new release with another name with additional labels

Shikhar Goel avatar
Shikhar Goel

ok @mumoshu thanks..

1
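
A minimal sketch of the suggestion above, with hypothetical names: deploy the re-labeled chart as a second release, then retire the original once the new one is verified.

releases:
  - name: myapp                # existing release; flip installed to false after cutover
    chart: ./charts/myapp
    installed: true
  - name: myapp-v2             # new release carrying the additional labels
    chart: ./charts/myapp
    values:
      - values/myapp-v2.yaml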
Shikhar Goel avatar
Shikhar Goel

@mumoshu I have a use case in which the job’s env has a variable named method which changes when we try to upgrade the existing helm chart…is there a way we can force it to delete the existing job and create a new one?

Shikhar Goel avatar
Shikhar Goel

I have tried --force=true but it is not working

mumoshu avatar
mumoshu

well

mumoshu avatar
mumoshu

shouldn’t helm just delete the old job on helm upgrade --install in that case?

mumoshu avatar
mumoshu

would u mind sharing the error message you’re seeing?

Shikhar Goel avatar
Shikhar Goel

nope, it does not do so; anyway, we are using helmfile apply to upgrade all the helm charts

Shikhar Goel avatar
Shikhar Goel

Sure…

Shikhar Goel avatar
Shikhar Goel
in ./helmfile.yaml: in .helmfiles[2]: in environments/multinode/10-helmfile.yaml: failed processing release mysql: helm exited with status 1:
Error: UPGRADE FAILED: failed to replace object: Job.batch "mysql-load" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"com.fico.dmp/owner":"dmp-installer", "com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql"}: `selector` does not match template `labels`, spec.template.metadata.labels: Invalid value: map[string]string{"com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql", "com.fico.dmp/owner":"dmp-installer"}: `selector` does not match template `labels`, spec.selector: Invalid value: "null": field is immutable, spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"mysql-load", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"com.fico.dmp/version":"3.5.5p", "com.fico.dmp/chart":"mysql", "com.fico.dmp/owner":"dmp-installer"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"mysqlload", Image:"fico-dmp-docker-development.jfrog.io/fico/init-db-dev:3.5.5p_c6ad143ff8_2020-06-01_222556", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource{core.EnvFromSource{Prefix:"", ConfigMapRef:(*core.ConfigMapEnvSource)(0xc421c55f20), SecretRef:(*core.SecretEnvSource)(nil)}}, Env:[]core.EnvVar{core.EnvVar{Name:"MYSQL_PORT_3306_TCP_ADDR", Value:"mysql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_PORT_3306_TCP_PORT", Value:"0", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_ENV_MYSQL_DB_USER", Value:"DMP_ADMIN", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"MYSQL_ENV_MYSQL_DB_PASSWORD", Value:"Dmp1234567890!", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"hostname", Value:"mysql.drcluster.onprem.dmsuitecloud.com", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"port", Value:"3306", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"hostPath", Value:"mysql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"method", Value:"applyDeltaBasedOnCommaSeperatedSqlList", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releaseDeployed", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releaseToBeDeployed", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"deltaFilesList", Value:"create_dmp_mgr_ddl_v3.6.4-HF01_DMPR-43730.sql", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"releasesList", Value:"", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"installationType", Value:"install", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"ADM_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55f40)}, core.EnvVar{Name:"DMP_SERVICE_PROVIDER_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55f60)}, core.EnvVar{Name:"ODS_DATABASE_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc421c55fc0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), 
ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc4451f2778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"dedicated":"dmp-system"}, ServiceAccountName:"dmpsvcacct", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc4598600e0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(0xc421c55fe0), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration{core.Toleration{Key:"dedicated", Operator:"Equal", Value:"dmp-system", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil)}}: field is immutable]

Installation Failed on: Fri Jun 12 17:53:51 IST 2020
mumoshu avatar
mumoshu

hmm? so are you trying to change the pod selector of the job, right?

mumoshu avatar
mumoshu

i thought it was an immutable field indeed

Shikhar Goel avatar
Shikhar Goel

Yes, along with the method variable in the env: "method", Value:"applyDeltaBasedOnCommaSeperatedSqlList"

mumoshu avatar
mumoshu

you’d need to somehow change the job name, so that helm can create another job for the new selector

Shikhar Goel avatar
Shikhar Goel

Yes, that I can skip; the env should be updated and a new job should be created as part of the upgrade

mumoshu avatar
mumoshu

if your chart doesn’t support changing the job’s metadata.name, all you can do is create a brand new release for the new job with a different pod selector

Shikhar Goel avatar
Shikhar Goel

Yes, but I don’t want that…i used to put a timestamp in so that a new job is created

Shikhar Goel avatar
Shikhar Goel

but i want the job to be created only if there is some change in the job…not every time i run helmfile

mumoshu avatar
mumoshu

and your job’s pod selector can change at any time?

mumoshu avatar
mumoshu

then your job’s name should include a hash value calculated from the content of your job’s pod selector, at least

Shikhar Goel avatar
Shikhar Goel

Yes, it depends on the release. e.g. if we are currently on 3.5.0p it will remain the same, but if we give the customer our new release 3.5.5p it will change

Shikhar Goel avatar
Shikhar Goel

then your job's name should include a hash value calculated from the content of your job's pod selector, at least

How can i do that? Can you please elaborate?
mumoshu avatar
mumoshu

ok. the only way would be to improve your chart so that you can achieve https://sweetops.slack.com/archives/CE5NGCB9Q/p1591964910406700?thread_ts=1590774931.314100&cid=CE5NGCB9Q

then your job’s name should include a hash value calculated from the content of your job’s pod selector, at least

Shikhar Goel avatar
Shikhar Goel

ok i will give it a try. will it create a hash for the complete pod yaml or just the selector?

mumoshu avatar
mumoshu

not sure. that depends on your requirement.

mumoshu avatar
mumoshu

include all the fields your customer would like to change

Shikhar Goel avatar
Shikhar Goel

Sure, will give it a try. Thanks a lot for your help…It was really helpful

mumoshu avatar
mumoshu
helm/charts

Curated applications for Kubernetes.

Shikhar Goel avatar
Shikhar Goel

Cool, it looks great…It might solve the problem

mumoshu avatar
mumoshu

your job template should look like:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "yourchart.fullname" . }}-{{ toYaml .Values.whatever.your.customer.change | sha256sum | quote }}
mumoshu avatar
mumoshu

i hope it solves your issue. good luck!

Shikhar Goel avatar
Shikhar Goel

Thanks yup it will ..

1
Shikhar Goel avatar
Shikhar Goel
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "yourchart.fullname" . }}-{{ toYaml .Values.whatever.your.customer.change | sha256sum | quote }}

what is quote in this?

mumoshu avatar
mumoshu

it surrounds the argument with "

Shikhar Goel avatar
Shikhar Goel

ok Thanks i will read about it…

Shikhar Goel avatar
Shikhar Goel

Really Appreciate your help…Thanks

1
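
As an aside, a name generated this way has to stay a valid label value (the job-name label caps it at 63 characters), so in practice the hash is usually truncated and left unquoted. A hedged variant of the template above, with .Values.migration standing in for whatever your customers change:

apiVersion: batch/v1
kind: Job
metadata:
  # any change in the hashed values yields a new name, so helm creates
  # a fresh Job instead of mutating the immutable existing one
  name: {{ printf "%s-%s" (include "yourchart.fullname" .) (toYaml .Values.migration | sha256sum | trunc 8) }}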
Kenny Younger avatar
Kenny Younger

Hi. I see that if I use --debug I can see where helmfile generates the actual values.yaml file that helm uses to do the install/upgrade. I can even go view them on disk, which is really nice. Is there any way to generate a values file for a particular release? I tried a lot of things, and in particular helmfile build looked promising, but it doesn’t seem to have a lot of options, and only generates what the helmfile.yaml looks like (which is super helpful, don’t get me wrong), not the values for each release (which I template heavily).

mumoshu avatar
mumoshu

hey! unfortunately it isn’t possible today.

it can be implemented. what command would you like for that?

For example, it can be helmfile -f helmfile.yaml export-values RELEASE_NAME ./path/to/dir. not sure if you like this tho

Kenny Younger avatar
Kenny Younger

Hm. Yeah hadn’t really thought about the UI/UX of this.

I would just dump the yaml to stdout, personally. Let me deal with where they go.

helmfile export-values RELEASE_NAME would be all I need. I can always tack on > destination.yaml

Kenny Younger avatar
Kenny Younger

at least as a MVP of that command, I think that seems like a good start

mumoshu avatar
mumoshu

Thanks! That looks great

mumoshu avatar
mumoshu

Would u mind submitting a github issue/feature request for that, so that i won’t forget about it?

Kenny Younger avatar
Kenny Younger

Will do

1
Kenny Younger avatar
Kenny Younger
Add export-values subcommand · Issue #1286 · roboll/helmfile

I see that if I use --debug I can see where helmfile generates the actual values.yaml file that helm uses to do the install/upgrade. I can even go view them on disk, which is really nice. Is there …

1
Kenny Younger avatar
Kenny Younger

@mumoshu btw, helmfile is a killer utility. I have been absolutely in love with using it lately.

Kenny Younger avatar
Kenny Younger

Keep up the awesome work

mumoshu avatar
mumoshu

Thanks for your support

2020-05-30

Zachary Loeber avatar
Zachary Loeber

Can I use helmfile to apply straight yaml from a url as if it were a chart instead of having to transpose the thing into a raw chart?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, we have done that for configmaps

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Use exec curl

Zachary Loeber avatar
Zachary Loeber

ah, clever, thanks
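
A minimal sketch of the exec-plus-curl approach, assuming the incubator/raw chart and a URL that returns a single YAML document (names and URLs are illustrative):

repositories:
  - name: incubator
    url: https://charts.helm.sh/incubator

releases:
  - name: external-manifest
    chart: incubator/raw       # the raw chart renders whatever manifests it receives as values
    values:
      - resources:
          # exec runs curl while helmfile renders its own templates
          - {{ exec "curl" (list "-sSL" "https://example.com/configmap.yaml") | fromYaml | toYaml | nindent 12 }}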

2020-05-31

voron avatar

I’m trying to pass all helmfile release labels as helm values under helmfileLabels . Something like

    setTemplate:
    {{ range $key,$value := .Release.Labels }}
    - name: {{ printf "helmfileLabels.%s" $key }}
      value: {{$value}}
    {{ end }}

but, as expected, .Release.Labels is evaluated later than the range loop. Stuff like

    setTemplate:
    {{`{{ range $key,$value := .Release.Labels }}`}}
    - name: {{`{{ printf "helmfileLabels.%s" $key }}`}}
      value: {{`{{$value}}`}}
    {{`{{ end }}`}}

doesn’t work either. Any ideas on how to implement this?

mumoshu avatar
mumoshu

Maybe this one would work?

setTemplate: [
  {{`{{ range $i,$key := (keys .Release.Labels) }}{{ if gt $i 0 }},{{end}}{{ $value := (.Release.Labels | get $key) }}{"name":{{ ... }}, "value": {{...}}}{{ end }}`}}
]
voron avatar

I’ve started from simple

setTemplate: [{{`{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}`}}]

and I cannot get it to work. It renders to line #3

 3:     setTemplate: [{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}]

and the error: failed to read helmfile.yaml: reading document at index 1: yaml: line 3: did not find expected ',' or ']'

voron avatar

as after the first {{ }} removal we don’t get a valid array yet

voron avatar

it looks like it needs 3 passes instead of 2 to get it done

mumoshu avatar
mumoshu

that’s why I’ve added {{ if gt $i 0 }},{{end}} in my example?

voron avatar

Can’t we use

{{range pipeline}} T1 {{else}} T0 {{end}}
	The value of the pipeline must be an array, slice, map, or channel.
	If the value of the pipeline has length zero, dot is unaffected and
	T0 is executed; otherwise, dot is set to the successive elements
	of the array, slice, or map and T1 is executed.

to achieve the same w/o if ?

mumoshu avatar
mumoshu

Why do you think so? According to the error message you’ve shared, it seems like you do need to remove the trailing redundant “,” after the last array element (or the preceding redundant “,” before the first element)

and i don’t know how range helps that

voron avatar

I mean range else end instead of if end inside range

voron avatar

And I think the error message is not about an extra comma, it says it cannot find a comma or ]

mumoshu avatar
mumoshu

ah thanks. my fault

voron avatar

I cannot get range working inside string quote

{{`{{ range }}`}}
mumoshu avatar
mumoshu

well, where is setTemplate written in?

mumoshu avatar
mumoshu

helmfile.yaml?

mumoshu avatar
mumoshu

would u mind giving me more lines around setTemplate?

mumoshu avatar
mumoshu

or perhaps the whole helmfile.yaml

voron avatar
values:
  - i0: v0
    i1: v1
---
templates:
  default: &default
    chart: stable/nginx-ingress
    setTemplate: [{{`{{ range .Values }}{ name: "l0", value: "v0" },{{ end }}`}}]
releases:
  - <<: *default
    name: test
    labels:
      controller.image.repository: "image0"
      controller.image.tag: "1.0"
      label0: value0
      label1: value1
voron avatar

just a PoC, I wanna loop over Release.Labels

mumoshu avatar
mumoshu

ah ok i got it. you can’t use setTemplate for that

mumoshu avatar
mumoshu

i thought only value is rendered as a template under setTemplate

voron avatar

at the same time I’m able to use single labels values like

    valuesTemplate:
      - helmfileLabels:
          app: "{{`{{ .Release.Labels.app }}`}}"
mumoshu avatar
mumoshu

so probably it should look like:

valuesTemplate:
- helmfileLabels: {{`{{ range $i,$key := (keys .Release.Labels) }}{{ if gt $i 0 }},{{end}}{{ $value := (.Release.Labels | get $key) }}{ {{ $key | quote }} : {{ $value | quote }} }{{ end }}`}}
voron avatar

will try it

mumoshu avatar
mumoshu

ah well, you want a json object here, right? my above example won’t work as it renders to a single string

voron avatar

well, I need a map like

- helmfileLabels:
    label0: value0
    label1: value1
mumoshu avatar
mumoshu

gotcha

mumoshu avatar
mumoshu

seems like it’s impossible

voron avatar

ok, thanks for your time

mumoshu avatar
mumoshu

a workaround would be

mumoshu avatar
mumoshu
{{ $app1labels := (dict "key1" "val1" "key2" "val2") }}
{{ $app2labels := ... }}

releases:
- name: app1
  chart: ./charts
  values:
  - helmfileLabels:
      {{ toYaml $app1labels | nindent 6 }}
  labels:
    {{ toYaml $app1labels | nindent 6 }}
- name: app2
  chart: ./charts
  values:
  - helmfileLabels:
      {{ toYaml $app2labels | nindent 6 }}
  labels:
    {{ toYaml $app2labels | nindent 6 }}
voron avatar

release-specific labels are ignored in this way

mumoshu avatar
mumoshu

yeah, so you need to define template variables like $labels for each release

voron avatar

yes, it becomes a little bit unclear

mumoshu avatar
mumoshu

unfortunately, yes

voron avatar

hardcoding the label names passed in the template looks better with 2-4 labels, like

templates:
  eth: &eth
    ....
    labels:
      chart: my-chart-name
    valuesTemplate:
      - type: "{{`{{ .Release.Labels.type }}`}}"
      - chain: "{{`{{ .Release.Labels.chain }}`}}"
releases:
  - <<: *eth
    name: eth-blocks
    labels:
      chain: ethereum
      type: blocks
1