#helmfile (2020-02)
Questions and discussion around helmfile https://github.com/roboll/helmfile and https://github.com/cloudposse/helmfiles
Archive: https://archive.sweetops.com/helmfile/
2020-02-03
Has anyone set up keycloak using helmfile? We used to have to set everything up manually, and had a startup script to add the custom realms as well as a role-mapping, but would like to move away from that. I’ve managed to add the realms properly, but I’m not sure how to add the role-mappings without using some sort of script.
We deploy keycloak with helmfile, but only to the point the software is running.
The configuration therein is manual
Our keycloak helmfile is here: https://github.com/cloudposse/helmfiles/tree/master/releases
I’ve done something like that a long time ago - but with Ansible
was a pain in the ass
I have a command that calls an external source and returns JSON output, which I am currently using inside an `{{ exec }}` block in a helmfile template file for values. I don’t suppose there is a way in the template to unmarshal that JSON into a struct or map so I can access inner attributes easily?
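Since `exec` returns a plain string, one approach (a sketch — the command name and keys below are illustrative, not from the thread) is to lean on the fact that JSON is a subset of YAML and parse the output with helmfile’s `fromYaml` template function:
```yaml
# values.yaml.gotmpl (sketch; "./fetch-config" and the keys are hypothetical)
{{- $raw := exec "./fetch-config" (list "--format" "json") }}
{{- $cfg := $raw | fromYaml }}
databaseUrl: {{ $cfg.database.url }}
```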
Has anyone had any luck getting inline values working with release templating? I cannot get this example from the best practices doc to work:
```yaml
# inline values map
valuesTemplate:
- image:
    tag: "{{`{{ .Release.Labels.tag }}`}}"
# ...
```
In my case I’m trying to have access to `.Release.Name` in a values.yaml.gotmpl. It seems like the issue is Environment.Values completely clobbers my `values:` and `valuesTemplate:` definitions from my template, as I get `executing "stringTemplate" at <.Values.releaseName>: map has no entry for key "releaseName"` as an error in first-pass rendering.
When I strip out the environment section, the templates render without error, but I get: `err: no releases found that matches specified selector() and environment(staging), in any helmfile`.
In my helmfile when I reference `.Release.Name` I use {{`{{ .Release.Name }}`}} - have you tried that?
I feel like I’ve tried every variation of the double bracket syntax. I can use it for the path to a values file, but not for inline values, despite the example in the best practices doc. This works:
```yaml
valuesTemplate:
- config/{{`{{ .Release.Name }}`}}/values.yaml
```
This doesn’t seem to:
```yaml
valuesTemplate:
- releaseName: {{`{{ .Release.Name }}`}}
```
I think you need to wrap it in quotes when it’s used at the start of the line:
```yaml
labels:
  app: "{{`{{ .Release.Name }}`}}"
```
^^ that works in my helmfile
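For readers following along: the escape works because Go templates treat backtick-delimited text as a raw string literal, so helmfile’s first rendering pass emits the inner braces verbatim and the second pass (release templating) resolves them. Roughly:
```
{{`{{ .Release.Name }}`}}  --(first pass)-->  {{ .Release.Name }}  --(second pass)-->  my-release
```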
You are able to reference {{ .Values.labels.app }} in a template?
Trying your syntax above I still get `executing "stringTemplate" at <.Values.releaseName>: map has no entry for key "releaseName"` when I try to render it in a template.
Oh, I’m using that in the helmfile, not in a template.
May have mis-read what you posted.
Well, I would love to call `.Release.Name` directly in a template, but because it’s only in scope of the helmfile, the best practices doc suggests using an inline value - and I can’t get that to work.
I see something similar to your usage in a go test, but I don’t think it tries it with a template: https://github.com/roboll/helmfile/blob/fc75f25293055003d8159a841940313e56a164c6/pkg/app/app_test.go#L3701-L3702
I would imagine using `.Release.Name` in a values template would be a common use case, so I feel like I’m missing something obvious.
I’m doing a similar thing with `.Environment.Name`, not sure if it would work with `.Release.Name`. Here’s what I’ve got:
```yaml
templates:
  default: &default
    chart: stable/{{`{{ .Release.Name }}`}}
    labels:
      app: "{{`{{ .Release.Name }}`}}"
    namespace: infra
    missingFileHandler: Error
    values:
    - environment: {{ .Environment.Name }}
    - config/{{`{{ .Release.Name }}`}}.values.yaml.gotmpl
```
I don’t really understand the `valuesTemplate` thing, maybe try putting it in `values` and it’ll work?
I’ve also tried it in `values` and `valuesTemplate`, but happy to try again.
I’m new to helmfile, so I’m honestly just making some guesses and hoping they’ll work.
I am as well. I appreciate the help in any case
I ended up using helmfile’s overrides as a workaround to hardcode my release names: https://github.com/roboll/helmfile/issues/387#issuecomment-513737164
2020-02-04
Anyone able to get `--helm-binary` declared in their helmfiles? I’d prefer users not have to know which version of helm should be used.
@Jeremy G (Cloud Posse) you are doing this right?
No, I am using `export HELM_BINARY=helm3` to set which binary `helmfile` should use. I don’t think you can select from within the helmfile itself.
@Bart M. Here is another option that might prove less onerous.
hmm, couldn’t find anything about this in the source code? I would expect this to be present somewhere, but that env var is never referenced… unless that cli flag package does some magic?
Yeah, where is this thing referenced? I’m keen to know if it gets around some of the helm plugin issues I’ve seen where they ignore the users $PATH variable when looking for and using the helm binary
2020-02-05
How can I reference a Value in my environments.yaml.gotmpl? This produces an error indicating clusterName is not set, which is odd because in my releases I use .gotmpl for values files and the same Value can be found in that context. I run this via `helmfile -e developer apply` with the CLUSTER_NAME and AWS_REGION env vars exported correctly.
My current solution is to duplicate the `'{{ coalesce (env "CLUSTER_NAME") (env "CLUSTER") }}'` in the environments.yaml.gotmpl
this is the error: `err: error during environments.yaml.gotmpl.part.0 parsing: template: stringTemplate19: executing "stringTemplate" at <.Values.clusterName>: map has no entry for key "clusterName" in ./helmfile.yaml: error during environments.yaml.gotmpl.part.0 parsing: template: stringTemplate19: executing "stringTemplate" at <.Values.clusterName>: map has no entry for key "clusterName"`
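A sketch of that duplication workaround (key names are illustrative); the environment values file presumably can’t see `.Values` because it is what produces them, so it reads the env vars directly:
```yaml
# environments.yaml.gotmpl (sketch)
clusterName: '{{ coalesce (env "CLUSTER_NAME") (env "CLUSTER") }}'
region: '{{ env "AWS_REGION" }}'
```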
Is a top-level `values` map supported in a helmfile? I thought `values` had to be defined within an environment, release, or release template
Hmm, I’m not sure if it is supported but it does work. I can reference the values in the top level helmfile in my release values files
This makes sense for me since I want to single-source several different values from env vars; I prefer to do it in one place.
Interesting, I was going off https://github.com/roboll/helmfile#configuration.
> I want to single source several different values from env vars, I prefer to do it in one place.
Maybe readFile could be a good option? `{{ readFile "common.yaml" }}` - so it’d be 1) set env vars, 2) generate common.yaml programmatically from env vars, 3) run helmfile?
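A minimal sketch of that three-step flow (the file name and keys are illustrative):
```sh
# 1) env vars are already exported; 2) generate common.yaml from them
cat > common.yaml <<EOF
clusterName: ${CLUSTER_NAME:-${CLUSTER}}
region: ${AWS_REGION}
EOF
# 3) run helmfile, with helmfile.yaml pulling the file in via {{ readFile "common.yaml" }}
helmfile -e developer apply
```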
Has anyone done work to get helmfile (and diff etc) output in a computer readable format? Mostly looking for a JSON format of the info displayed after a release is completed.
@Roderik van der Veer that’s a cool idea! then we could use OPA to set up some policies.
@mumoshu have you seen anyone do that?
@Roderik van der Veer what’s your use-case for the JSON data?
We are using helmfile to orchestrate dedicated k8s clusters + a lot of services from a web platform. The only feedback that i can give to the user waiting is “start helmfile run” and “helmfile run complete”. If each “step” output a json log line, i could read those and show “deployed x”, “deployed y”, but also filter out failures when they happen.
aha, ok - so a bit different use-case
it might be as easy as supporting the json output from helm directly, we use that in some other cases (list releases etc)
sounds more like an ndjson log format rather than json output, but interesting use-case!
https://github.com/roboll/helmfile/issues/913 could be about adding `--output json` to helmfile.
this ndjson one could be about logging, possibly addressed by adding `--log-format json` (where the default is `text`) to helmfile.
2020-02-06
Where in helmfiles is `.Release` in scope? As far as I can tell, the only place I can access it is in release templates with double interpolation. I’m finding it very painful to get access to anything from `.Release` in my values templates. Anyone have any tips?
For example, I have several values where I’d like to do something like `someUrl: {{ printf "https://%s.mydomain.co" .Release.Namespace }}` in my values.yaml.gotmpl.
Luckily I realized `.Namespace` also exists and is in scope for the values templates, but I think the point about `.Release` remains for things like `.Release.Name`.
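A sketch of that workaround, relying on the poster’s observation that the top-level `.Namespace` is in scope in values templates (the domain is illustrative):
```yaml
# values.yaml.gotmpl (sketch)
someUrl: {{ printf "https://%s.mydomain.co" .Namespace }}
```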
2020-02-07
Is there a recommendation for storing the helm definition alongside the application code in the same git repo vs storing it in a separate infrastructure/IaC repo?
Good question. We keep terraform things separate, so I would like people to give their opinions here, because I am soon switching to a Helm structure as well. Currently we have a legacy setup of a bunch of yaml files (deploy, service, ingress etc) in a separate repo. CI/CD systems read from the repo, and control of that repo sits with the Devops/SRE people
yeah, our terraform is separate, too, which is why I could see it going either way….
I’m on the fence with this one. I personally keep a rather large omnichart (or archetype chart, or whatever one would call the thing) in its own repo, then use it in downstream per-app charts that can reside in the application repos themselves
the idea of keeping these massive standardized charts in each app repo just doesn’t appeal to me with any more than a handful of apps at play
I’m reading the README and it isn’t clear what the difference between apply and sync is. sync does `upgrade --install`; apply runs a diff before. but doesn’t that mean the same thing?
just speaking for where i work, we do the following:
1. git repository for each microservice app
2. src folder contains the app source code
3. charts folder contains the helm chart source
4. build process generates container image and helm chart, pushes both to container/chart registry
@James Huffman for #4, when you say the process generates the chart, you’re referring to a “helm package” call, right?
correct
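A hypothetical sketch of step 4 (the registry, paths, and push step are placeholders - chart pushing varies by registry and isn’t specified in the thread):
```sh
VERSION="$(git describe --tags --always)"
# build and push the container image
docker build -t "registry.example.com/myapp:${VERSION}" .
docker push "registry.example.com/myapp:${VERSION}"
# package the chart kept in the repo's charts folder
helm package charts/myapp --version "${VERSION}" --app-version "${VERSION}"
# then push the resulting .tgz with whatever tooling your chart registry supports
```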
as i understand it, `apply` is being deprecated and `sync` is the way forward
Where did you get this info?
i must have misread something. there were certain features that only worked on a `sync` and not `apply`, and it seemed as if `apply` had reduced capabilities.
apologies for any misinformation!
we recently migrated all our helmfile usage from `apply` to `sync`
@James Huffman so you keep the source chart and a backup in some registry as well. What tooling do you use for the Helm chart registry? S3 or something?
Azure Container Registry and Google Container Registry
you can store both container images and helm charts in those
@James Huffman for #4, do you keep the whole chart there or do you use subcharts at all?
because we have dozens of microservices, we’ve made a “common” chart for which every app’s chart is a thin subchart
I’m in the process of implementing this pattern myself
We are doing it the same way.
that way we can add features to every app’s chart just by changing the common one
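A sketch of that pattern in Helm 2 conventions (names and repo URL are illustrative); each app chart is a thin wrapper that declares the shared chart as a dependency and overrides its values under the dependency’s key:
```yaml
# myapp/Chart.yaml
apiVersion: v1
name: myapp
version: 0.1.0

# myapp/requirements.yaml (in Helm 3 this moves into Chart.yaml as dependencies:)
dependencies:
- name: common
  version: ~1.0.0
  repository: https://charts.example.com

# myapp/values.yaml - values nested under the subchart's name are passed down to it
common:
  image:
    repository: registry.example.com/myapp
```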
Does ECR support helm charts? We use ECR for dockerized images of microservices
@James Huffman That sounds sane to me
I totally used that pattern today to migrate ingress on a cluster. It made me feel like a wizard.
2020-02-08
I’m having some issues with helmfile + helm3 + dynamic clusterIPs. It is this issue: https://github.com/helm/helm/issues/7082#issuecomment-575514155 but when i put force to false in my helmfile i’m still hit with `Error: UPGRADE FAILED: an error occurred while cleaning up resources. original upgrade error: failed to replace object: Service "violet-reindeer-mint-webserver" is invalid: spec.clusterIP: Invalid value: "": field is immutable: unable to cleanup resources: object not found, skipping delete`
For us, setting `force:` to `false` helped. Before that, we had to delete the resource manually and then apply again.
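For reference, `force` is a per-release field in the helmfile state file; a sketch (release and chart names are illustrative):
```yaml
releases:
- name: mint
  chart: stable/mint
  force: false  # avoids `helm upgrade --force`, which replaces Services and trips the immutable clusterIP error
```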
it also puts my release in a failed state
and i have cleanupOnFail set to true, but all the pods and services appear to be there
and i do not have clusterIP in my templates in case you were wondering
You must have clusterIP: “” somewhere in your source template or this would not occur right?
in either case, I’ve found I’ve had to blow away the entire deployment to get around this three-way merge madness/issue myself :(
2020-02-10
Hmm, so is there a consensus on the differences between apply and sync?
apply is diff then sync if diff shows a change. Sync skips the diff step.
right, but if there aren’t any deltas with the resources in k8s, nothing happens either in the case of sync. It’s not clear what apply is trying to achieve.
sync still runs `helm upgrade --install` even if there is nothing to change. apply won’t do that if nothing has changed
apply is the intended command to use
is there some penalty for running `helm upgrade --install` if nothing has changed?
probably not, other than whatever compute/memory the command uses
then again, depending on your setup you might have your pods restarted for instance
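To summarize the thread (a conceptual sketch, not the literal implementation, and assuming `helmfile diff` passes through helm-diff’s `--detailed-exitcode` flag):
```sh
# helmfile sync  - always runs `helm upgrade --install` for every selected release
# helmfile apply - diffs first and only syncs when something changed; roughly:
helmfile diff --detailed-exitcode && echo "no changes" || helmfile sync
```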
2020-02-11
hmm doesn’t seem to be possible to define in a helmfile which helm version should be used?
it only seems to be possible on the cli
this is a bit messy since I currently have multiple environments using helm2 and helm3… now I have to specify the `--helm-binary` / `-b` flag every time
From within helmfile.yaml, is there any way I can detect if helmfile is being run with helm 2 or helm 3?
We have a shared repo that we’ll be migrating namespace by namespace, and I’d like to add logic that says “this helmfile.yaml is only deployed with helm 2 or 3”
~~I believe the right way is to set `helm_binary` inside each helmfile, then install both versions of helm.~~
looks like this parameter isn’t supported
Thanks for the help, maybe I can find out another way
If you’re using alpine linux, we distribute `helm2` and `helm3` packages
2020-02-14
Wish there was a cloudposse group I could @ mention, but I have a very small PR to make a helmfile a bit more friendly: https://github.com/cloudposse/helmfiles/pull/224
they are very active on here…I bet you won’t wait too long
Usually don’t
@Maxim Mironenko (Cloud Posse) is working full time on our PR backlog
It’s a 5 minute PR tops, unless I missed some naming convention - I matched a similar one I found in the Sentry helmfile
@Alex Siegman here it is: <https://github.com/cloudposse/helmfiles/releases/tag/0.92.0>
2020-02-16
Is there a way to have the u/p for a repository come from a file? Trying to do the following, where the ‘docker’ key is part of an environment’s values file. I tried making a base repositories file and making it a gotmpl, but no joy there.
```yaml
repositories:
- name: settlemint
  url: https://harbor.settlemint.com/chartrepo/launchpad
  username: {{ .Values.docker.username }}
  password: {{ .Values.docker.password }}
```
It’s probably related to the double rendering bug
is `helmfiles:` the only section of a helmfile that can take git repo refs/paths? E.g. can I reference another file in `bases:` or `values:`?
I guess I could use `exec` to do a git command to clone/cat the file, but I was wondering about native support, especially for `bases:`
you can use `readFile` to pull things in from a file
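A sketch of that (paths are illustrative): `readFile` inlines a local file verbatim before the YAML is parsed, so shared sections can live in one place - though for git-hosted files you would still need a pre-step (or `exec`) to fetch them locally first:
```yaml
# helmfile.yaml (sketch)
{{ readFile "../shared/repositories.yaml" }}

releases:
- name: myapp
  chart: stable/myapp
```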
2020-02-17
hey all. has anyone deployed the elk stack with metricbeat on aws with helm? I’m going with the defaults and it sort of works, but i get an error on one of the monitoring services
2020-02-18
Hi all, is helmfile usable with helm3 ?
yes
Thnx
Is there a way to pre-render a single release? i don’t see any options in `helmfile template --help`
`helmfile -e your-environment -l name=name-of-release template`
Thanx, works - missed the global options, sorry
Hi all, how do I make `helmfile` delete a chart when I remove it from `helmfile.yaml`? Neither `helmfile apply` nor `helmfile sync` seems to work for me.
try using the `installed` option: set it to `false` and do `apply` (here is documentation: <https://github.com/roboll/helmfile/blame/master/README.md#L160>)
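A sketch of that suggestion (release and chart names are illustrative):
```yaml
releases:
- name: myapp
  chart: stable/myapp
  installed: false  # the next `helmfile apply`/`sync` deletes the release instead of upgrading it
```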
thnx, will try
So, should I set this flag on each release separately?
Ahh, seems I got the idea - instead of removing a release from `helmfile.yaml` I should set this flag. Am I right?
@Ievgenii Shepeliuk you are right!
Excuse me, are you a helmfile dev? Could you plz explain the motivation for this flag? Why not just purge releases if they are missing from `helmfile.yaml`?
Hi all, I’m doing `helmfile -n myns apply` and then `helmfile -n myns status` from my CI/CD jenkins pipeline. Although `helmfile apply` executes successfully, `helmfile status` fails with the following error:
```
23:34:14 err: release "eshepelyuk-fastapi" failed: helm exited with status 1:
23:34:14 Error: release: not found
23:34:14 err: release "hrzn-docs" failed: helm exited with status 1:
23:34:14 Error: release: not found
23:34:14 in ./helmfile.yaml: 2 errors:
23:34:14 err 0: release "eshepelyuk-fastapi" failed: helm exited with status 1:
23:34:14 Error: release: not found
23:34:14 err 1: release "hrzn-docs" failed: helm exited with status 1:
23:34:14 Error: release: not found
```
Locally, both commands work. Can anyone give any advice?
Anyone plz, at least some clues where to start from ?
maybe supply the namespace with helmfile status command?
My helmfile.yaml doesn’t have a namespace, though. I rely on the current NS, provided by the -n flag.
Well, after checking the debug log, i see that `-n` is not passed to the `helm status` command from `helmfile status`:
```
23:20:47 Getting status eshepelyuk-fastapi
23:20:47 exec: helm status eshepelyuk-fastapi
23:20:47 worker 1/2 started
23:20:47 Getting status hrzn-docs
23:20:47 exec: helm status hrzn-docs
```
Is it a bug ?
I found this PR https://github.com/roboll/helmfile/pull/1098 that should have fixed it in 0.99.1. I am using 0.100.0, and the bug is still here
Interesting observation when trying to work around as suggested in that PR:
- when running `helmfile status -n myns --args "--namespace myns"` - I receive the same error
- when running `helmfile status --args "--namespace myns"` - all works fine
So, apparently there’s a bug with the `-n` flag in the `helmfile status` command. Could someone confirm?
@mumoshu
I think the PR should be reopened
Another question is about `helmfile.lock`. In case I’m not gonna use version ranges (i.e. `1.x` or `~2.0`), is there any reason to care about `helmfile.lock`? Can I put it in gitignore?
2020-02-19
Anybody here ?
You can safely ignore the `helmfile.lock` if you’re not going to use the functionality.
Thnx
@Ievgenii Shepeliuk Hey! Are you sure you’re using the same version of Helm(3 or 2) on your machine and Jenkins?
Yes i am
The difference i see: locally i use windows, and in jenkins we run helmfile from a debian-based docker container. Actually it doesn’t matter, since the bug is reproduced locally too.
`-n` just doesn’t work for `helmfile status`
@mumoshu yes, `helm list` works fine
Everything works except the `-n` flag of `helmfile status`. `--args` works.
Okay thanks! Not sure why `--args` works, but anyway, it does seem like a bug.
I’ll fix it tonight, but if you have some time until then, could you try adding `st.ApplyOverrides(&release)` here?
FYI, this should be fixed in v0.100.1 via https://github.com/roboll/helmfile/pull/1108
@Ievgenii Shepeliuk Thanks. One more thing - does `helm list -n myns` show you `eshepelyuk-fastapi` on both your machine and Jenkins?
2020-02-20
I need to configure replica sets with different configurations for mongodb - https://github.com/helm/charts/tree/master/stable/mongodb-replicaset/templates. could anyone point me in the right direction?
I’ve got a question about nested helmfiles and the values key in nested helmfiles. What exactly is that meant to override, and what format is expected?
Assuming you are talking about the highlighted key under releases - it is equivalent to passing a values.yaml file using `helm install --values values.yaml`. The benefit is (as the docs show) that you can do it in a bunch of different ways, such as passing a `values.yaml.gotmpl` file with full templating power
I am not. I am referring to the helmfiles key, as we’re including a nested state - however, it’s not clear where these values are used or in what format, and I’ve not found any way to override or influence values anywhere in the nested state
The `helmfiles.values:` key is for overrides only, not for setting new values. I don’t actually use the `values:` key here, as we use environment variables as inputs to the nested helmfiles.
overrides of what
the values in the nested helmfile
Sure, so I have values - set via a yaml file, in the nested helmfile which includes several releases
I’m trying to override a value in that yaml file, and there’s no documentation for how to key into that release’s values and set the override
would an example of our helmfile structure help?
I’ve never used the values key here. Perhaps the resident experts like @mumoshu or @Erik Osterman (Cloud Posse) might have more insight?
Thanks, I’ll create a sample repo in the mean time to outline what we’re attempting to do
An example layout of our repo: https://github.com/marcoceppi/nested-helmfiles We have nearly 100 sites which all start from the same base template and generally have the same config. Only a few values in a few configs need to be overwritten. Instead of duplicating the master 100 times we’re looking to use nested states to grab a master template and override a few sections of config
@Marco Ceppi Hey! What you’re missing there is explicit propagation of helm state values to helm chart values, here: https://github.com/marcoceppi/nested-helmfiles/blob/master/models/unit1/helmfile.yaml
https://github.com/marcoceppi/nested-helmfiles/blob/970aa58ad4ec638e71f6792b48d58ae479d093c3/deploy/site1/helmfile.yaml#L5 means that you’re delegating to models/unit1/helmfile.yaml with helm “state values” of:
```yaml
config:
  ip: 10.11.0.1
```
It’s now models/unit1/helmfile.yaml’s responsibility to use the state values to correctly render helm charts
@mumoshu Thank you for the reply. We ended up coming to the same conclusion after /a lot/ of experimentation. The result feels a little wonky, but it works
Usually, your model helmfile.yaml should look like this:
```yaml
repositories:
- name: vio
  url: https://charts.vio.sh

releases:
- name: {{ .Environment.Name }}-test
  namespace: default
  chart: vio/test
  version: 0.0.1
  values:
  - ./values/test.yaml.gotmpl
  - {{ .Values | toYaml | nindent 4 }}
```
so that it will (internally) be rendered to:
```yaml
releases:
- name: {{ .Environment.Name }}-test
  namespace: default
  chart: vio/test
  version: 0.0.1
  values:
  - ./values/test.yaml.gotmpl
  - config:
      ip: 10.11.0.1
```
Glad it worked!
JFYI, many people have asked that state values should “automatically” be propagated to helm chart values. But that breaks existing use-cases and also makes it fundamentally difficult to use helm state values as “intermediates” to render chart values
Yeah, we ended up with something like this:
```yaml
environments:
  default:
    values:
    - site_name: {{ .Values.site_name | default "virt-gcp" }}
      site_domain: {{ .Values.site_domain | default "virt0" }}
      plc_host: {{ .Values.plc_host | default "10.193.0.250" }}
      snmp_host: {{ .Values.snmp_host | default "10.193.0.201" }}
```
At the top of the models/unit1 helmfile. These values get sensible defaults but can also be overridden by the deploy/site helmfiles. The values end up getting used in values.yaml.gotmpl for each of the releases when needed
Feels weird, but works consistently so I’m happy with it and it’s pretty explicit so it leaves a nice cookie trail for future developers
I was going to suggest
```yaml
values:
- defaults.yaml

environments:
  myenv:
    values:
    - myenv.yaml
```
which overlays myenv.yaml on top of defaults.yaml when run with `helmfile -e myenv apply`
The concept of helmfile state values was a topic not clearly outlined, and it took us a lot of example-hopping to realize that you could leverage values in the helmfiles themselves (be it from environment, etc). Once we grok’d that, the path forward was a lot more clear
but yeah yours seems better for readers
We’ve not had as much success with environments cascading properly, but it was where we started first
I guess I didn’t realize we could just set `values` as a top-level key
maybe i’m not following you correctly, but they don’t cascade. but you can layer environment values as i’ve done
that makes sense. i thought the top-level `values` isn’t well documented (either, our bad)
Right, we were setting the environment “overrides” in the site based helmfile but not seeing the values propagate to the model file
No worries. Once we finish up our deployment templates I’ll try to get it at least updated in the README.md template via PR
> Right, we were setting the environment “overrides” in the site based helmfile but not seeing the values propagate to the model file
i think i’m catching up now.
so, you tried “overriding” state values of the “model” helmfile.yaml via https://github.com/marcoceppi/nested-helmfiles/blob/970aa58ad4ec638e71f6792b48d58ae479d093c3/deploy/site1/helmfile.yaml#L3-L5 , right?
as you may already know since you got it to work, it DOES override the state values of the model helmfile.yaml. in your case, it was just that they don’t automatically cascade to helm “chart” values
We were declaring the environment in there:
```yaml
environments:
  default:
    values:
    ...
```
That was happening in that file, but we couldn’t seem to get that to properly propagate to the nested helmfile
So we went with the solution I outlined earlier
ah yes, it doesn’t propagate automatically, either. it is basically “opt-in” or “selective”.
Just trying to input ideas that aren’t documented well, but if your values files are structured uniformly, you can make your site helmfile.yaml like this:
```yaml
values:
- site1.defaults.yaml

environments:
  site1Prod:
    values:
    # helmfile -e site1Prod makes .Values = site1.defaults.yaml + site1.prod.yaml
    - site1.prod.yaml

helmfiles:
- path: ../../models/unit1/helmfile.yaml
  values:
  # this way you don't need to repeat keys already defined in site1.defaults.yaml and site1.prod.yaml
  - {{ .Values | toYaml | nindent 4 }}
```
Not sure what works best for you, but please consider this as an option if you like
2020-02-21
Guys, i don’t know who the maintainers are, but please accept a great appreciation from our team to all of them.
we’ve been able to adopt helmfile in a day (sure with some issues, i’ve asked in this chat)
and this tool really enriched our CD pipelines, so many problems solved and so easy to adopt
it’s really a great tool for k8s ecosystem
pity, we’re not a golang team and can’t contribute freely as we do to JVM OSS projects
@scorebot let’s keep tabs!
excuse me, what does it mean ?
oh haha, you can ignore that. I just added the scorebot so we can reward everyone for helping out.
okok
@scorebot has joined the channel
Thanks for adding me! Emojis used in this channel are now worth points.
Wondering what I can do? Try `@scorebot help`
2020-02-23
Hello, this is my first post here.
I would like to provision thousands of namespaces - one for each customer, or even several namespaces per customer. Every day more and more customers would join, and more and more namespaces would be created.
Would `helmfile` accommodate that? What would be the best practices in this case, please?
Thanks a lot
sure, but it may be more efficient to just script out the creation of namespaces based on your naming convention and flat yaml/json at those numbers.
Here is a great write up on kube thresholds and scalability: https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md#kubernetes-thresholds
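A minimal sketch of the scripted alternative mentioned above (the naming convention and input file are illustrative):
```sh
# one namespace per customer id, read from a flat list
while read -r customer; do
  kubectl create namespace "customer-${customer}" || true  # tolerate namespaces that already exist
done < customers.txt
```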
@Zachary Loeber: Thanks a lot
2020-02-25
@Richard Gomes thousands would be the point where I’d consider writing a dedicated thing in Go if all these things are very similar… you can for example use helm3 as a go library, and accessing the API is pretty easy…
`{{- $namespace := requiredEnv "GITLAB_KUBE_NAMESPACE" }}` - how can I make that fail if the string is longer than 23 characters?
I’d suggest using standard helm template functions instead:
• `required`
• `ternary`
• `env`
So the idea is: if the env var exists and is longer than 23 characters, or if the env var doesn’t exist, you set the value to null via `ternary`, and then use `required` on that value, which will fail - eventually resulting in what you want
ended up going with
```
{{- $namespace := requiredEnv "GITLAB_KUBE_NAMESPACE" }}
{{- if gt (len $namespace) 23 }}
{{- fail "GITLAB_KUBE_NAMESPACE may not exceed 23 characters" }}
{{- end }}
```
Well this is kinda clever -> https://github.com/bitsofinfo/helmfile-deploy
That’s interesting
Feels quite similar to what we are doing with Helmfile and monochart, but his more opinionated approach is appealing
@Igor Rodionov
maybe i’m doing it wrong, but i did not expect that, with 2 releases sharing the same name - one in a cluster-specific helmfile and one later in a common helmfile - helmfile would take the later one
```
joey@isp : ~/dev/personal/minikube/helmfile > tree
.
├── common
│   └── helmfile.yaml
└── v1.14.9
    ├── helm_values
    │   └── prometheus.yaml
    └── helmfile.yaml

3 directories, 3 files

joey@isp : ~/dev/personal/minikube/helmfile > cat v1.14.9/helmfile.yaml
releases:
- name: prometheus
  namespace: monitoring
  chart: stable/prometheus
  version: ^9.0.0
  values:
  - helm_values/prometheus.yaml

helmfiles:
- path: ../common/helmfile.yaml

joey@isp : ~/dev/personal/minikube/helmfile > cat v1.14.9/helm_values/prometheus.yaml
alertmanager:
  enabled: false
pushgateway:
  enabled: false

joey@isp : ~/dev/personal/minikube/helmfile > cat common/helmfile.yaml
repositories:
- name: stable
  url: https://kubernetes-charts.storage.googleapis.com

releases:
- name: prometheus
  namespace: kube-system
  chart: stable/prometheus
  version: ^9.0.0
- name: grafana
  namespace: monitoring
  chart: stable/grafana
```
in this case, when running `helmfile diff` or `helmfile apply` in the v1.14.9 folder, i would’ve expected helmfile to take the prometheus definition with alertmanager and pushgateway disabled and put prometheus in the `monitoring` namespace, but instead helmfile chose the last specification, from ‘common’.
when i rearranged the cluster-specific helmfile.yaml to have ‘common’ included first, prometheus was installed without alertmanager and pushgateway enabled, so i know it read the values from the releases in the cluster-specific folder - but it was still installed in the kube-system namespace, so it read the namespace from the first definition as opposed to the last definition of the release.
is that working as intended?
2020-02-26
anyone else that’s deploying prometheus-operator getting this issue? It was supposedly fixed in helm 2.14, but even testing with more recent versions of helm2 (2.16.3) or helm3, I still get the error:
```
failed processing release prometheus-operator: helm exited with status 1:
Error: validation failed: unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
```
What is your version of the chart?
`version: "6.2.1"` - I guess I should bump the version to 8.9.2; i had taken it from the cloudposse helmfile
Hm, we had some troubles in the past, but cannot say about this one for sure. We are successfully deploying 8.7.0 with Helm 3.
I still run into this one every so often. Usually it is ordering of my deployments (so if I try to install something that includes a servicemonitor but don’t have prometheus installed, locally for instance). I’d ensure that you are not combining other charts with the operator deployment.
I am just installing the prometheus-operator helmfile. It includes the crd install hooks beforehand
Perhaps do it in phases then, first with just the operator, then once again with the default rules. Looks like their chart has been updated since I used it and it now enables a boat load of default rules to be installed in the same chart.
those default rules create the servicemonitors (I’m assuming at least)
typically the pre-hook install of crds would catch this I’d think….
@Zachary Loeber I also feel like the pre-hook installs of the crds should work, which is what’s weird to me
I jumped to helm 3.1.1 and it seems to work now on the latest prometheus-operator version (8.9.2)
there’s no more pre-hook CRD installs in helm3
it’s been replaced by a directory in the helm chart called `crds`, which by their chart history was only created 8 days ago, and 8.9.2 was cut 5 days ago
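The layout being described, sketched with illustrative file names; Helm 3 applies everything under `crds/` before rendering templates, and those files must be plain YAML (no templating, no hooks):
```
prometheus-operator/
├── Chart.yaml
├── crds/
│   └── servicemonitors.yaml   # applied first by Helm 3, never templated
├── templates/
└── values.yaml
```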
would that apply to a helm 2 chart?
or would it honor pre-hook configs still? (guessing not, the helm 3 upgrade has been a little goofy)
hrm. I will prob leave the prehook crd config in my chart although I am on 8.9.2
> it’s been replaced by a directory in the helm chart called `crds`, which by their chart history was only created 8 days ago and 8.9.2 cut 5 days ago
The crds directory was created long ago: https://github.com/helm/charts/commit/89b233eef6dbc1b6fac418bde3a5a6f4e14406d4#diff-4dbc73be2076b0f18519c7c8b1add2b2
I know it’s a race condition that I can easily script around, but it reads to me like it should be fixed?