#helm (2022-06)
Archive: https://archive.sweetops.com/helm/
2022-06-01
Bitnami chart repo appears to be broken:
$ curl -Ss https://charts.bitnami.com/bitnami | xmllint --format /dev/stdin
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3JJ1S8D309MNZNFT</RequestId>
<HostId>JD7bxIL1yZW9N+OX4yl5JWgcL6e1TR9TMGWY2YEVYUYor2mwM8EJw9XXTS1GH8hwAY2L/Begmhc=</HostId>
</Error>
This happens from time to time with bitnami repos. We do automatic retries to mitigate this problem.
And they are trying to mitigate it on their side: https://github.com/bitnami/charts/issues/10539
However, their solution broke our dependencies.
As reported in this issue (#8433), we have recently been facing some issues with the index.yaml associated with the Bitnami Helm charts repository.
Current situation
After some investigation, it seems the root cause is related to CloudFront reaching some limits due to the volume of traffic when serving the index.yaml.
This index.yaml contains all the Bitnami Helm charts history (around 15,300 entries), producing a pretty fat 14MB file. Given the size of the file and the volume of traffic, thousands of terabytes of download traffic per month are being generated.
One of the alternatives considered was enabling compression at CloudFront; however, that doesn't work because compression is not used by the Helm client (helm) itself (see helm/helm#8070), so it doesn't solve the reported issue.
Mitigation
As the first line of action, we will reduce the size of the index.yaml by removing some old versions and keeping only the versions from a recent period of time (the last 6 months).
:warning: Please note this action is not removing/deleting any Helm chart: packaged tarballs (.tgz) won't be removed; this action only affects the index.yaml used to list the Helm charts. Previous versions of the index.yaml can be used to install old Helm charts.
Result
Applying this approach (#10530), we obtained the following results:
Total chart versions
* Before: 15260
* Removed: 12138
* After: 3122
Producing a reduced 3.5MB index.yaml.
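(If you want to sanity-check the size of the published index yourself, a quick one-liner works; this just downloads the index and counts its bytes:)
$ curl -sS https://charts.bitnami.com/bitnami/index.yaml | wc -c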
:wrench: Workaround for previous versions
The index.yaml is stored in this repository under the index branch; users should be able to use any commit in that branch to add a previous version of the index.yaml.
• Manually using helm repo add
$ helm repo add bitnami-pre-2022 https://raw.githubusercontent.com/bitnami/charts/eb5f9a9513d987b519f0ecd732e7031241c50328/bitnami
"bitnami-pre-2022" has been added to your repositories
• When used as a dependency in Chart.yaml:
  - name: postgresql
    version: 8.1.0
-   repository: https://charts.bitnami.com/bitnami
+   repository: https://raw.githubusercontent.com/bitnami/charts/eb5f9a9513d987b519f0ecd732e7031241c50328/bitnami
    condition: postgresql.enabled
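For example (just a sketch; the release name and chart directory here are placeholders), once the pinned repo is added you can install an old chart version from it, or re-resolve a chart's locked dependencies against it:
$ helm install my-postgres bitnami-pre-2022/postgresql --version 8.1.0
$ helm dependency update ./my-chart   # re-resolves Chart.yaml deps and refreshes Chart.lock against the pinned index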
Yeah, broke a dependency for us too.
And they’re working on it, just found this github issue: https://github.com/bitnami/charts/issues/8433
Which chart: Not a chart, but the bitnami helm repo - helm repo add bitnami https://charts.bitnami.com/bitnami
Describe the bug
As part of CI tests, we add the bitnami repo and do some chart-testing. This breaks the CI flow. Is there any option to retry, or is this a known issue?
To Reproduce
Steps to reproduce the behavior:
helm repo add bitnami https://charts.bitnami.com/bitnami
Error: looks like “https://charts.bitnami.com/bitnami” is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Version of Helm and Kubernetes: 3.5.4
Additional context
This doesn't always happen, but it fails at least 2 out of 10 times.
Seems to be an intermittent issue; a retry seems to be the current workaround.
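Something like this in a CI step works as a stopgap (just a sketch; the attempt count, sleep, and --force-update flag are my own choices, not from the thread):
for i in 1 2 3 4 5; do
  helm repo add bitnami https://charts.bitnami.com/bitnami --force-update && break
  echo "attempt $i failed, retrying in 10s..."; sleep 10
done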
2022-06-08
2022-06-19
Working on drafting a Helm chart from scratch, my first ever, and could use a sounding board for suggestions and ideas on a situation I'm seeing.
I currently have an initContainer that launches and executes DB migration/init steps, which is fine if replicas: 1 is set. But if I increase the replica count, as I would expect outside of dev, the initContainer pods go into a locked state because the DB access isn't exclusive. So I really need a way to execute the migration/init stage as its own pod, without using an initContainer, and have the main container for the deployment hold until the migration/init pod has completed and terminated. I think this is also why my helm upgrade hangs: it launches the new deployment, which hangs on the init process.
Maybe a single job that is started and have something for the main pods to hold their horses?
Yeah, that's what I'm trying now. I've cut the database migration/init out of the initContainer and into a Job. I haven't found a way to get the main Deployment to wait before starting, or to pause during an upgrade, so the migration/init Job can execute first.
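One common pattern for this is to make the migration Job a Helm hook: Helm runs pre-install/pre-upgrade hooks and waits for them to complete before it creates or updates the Deployment, which gives you the ordering without an initContainer. A minimal sketch (the chart helper name, values, and migrate.sh entrypoint are assumptions, not from your chart):
# templates/db-migrate-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mychart.fullname" . }}-db-migrate
  annotations:
    # Helm runs this before install and before every upgrade, and blocks
    # until the Job completes before touching the Deployment.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["/bin/sh", "-c", "./migrate.sh"]  # your migration entrypoint (assumed)
If the hook Job fails, the install/upgrade fails too, so the existing ReplicaSet keeps running, which is usually what you want for migrations.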
2022-06-20
2022-06-23
Does anyone know of an alternative that's like Infracost for Helm charts?
kubecost maybe
Kubecost helm chart
I’ve never used it or heard about it until today
here’s a blog post on it https://loft.sh/blog/kubernetes-cost-monitoring-with-kubecost/
A hands-on look at using Kubecost to observe the costs of your Kubernetes clusters
Thanks. That way you'd have to create the diff yourself, and only after the fact.
The beauty of Infracost is that you can see the cost diff upfront.
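If you do end up building the diff yourself on top of Kubecost, one rough approach (a sketch; it assumes the default kubecost-cost-analyzer service name and its allocation API on port 9090, and that you wait long enough after the change for new usage data to accrue):
$ kubectl -n kubecost port-forward svc/kubecost-cost-analyzer 9090 &
$ curl -sS "http://localhost:9090/model/allocation?window=7d&aggregate=namespace" > before.json
# ...apply the Helm change, wait for new usage data...
$ curl -sS "http://localhost:9090/model/allocation?window=1d&aggregate=namespace" > after.json
$ diff <(jq -S . before.json) <(jq -S . after.json)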