#codefresh (2020-03)
Archive: https://archive.sweetops.com/codefresh/
2020-03-05
I’m trying to build 2 images. The second one depends on the first one. I’m trying to pass NGINX_IMAGE=${{build_nginx}} as a build argument, but the value comes in blank.
Is there something that I’m missing here, or do I have to construct the image name myself (which I would prefer to avoid).
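For context, the setup being described could be sketched roughly like this (step and image names here are assumptions, not the actual pipeline):

```yaml
# Hypothetical two-build Codefresh pipeline: the second image
# consumes the first one through a build argument.
steps:
  build_nginx:
    type: build
    image_name: my-org/nginx-base
    dockerfile: Dockerfile.nginx
  build_app:
    type: build
    image_name: my-org/app
    dockerfile: Dockerfile
    build_arguments:
      # Referencing the bare step name comes through blank;
      # the step metadata fields (imageId, imageName, tag) resolve instead.
      - NGINX_IMAGE=${{build_nginx}}
```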
That’s a special SHA that is understood by the pipeline and not something that would work outside of our image construct.
Let me look into a couple things and come back to you on this Harrison.
Thanks.
I would think it should work for anything though, because really it should just represent the image name.
Here is the variable you are likely looking for but even that will possibly need some modifications ${{build_nginx.imageId}}
Gotcha, I will give it a shot.
That at least prints image information. I am looking over some other details.
Example output: r.cfcr.io/dustinvanbuskirk/dustinvanbuskirk/rails-app-with-knapsack_pro-test@sha256:f65847a0ee9f868582a7b36a5602077c6708414a72ac71a679472f1a10939b9c
That link doesn’t work but I am giving this a shot myself
Sorry Slack just made it a link…
It’s the output of the imageId from one of my builds.
Oh I gotcha.
I would think that should work. Thanks for the quick answer. The pipeline is running right now.
Here is the tag example.
Executing command: echo ${{BuildTestDockerImage.tag}}
master-a6a1d3f
Oddly this did not work. I must have done something wrong.
Oh, the change didn’t push for some reason, that’s why
Not necessarily. Like I said that SHA is something internal to us.
I am checking the name as well so you can concat the two
Executing command: echo ${{BuildTestDockerImage.imageName}}
dustinvanbuskirk/rails-app-with-knapsack_pro-test
It did push, I just forgot I had this in 2 spots, so I’m trying again
You might need to do NGINX_IMAGE=${{build_nginx.imageName}}:${{build_nginx.tag}}
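As a sketch, that suggestion plugged into the second build step (step and image names assumed) would read:

```yaml
build_app:
  type: build
  image_name: my-org/app
  dockerfile: Dockerfile
  build_arguments:
    # Concatenate the image name and tag emitted by the earlier build step
    - NGINX_IMAGE=${{build_nginx.imageName}}:${{build_nginx.tag}}
```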
Okay, hopefully this helps. Ping me using @dustinvb if this doesn’t get you where you need to be.
What’s weird is I am getting invalid reference format: repository name must be lowercase with both ways of doing it.
That is a little bizarre; maybe with the second approach we need the repo information as well.
I’m running it again with a step to output what the value is for debug purposes.
r.cfcr.io/linioit/shop-front-nginx@sha256:d19c2542673f9c191cdc66a43960052e4a65ad3b00718c69776778840d428d46
was the value
So this is weird
It works when I do it locally. Could it be the version of docker being used by venona?
That should be 18.09 last I checked. I assume the node Docker is the same or newer.
I would imagine so. We’re on GKE 15
Could CF be passing the value in caps for some reason?
Your Dockerfile: is it the first line, the FROM? Or the build arg?
No, it shouldn’t be doing that. I printed those vars using echo in my step; they were all lowercase. The build arg should not change the case.
ARG NGINX_IMAGE=gcr.io/linio-support/shop-front-nginx:latest
FROM $NGINX_IMAGE AS ASSETS
build_arguments:
- NGINX_IMAGE=r.cfcr.io/linioit/shop-front-nginx@sha256:d19c2542673f9c191cdc66a43960052e4a65ad3b00718c69776778840d428d46
This worked
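For reference, the working pattern above depends on the ARG being declared before the first FROM; a build arg declared after a FROM is not visible to it. A minimal sketch of the same Dockerfile:

```dockerfile
# ARG must precede the first FROM to be usable in it;
# the value here is only a fallback default.
ARG NGINX_IMAGE=gcr.io/linio-support/shop-front-nginx:latest
FROM $NGINX_IMAGE AS ASSETS
```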
So it is 100% CF messing up the value when passing it in
Can you open a Zendesk that includes the Build ID of the build with the error about capitalization so we can both review the YAML and see the error?
Yeah.
Can you also share the build id with me over DM here so I can just quickly peek into the YAML?
Sure, one second
Thanks, I recreated and will update ticket here soon.
In the meantime, can you try this as a workaround? Let me know if you have better results using a direct variable instead of the step metadata in build_arguments.
ExportVar:
  image: alpine
  commands:
    - cf_export NGINX_IMAGE='${{BaseImage.imageId}}'
BuildArg:
  title: 'Build image: php-test'
  stage: build
  type: build
  image_name: '${{CF_REPO_NAME}}-php-test'
  dockerfile: Dockerfile
  build_arguments:
    - GITHUB_TOKEN=${{GITHUB_TOKEN}}
    - NGINX_IMAGE=${{NGINX_IMAGE}}
  tag: '${{CF_SHORT_REVISION}}'
2020-03-07
Is it possible in codefresh to authenticate before a docker build using the builtin build step?
I have a Dockerfile that uses a private image as a base image i.e.
FROM private/image/repo:v1.1.1
I’d like to be able to authenticate (docker login) for the build. I know I can do this in a freestyle step but I’d like to know if i can with the builtin build step.
We should be authenticating with all private repositories for this purpose using your Docker Registry connection. If you’ve configured the private registry and your Dockerfile cannot pull this private base image please put in a ticket with our Support team. https://support.codefresh.io
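For completeness, the freestyle approach mentioned in the question could be sketched like this (registry host and credential variable names are assumptions you would replace with your own):

```yaml
# Hypothetical freestyle step performing a docker login before
# later build steps; DOCKER_USER / DOCKER_PASSWORD are pipeline
# variables you define yourself.
registry_login:
  title: Log in to private registry
  image: docker:stable
  commands:
    - echo "$DOCKER_PASSWORD" | docker login registry.example.com -u "$DOCKER_USER" --password-stdin
```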
ooo ok @dustinvb good to know
Helm quick start guide for Helm 3 - documentation
2020-03-09
2020-03-10
Helm 3 recommended labels - documentation
Helm 3 release dashboard - documentation
Deployment example for Helm 3 - documentation
2020-03-12
If you’re relying on the internal codefresh registry a lot (and I definitely am) you’ve got your work cut out for you:
https://codefresh.io/codefresh-news/deprecation-of-the-codefresh-private-registry/
Today we are announcing the removal of the built-in Docker registry that comes with all Codefresh accounts. The registry will become read-only on April 15, 2020 and we’re aiming to remove it completely May 1, 2020.
1.5 month notice to change your repos
disappointing they would give such a short period of time
Also, no real way to see your usage of it and be able to validate you’ve actually migrated your stuff off of it
2020-03-13
I am also using it for one of my private projects; kinda sucks.
2020-03-17
Hi there: I wanted to update this channel here that we got an extension from our partner to keep CFCR active for longer
We are finalizing the dates now and will announce. But we have at least until July for now
Thank you for your patience with us while we go through this.
Thanks @Vidhya Vijayakumar for the update
@Vidhya Vijayakumar will codefresh provide a way to test or validate that a pipeline is not affected?
We are working on a migration guide to help everyone. I have requested this already with our team. But, it may not be in the first document released
We will definitely push updates
@Vidhya Vijayakumar Did something change with the ${{build.imageId}} today? Because our builds have started failing. It no longer outputs the repository.
@Harrison Heck Let me escalate this to our support channel.
Can you quickly create a support ticket in Zendesk and include a couple examples?
Sure. I just posted some in Slack FYI as well
Thanks, I will take that for now and ask folks to review.
Thanks @Harrison Heck
The team is checking it, thank you
Thanks.
@Harrison Heck can you please send a failed build?
Thanks
Sorry, I was at lunch
Seems like it’s been fixed?
Previous: r.cfcr.io/linioit/shop-front-nginx@sha256:a424b6cf529d897f35ea172ca502ba9ebb4e4ab6bdf6aac9c0314508c14782ff Current: sha256:b9fa6fd9b0724f8b26da274e36c58886aabd37eb69fd7c3ba0b7678a169121cb
2020-03-19
Environment board now supports Helm 3 - documentation () (Feed generated with FetchRSS)
2020-03-23
Before I open a ticket, was hoping someone might have some insight on this error I’m getting. This is during the first step of my pipeline when pulling the Docker image to run the step with:
failed to register layer: Error processing tar file(exit status 1): write /var/lib/helm/cache/plugins/https-github.com-mstrzele-helm-edit/vendor/github.com/gobwas/glob/glob_test.go: no space left on device
I’m assuming there is some cleanup that isn’t happening and I’ve filled the mounted volume that exists within each codefresh pipeline, but not sure best route to handle that. Unfortunately, a few of my images are rather large, around 1GB, and so are some of my git repos. I just started running in to this though on pipelines that have been working, so I’m wondering if there is some docker build cache that’s not being cleaned up or something.
Check your docker settings. Preferences > Resources > Advanced > Disk image size. I’ll bet it is pretty full
oh that’s in codefresh… nevermind
Yeah, codefresh issue. I made a support ticket, we’ll see. They have a shared volume that I think contains the build cache. I’m betting that’s filling up. However, this problem seems to be scoped to a branch, not ALL of my builds in a project or whatever
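One hedged way to confirm that the shared volume is what is filling up is a throwaway debug step; /codefresh/volume is the usual shared volume mount point, but treat the path as an assumption for your runtime:

```yaml
# Hypothetical freestyle step to inspect pipeline volume usage.
check_disk:
  title: Inspect pipeline volume usage
  image: alpine
  commands:
    - df -h /codefresh/volume
    - du -sh /codefresh/volume/*
```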
Haha, it’s probably our cloudposse/packages build, which is up to 22GB
Yeah, but I’m not pulling that down, just geodesic, which has already dealt with that and is provided by you
Haha, no, I mean (jokingly) that you were unlucky enough to run a pipeline on a host our pipeline ran on, which is why you encountered the out-of-space error
2020-03-25
It looks like the 1.x version of venona was released, but I don’t see any documentation on upgrading. Nor do I see any way to install this with regular manifests or a helm chart (preferable)
Unless it hasn’t been released?
Yeah, just looks like poor versioning.
There should not be a 1.0.X tag yet. Instead, they should be 0.9 or something. A pre-release on GitHub is one thing, but 1.0.0 still implies a stable release.
1.0.0-rc1 for example would signify a pre-release version of 1.0.0
Semantic Versioning spec and website
2020-03-26
Removal of the Codefresh registry - documentation
2020-03-27
Adding @discourse_forum bot
@discourse_forum has joined the channel