#codefresh (2020-03)

codefresh

Archive: https://archive.sweetops.com/codefresh/

2020-03-27

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Adding @ bot

discourse_forum avatar
discourse_forum
10:00:27 PM

@ has joined the channel

2020-03-26

Codefresh Release Notes avatar
Codefresh Release Notes
12:11:20 PM

Removal of the Codefresh registry - documentation

2020-03-25

Harrison Heck avatar
Harrison Heck

It looks like the 1.x version of venona was released, but I don’t see any documentation on upgrading. Nor do I see any way to install it with regular manifests or a Helm chart (preferable).

Harrison Heck avatar
Harrison Heck

Unless it hasn’t been released?

Harrison Heck avatar
Harrison Heck

Yeah, just looks like poor versioning.

Harrison Heck avatar
Harrison Heck

There should not be a 1.0.X tag yet. Instead, they should be 0.9 or something. A pre-release on GitHub is one thing, but 1.0.0 still implies a stable release.

Harrison Heck avatar
Harrison Heck

1.0.0-rc1, for example, would signify a pre-release version of 1.0.0.
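A minimal sketch of the precedence rule being described (this is not the full SemVer spec, just the pre-release ordering point): a version with a pre-release suffix sorts before the plain release.

```python
# Simplified SemVer ordering: compare the numeric core, then put
# pre-releases (e.g. "-rc1") before the bare release.
def semver_key(version):
    core, _, pre = version.partition("-")
    nums = tuple(int(part) for part in core.split("."))
    # A release (no pre-release tag) sorts after any pre-release.
    return (nums, pre == "", pre)

versions = ["1.0.0", "1.0.0-rc1", "0.9.0"]
print(sorted(versions, key=semver_key))
# → ['0.9.0', '1.0.0-rc1', '1.0.0']
```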

Harrison Heck avatar
Harrison Heck
Semantic Versioning 2.0.0

Semantic Versioning spec and website

2020-03-23

Alex Siegman avatar
Alex Siegman

Before I open a ticket, I was hoping someone might have some insight on this error I’m getting. This happens during the first step of my pipeline, when pulling the Docker image to run the step with:

failed to register layer: Error processing tar file(exit status 1): write /var/lib/helm/cache/plugins/https-github.com-mstrzele-helm-edit/vendor/github.com/gobwas/glob/glob_test.go: no space left on device

I’m assuming there is some cleanup that isn’t happening and I’ve filled the mounted volume that exists within each Codefresh pipeline, but I’m not sure of the best route to handle that. Unfortunately, a few of my images are rather large, around 1GB, and so are some of my git repos. I just started running into this on pipelines that have been working, though, so I’m wondering if there is some Docker build cache that’s not being cleaned up or something.
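For debugging this kind of failure, a freestyle step along these lines can show what is consuming the shared pipeline volume (Codefresh mounts it at /codefresh/volume). This is a sketch; the step name is illustrative:

```yaml
inspect_volume:
  title: Check shared volume usage
  image: alpine
  commands:
    # Overall free space on the shared volume
    - df -h /codefresh/volume
    # Largest directories on the volume, in KB, biggest last
    - du -sk /codefresh/volume/* | sort -n | tail -n 20
```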

roth.andy avatar
roth.andy

Check your docker settings. Preferences > Resources > Advanced > Disk image size. I’ll bet it is pretty full

roth.andy avatar
roth.andy

oh that’s in codefresh… nevermind

Alex Siegman avatar
Alex Siegman

Yeah, Codefresh issue. I made a support ticket; we’ll see. They have a shared volume that I think contains the build cache. I’m betting that’s filling up. However, this problem seems to be scoped to a branch, not ALL of my builds in a project or whatever.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha, it’s probably our cloudposse/packages build which is up to 22GB

Alex Siegman avatar
Alex Siegman

Yeah, but I’m not pulling that down, just geodesic, which has already dealt with that and is provided by you~

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Haha, no, I mean (jokingly) that you were unlucky enough to run a pipeline on a host our pipeline ran on, which is why you encountered the out-of-space error

Alex Siegman avatar
Alex Siegman

Ah, fair enough. Quit being bad neighbors!


2020-03-19

Codefresh Release Notes avatar
Codefresh Release Notes
10:11:24 AM

Environment board now supports Helm 3 - documentation

2020-03-17

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

Hi there: I wanted to update this channel that we got an extension from our partner to keep CFCR active for longer

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

We are finalizing the dates now and will announce. But we have at least until July for now

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

Thank you for your patience with us while we go through this.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Vidhya Vijayakumar for the update

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Vidhya Vijayakumar will codefresh provide a way to test or validate that a pipeline is not affected?

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

We are working on a migration guide to help everyone. I have already requested this from our team, but it may not be in the first document released.

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

We will definitely push updates

Harrison Heck avatar
Harrison Heck

@Vidhya Vijayakumar Did something change with the ${{build.imageId}} today? Because our builds have started failing. It no longer outputs the repository.

dustinvb avatar
dustinvb

@Harrison Heck Let me escalate this to our support channel.

Can you quickly create a support ticket in Zendesk and include a couple examples?

Harrison Heck avatar
Harrison Heck

Sure. I just posted some in Slack FYI as well

dustinvb avatar
dustinvb

Thanks, I will take that for now and ask folks to review.

Vidhya Vijayakumar avatar
Vidhya Vijayakumar

Thanks @Harrison Heck

Oleg Sucharevich avatar
Oleg Sucharevich

The team is checking it, thank you

Harrison Heck avatar
Harrison Heck

Thanks.

Oleg Sucharevich avatar
Oleg Sucharevich

@Harrison Heck can you please send a failed build?

Oleg Sucharevich avatar
Oleg Sucharevich

Thanks

Harrison Heck avatar
Harrison Heck

Sorry, I was at lunch

Harrison Heck avatar
Harrison Heck

Seems like it’s been fixed?

Harrison Heck avatar
Harrison Heck

Previous: r.cfcr.io/linioit/[email protected]:a424b6cf529d897f35ea172ca502ba9ebb4e4ab6bdf6aac9c0314508c14782ff
Current: sha256:b9fa6fd9b0724f8b26da274e36c58886aabd37eb69fd7c3ba0b7678a169121cb

2020-03-13

Pierre Humberdroz avatar
Pierre Humberdroz

I am also using it for one of my private projects, which kinda sucks.

joshmyers avatar
joshmyers

Better the devil you know….


2020-03-12

Alex Siegman avatar
Alex Siegman

If you’re relying on the internal codefresh registry a lot (and I definitely am) you’ve got your work cut out for you:

https://codefresh.io/codefresh-news/deprecation-of-the-codefresh-private-registry/

Deprecating the Codefresh Registry - Codefresh

Today we are announcing the removal of the built-in Docker registry that comes built-in with all Codefresh accounts. The registry will become read-only on April 15, 2020 and we’re aiming to remove it completely May 1, 2020. Table of Contents FAQ Timeline of deprecation Customer action required Summary FAQ What is announced today? The built-in … Continued

Alex Siegman avatar
Alex Siegman

1.5 months’ notice to change your repos

btai avatar

disappointing they would give such a short period of time

Alex Siegman avatar
Alex Siegman

Also, no real way to see your usage of it and be able to validate you’ve actually migrated your stuff off of it

2020-03-10

Codefresh Release Notes avatar
Codefresh Release Notes
07:31:26 AM

Helm 3 recommended labels - documentation

Codefresh Release Notes avatar
Codefresh Release Notes
07:31:26 AM

Helm 3 release dashboard - documentation

Codefresh Release Notes avatar
Codefresh Release Notes
07:31:26 AM

Deployment example for Helm 3 - documentation

2020-03-07

btai avatar

Is it possible in codefresh to authenticate before a docker build using the builtin build step?

I have a Dockerfile that uses a private image as a base image i.e.

FROM private/image/repo:v1.1.1

I’d like to be able to authenticate (docker login) for the build. I know I can do this in a freestyle step but I’d like to know if i can with the builtin build step.

dustinvb avatar
dustinvb

We should be authenticating with all private repositories for this purpose using your Docker Registry connection. If you’ve configured the private registry and your Dockerfile cannot pull this private base image, please put in a ticket with our Support team. https://support.codefresh.io
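For reference, a minimal built-in build step for this scenario might look like the sketch below. Step and image names are illustrative; the point is that the pull of the private base image is authenticated through the configured registry integration, with no explicit docker login step:

```yaml
build_app:
  title: Build application image
  type: build
  image_name: my-org/my-app
  # Dockerfile begins with: FROM private/image/repo:v1.1.1
  dockerfile: Dockerfile
  tag: '${{CF_SHORT_REVISION}}'
```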

btai avatar

ooo ok @dustinvb good to know

btai avatar

@dustinvb thanks that fixed it

Codefresh Release Notes avatar
Codefresh Release Notes
06:51:23 AM

Helm quick start guide for Helm 3 - documentation

2020-03-05

Harrison Heck avatar
Harrison Heck

I’m trying to build 2 images. The second one depends on the first one. I’m trying to pass NGINX_IMAGE=${{build_nginx}} as a build argument, but the value comes in blank. Is there something that I’m missing here, or do I have to construct the image name myself (which I would prefer to avoid).

dustinvb avatar
dustinvb

That’s a special SHA that is understood by the pipeline and not something that would work outside of our image: construct. Let me look into a couple things and come back to you on this, Harrison.

Harrison Heck avatar
Harrison Heck

Thanks.

Harrison Heck avatar
Harrison Heck

I would think it should work for anything though, because really it should just represent the image name.

dustinvb avatar
dustinvb

Here is the variable you are likely looking for but even that will possibly need some modifications ${{build_nginx.imageId}}

Harrison Heck avatar
Harrison Heck

Gotcha, I will give it a shot.

dustinvb avatar
dustinvb

That at least prints image information. I am looking over some other details.

Harrison Heck avatar
Harrison Heck

That link doesn’t work but I am giving this a shot myself

dustinvb avatar
dustinvb

Sorry Slack just made it a link…

dustinvb avatar
dustinvb

It’s the output of the imageId from one of my builds.

Harrison Heck avatar
Harrison Heck

Oh I gotcha.

Harrison Heck avatar
Harrison Heck

I would think that should work. Thanks for the quick answer. The pipeline is running right now.

dustinvb avatar
dustinvb

Here is the tag example.

Executing command: echo ${{BuildTestDockerImage.tag}}             
master-a6a1d3f                           
Harrison Heck avatar
Harrison Heck

Oddly this did not work. I must have done something wrong.

Harrison Heck avatar
Harrison Heck

Oh, the change didn’t push for some reason, that’s why

dustinvb avatar
dustinvb

Not necessarily. Like I said, that SHA is something internal to us.

dustinvb avatar
dustinvb

I am checking the name as well so you can concat the two

dustinvb avatar
dustinvb
Executing command: echo ${{BuildTestDockerImage.imageName}}                                  
dustinvanbuskirk/rails-app-with-knapsack_pro-test  
Harrison Heck avatar
Harrison Heck

It did push, I just forgot I had this in 2 spots, so I’m trying again

dustinvb avatar
dustinvb

You might need to do NGINX_IMAGE=${{build_nginx.imageName}}:${{build_nginx.tag}}
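Putting that suggestion together, a hypothetical two-step pipeline fragment could look like this (step and image names are illustrative, not from the original pipeline):

```yaml
build_nginx:
  type: build
  image_name: my-org/shop-front-nginx
  dockerfile: Dockerfile.nginx
build_app:
  type: build
  image_name: my-org/shop-front
  dockerfile: Dockerfile
  build_arguments:
    # Reconstruct a full image reference from the first build's metadata.
    - NGINX_IMAGE=${{build_nginx.imageName}}:${{build_nginx.tag}}
```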

dustinvb avatar
dustinvb

Okay, hopefully this helps. Ping me using @dustinvb if this doesn’t get you where you need to be.

Harrison Heck avatar
Harrison Heck

What’s weird is I am getting invalid reference format: repository name must be lowercase with both ways of doing it.

dustinvb avatar
dustinvb

That is a little bizarre; maybe with the second name we need repo information.

Harrison Heck avatar
Harrison Heck

I’m running it again with a step to output what the value is for debug purposes.
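As a quick local sketch of what that error means: Docker rejects image references whose repository path contains uppercase letters ("invalid reference format: repository name must be lowercase"). The helper below is illustrative, not a Docker API:

```python
import re

def repo_is_lowercase(ref):
    # Drop any tag or digest; what remains is the repository path.
    repo = re.split(r"[@:]", ref, maxsplit=1)[0]
    return repo == repo.lower()

print(repo_is_lowercase("r.cfcr.io/linioit/shop-front"))  # True
print(repo_is_lowercase("r.cfcr.io/LinioIT/shop-front"))  # False
```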

Harrison Heck avatar
Harrison Heck

r.cfcr.io/linioit/shop-front-nginx@sha256:d19c2542673f9c191cdc66a43960052e4a65ad3b00718c69776778840d428d46 was the value

Harrison Heck avatar
Harrison Heck

So this is weird

Harrison Heck avatar
Harrison Heck

It works when I do it locally. Could it be the version of docker being used by venona?

dustinvb avatar
dustinvb

That should be 18.09 last I checked. I assume the node Docker is the same or newer.

Harrison Heck avatar
Harrison Heck

I would imagine so. We’re on GKE 15

Harrison Heck avatar
Harrison Heck

Could CF be passing the value in caps for some reason?

dustinvb avatar
dustinvb

Your Dockerfile: is the first line the FROM, or the Build Arg?

dustinvb avatar
dustinvb

No, it shouldn’t be doing that. I printed those vars using echo in my step. They were all lowercase. The build arg should not change the case.

Harrison Heck avatar
Harrison Heck
ARG NGINX_IMAGE=gcr.io/linio-support/shop-front-nginx:latest
FROM $NGINX_IMAGE AS ASSETS
Harrison Heck avatar
Harrison Heck
build_arguments:
      - NGINX_IMAGE=r.cfcr.io/linioit/shop-front-nginx@sha256:d19c2542673f9c191cdc66a43960052e4a65ad3b00718c69776778840d428d46

This worked

Harrison Heck avatar
Harrison Heck

So it is 100% CF messing up the value when passing it in

dustinvb avatar
dustinvb

Can you open a Zendesk that includes the Build ID of the build with the error about capitalization so we can both review the YAML and see the error?

Harrison Heck avatar
Harrison Heck

Yeah.

dustinvb avatar
dustinvb

Can you also share the build id with me over DM here so I can just quickly peek into the YAML?

Harrison Heck avatar
Harrison Heck

Sure, one second

dustinvb avatar
dustinvb

Thanks, I recreated and will update ticket here soon.

In the meantime, can you try this as a workaround? Let me know if you have better results by using a direct variable instead of the metadata in the build_arguments.

  ExportVar:
    image: alpine
    commands:
      - cf_export NGINX_IMAGE='${{BaseImage.imageId}}'
  BuildArg:
    title: 'Build image: php-test'
    stage: build
    type: build
    image_name: '${{CF_REPO_NAME}}-php-test'
    dockerfile: Dockerfile
    build_arguments:
      - GITHUB_TOKEN=${{GITHUB_TOKEN}}
      - NGINX_IMAGE=${{NGINX_IMAGE}}
    tag: '${{CF_SHORT_REVISION}}'