#codefresh (2019-06)
Archive: https://archive.sweetops.com/codefresh/
2019-06-02
Codefresh availability Jun 2, 19:05 UTC Investigating - We are currently having an availability issue; from initial investigation this is due to a Google Cloud Platform cross-region issue. We are working to resolve the issue.
Codefresh’s Status Page - Codefresh availability.
Codefresh availability Jun 2, 19:23 UTC Monitoring - The issue has been resolved and the system is fully operational; we’re still monitoring the status.
Codefresh availability Jun 2, 20:32 UTC Update - We are still having issues due to a cross region incident over at GCP: https://status.cloud.google.com/
Codefresh availability Jun 3, 00:45 UTC Resolved - GCP has updated its Status Page and has confirmed that the incident is resolved. We validated that Codefresh is working as expected now. The incident is resolved.
More information about GCP’s incident here: https://status.cloud.google.com/incident/compute/19003
2019-06-10
Hey y’all. Running into a weird issue. I’m trying to run the following step in a freestyle project:
steps:
  install_deps:
    title: Install deps
    image: 'python:3'
    working_directory: ${{main_clone}}
    commands:
      - python3 -m venv venv
      - . venv/bin/activate
      - pip install -r requirements.txt
      - ./scripts/test.sh
And it falls on its face. It seems to be getting confused about the order in which the commands are supposed to be run? This runs perfectly fine on CircleCI.
How does CodeFresh run commands? Does it use the shell of the image I specify, or something else?
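One hedged guess worth checking before digging further: if the shell state from `. venv/bin/activate` does not carry over to the commands that follow it, chaining everything into a single command sidesteps the question entirely. This is a sketch reusing the same image and paths as the step above, not a confirmed fix:

```yaml
commands:
  # single shell invocation, so the venv activation applies to pip and the test script
  - python3 -m venv venv && . venv/bin/activate && pip install -r requirements.txt && ./scripts/test.sh
```

An alternative with the same effect is to skip `activate` and call `./venv/bin/pip` and `./venv/bin/python` directly.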
Based on what you’ve shared, there’s no main_clone step defined. I know that it used to be implicit, but I think that codefresh has been moving towards making that explicit.
Also, it helps if you share the literal error, otherwise we don’t know what’s broken.
Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app
Here’s an example of the main_clone step
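For reference, an explicit clone step typically looks something like the following. This is a sketch based on Codefresh’s git-clone step type; the `git: github` integration name is an assumption about how the account is configured:

```yaml
main_clone:
  title: Cloning main repository...
  type: git-clone
  repo: '${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}'
  revision: '${{CF_REVISION}}'
  git: github  # assumes a git integration named "github" in the account
```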
Sure thing, this is the error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: module 'nltk' has no attribute '__version__'
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: module 'nltk' has no attribute 'download'
Reading environment variable exporting file contents.
[SYSTEM]
Message Failed to run freestyle step: Install deps
Caused by Container for step title: Install deps, step type: freestyle, operation: Freestyle step failed with exit code: 1
Documentation Link <https://codefresh.io/docs/docs/codefresh-yaml/steps/freestyle/>
Action Items Fix command: ./scripts/test.sh
Exit code 1
Name NonZeroExitCodeError
Can you share your corresponding circle config?
Managed to get around it by moving the steps inside a Dockerfile instead of the codefresh.yml.
Still new to this platform so will post any other issues I encounter on here.
2019-06-12
Public #office-hours starting now! Join us on Zoom if you have any questions. https://zoom.us/j/684901853
is there a way to exec into the Codefresh container in your pipeline?
I’m trying to get the Python kube client to run on the container, and Codefresh might be setting the kube-context up a little differently. I wanted to avoid passing in the kube API token if possible.
I have used tmate for this
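A rough sketch of what that can look like as a freestyle step. The image choice and package install are assumptions, and the step name is made up; the idea is just that `tmate` prints an ssh address you can connect to while the build waits:

```yaml
debug_session:
  title: Open tmate session for debugging
  image: alpine:latest
  commands:
    - apk add --no-cache tmate openssh-client
    - tmate -F  # foreground mode; the build pauses here while you ssh in
```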
2019-06-13
hello, I have a question about running pipelines locally via the Codefresh CLI
first of all, thanks for the codefresh CLI. It’s nice and easy to use
I’d like to confirm whether I’m doing something wrong here that’s causing my local builds to count toward my builds quota, or, on the other hand, if that’s intended & expected.
For example, here: https://codefresh.io/docs/docs/configure-ci-cd-pipeline/running-pipelines-locally/#keeping-the-pipeline-volume-in-the-local-workstation
How to run Codefresh pipelines on your workstation
Note that the engine has transparent network access to all the other settings in your Codefresh account and therefore will work exactly the same way as if it was run on Codefresh infrastructure (e.g. use the connected Docker registries you have setup in the UI)
but if I’m running a pipeline with --local and --local-volume opts, then I’d expect I’m, like, fully local
2019-06-19
Docker Hub Incident (https://status.codefresh.io/incidents/mspyvvq8yr3d) Jun 19, 15:23 UTC Investigating - Docker Hub has reported an incident: Components: Docker Hub Registry, Docker Hub Web. More info here: https://status.docker.com/pages/incident/533c6539221ae15e3f000031/5d0a4fd6b684ad142660aa9d
Codefresh’s Status Page - Docker Hub Incident:.
Our system status page is a real-time view of the performance and uptime of Docker products and services.
Codefresh Case Study with Cloud Posse
#office-hours starting now! https://zoom.us/j/684901853
Have a demo of using Codefresh for ETL
These are the ETL jobs: https://github.com/singer-io
Simple, Composable Open Source ETL. Singer has 30 repositories available. Follow their code on GitHub.
Docker Hub Incident (https://status.codefresh.io/incidents/mspyvvq8yr3d) Jun 19, 18:30 UTC Identified - The issue has been identified and a fix is being implemented.
How to bring up environments when GitHub labels are added or removed:
Docker Hub Incident (https://status.codefresh.io/incidents/mspyvvq8yr3d) Jun 19, 21:07 UTC Resolved - Docker has reported the incident has been solved.
2019-06-20
Can you use integration secrets in a pipeline? Can’t find anything in the docs. Might be completely off-base here, but I’m looking for a way to easily query ECR within a step.
sorry, so many terms and they can be overloaded
by integration secrets, do you mean what codefresh calls in their UI “Shared Configurations” (under “Account Settings”)?
These can have secrets
then you can “import configuration” to a pipeline.
(though “import” is a semi-misnomer. they are not copied, but linked to)
@Erik Osterman (Cloud Posse) Thanks for replying. Sorry about the lack of context. No, I meant to refer to the secrets that are part of the integrations, in the Integrations section. When you add an ECR registry, you add your AWS credentials. Was wondering if there was a way to access/use said creds in a pipeline.
Ahhhhaaa
Now I see what you mean. Yes, that would be nice.
sec
you can try something like this:
Figured it wasn’t possible haha. Should be an enhancement though. If codefresh were to add secrets management that would be beyond great.
codefresh get context github --decrypt -o yaml | yq -y .spec.data.auth.password
Now this example is for the github integration
I’ve never tried it for the docker registry integrations
2019-06-26
I’ve really bungled my pipelines.
I’m deploying containers using Geodesic but I realised that `master` != `prod`, as silly as that sounds…
deploy:
  title: Deploying Docker Image with Geodesic
  stage: "deploy"
  image: r.cfcr.io/he/${{CF_BRANCH_TAG_NORMALIZED}}.he.co.uk:master
  volumes:
    - ./deployment:/deployment
  commands:
    - ansible --version
So when branch is develop, the develop Geodesic module is used. Staging:staging… But… hold on… master branch uses which geodesic module?
Obviously there are some expensive changes I can make, like renaming my `prod` geodesic module to `master` etc., but I don’t like that
how can I create a sort of dictionary or something
Now I had tried this…
nvm think I may have solved that pickle
Lmk if you still want/need any pointers
but I realised that `master` != `prod`
That statement may have just changed my mind on how I usually name my master AWS accounts. Maybe `root` is the better name for it.
Because codefresh isn’t a language, I couldn’t use dictionaries or elif statements, so I just have a conditional that sets the STAGE variable to prod if the branch is master; if the branch isn’t master, I set STAGE to CF_BRANCH_NAME, which works just great!
yep, that’s a good approach
basically have a step that executes some business logic (e.g. in bash) and calls cf_export
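The branch-to-stage mapping described above can be sketched in plain shell like this. `BRANCH` stands in for Codefresh’s `${{CF_BRANCH}}` variable, and in a real step you would follow this with `cf_export STAGE=$STAGE`:

```shell
# Hedged sketch of the branch -> STAGE mapping described above.
# BRANCH stands in for Codefresh's ${{CF_BRANCH}}; names are illustrative.
BRANCH="${BRANCH:-master}"
case "$BRANCH" in
  master) STAGE=prod ;;        # master branch deploys the prod Geodesic module
  *)      STAGE="$BRANCH" ;;   # any other branch maps straight through
esac
echo "$STAGE"
```

A `case` statement is about as close to a "dictionary" as POSIX shell gets, and it collapses the two `check_geodesic_*` steps into one.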
I had a real panic when I went to soft launch a production machine first time with all these tools
It’s a little bit of a mindwarp the first time!
For ref and @sweetops
revision: master
ask_for_permission:
  type: pending-approval
  stage: "deploy"
  title: Deploy release?
  when:
    branch:
      only: [ master ]
check_geodesic_master:
  title: Set Geodesic module - production
  stage: "deploy"
  description: Set Geodesic module
  image: alpine:latest
  commands:
    - cf_export STAGE=prod
  when:
    branch:
      only: [ master ]
check_geodesic_other:
  title: Set Geodesic module
  stage: "deploy"
  description: Set Geodesic module
  image: alpine:latest
  commands:
    - cf_export STAGE=${{CF_BRANCH_TAG_NORMALIZED}}
  when:
    branch:
      ignore: [ master ]
deploy:
  title: Deploying Docker Image with Geodesic
  stage: "deploy"
  image: r.cfcr.io/he/${{STAGE}}.he.co.uk:master
  volumes:
    - ./deployment:/deployment
  commands:
    - ansible --version
    - ansible-playbook ${{PLAYBOOK}} -i inventory/${{STAGE}}.yml -u he --private-key=~/ssh/id_rsa -e "tag=${{CF_BRANCH_TAG_NORMALIZED}}"
  when:
Naming conventions:
- Branches: develop, staging, master
- Geodesic modules: develop, staging, prod
- Ansible inventory names: develop, staging, prod
- Container tags: develop, staging, master
So as you can see a dictionary would have been beaut
No caps. Just on mobile
2019-06-27
do you guys have multiple pipelines for deploying to dev, qa, prod for a specific service. or one single long running pipeline?
So what works for us is to reuse the same deploy.yml (manifest) but define the pipeline multiple times for each environment or cluster
e.g. deploy-testing would have environment variables set up for deploying perhaps to the staging cluster in the testing namespace.
then all of those pipelines can be triggered at once by calling the codefresh cli from inside of a deploy pipeline
and those can be concurrent or serial
sweet
thats exactly what i was going to do
less so the triggering-them-all-at-once part. it was going to be more serial for us
merge into develop -> deploy dev
merge into master -> deploy qa
create git release off master -> deploy prod
yea, makes sense
in fact, you can create one pipeline now that calls each one of those pipelines serially with an approval step in between
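As a sketch of what that parent pipeline could look like — the project and pipeline names here are hypothetical, and it assumes the codefresh/cli image and the `codefresh run` command are available to the step:

```yaml
steps:
  deploy_qa:
    title: Deploy to QA
    image: codefresh/cli
    commands:
      - codefresh run my-project/deploy-qa --branch ${{CF_BRANCH}}
  approve_prod:
    type: pending-approval
    title: Promote to prod?
  deploy_prod:
    title: Deploy to prod
    image: codefresh/cli
    commands:
      - codefresh run my-project/deploy-prod --branch ${{CF_BRANCH}}
```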
on an unrelated note, new codefresh pricing is online
Start free for both public and private repositories, no credit card required. Unlimited builds, unlimited private repos, built-in Docker registry, built-in Helm repository
i saw
it was updated a couple weeks ago
they reached out a week or two ago
its nice
much more palatable pricing
oh yeah