#codefresh (2021-02)
Archive: https://archive.sweetops.com/codefresh/
2021-02-04
conditional execution of Codefresh steps based on files modified pattern?
For monorepos there seems to be nothing out of the box that would allow us to execute specific steps only if specific files get modified. For example, run stepA and stepB if files in serviceA/** were modified, but do not run these steps if files in serviceB/** were modified.
Got some good suggestions from Codefresh on scripting this, but wondering if anyone else hit this/has a step ready?
Yea, seems like there’s still no variable that contains the modified files. You’ll need a step that calls jq on /codefresh/volume/event.json to load it into a codefresh variable. Then you can use something like:
when:
  condition:
    any:
      serviceA: "match('${{MODIFIED_FILES}}', 'serviceA/.*', false) == true"
heh this is cool
We’ve released our GitHub action for managing codefresh pipelines: https://github.com/cloudposse/actions/tree/master/codefresh/pipeline-creator
this is awesome… wish it was part of the codefresh github app, out-of-the-box…
ya, though this also supports gomplate templating so it’s VERY powerful
Hey! I’m also using Codefresh, and in every new repository (microservice, serverless, etc.) we’re adding a new trigger to the relevant pipeline, which sits in an externally managed GitHub repository. Do you think it’s better to have an automatically synced pipeline in every repository? Can you share why?
What I don’t like about the centralized pipeline with triggers to each repo is that the pipeline history is muddled with success/failures across all services. Using views is cumbersome
This is why I like the approach we have taken which is to centralize the definitions but use the API and specs to create a dedicated pipeline per service. You can version the pipelines and services aren’t required to be on the bleeding edge.
The build history is separate too.
Got your point, that’s great! thanks
this allows for centralized management of pipelines so that individual repos only need to specify a single action referencing a catalog of pipelines (e.g. a microservice catalog, an spa catalog, etc):
name: codefresh
on:
  push:
    branches:
      - main
    paths:
      # When this file is merged to the default branch, then perform codefresh CRUD
      - '.github/workflows/codefresh.yml'
  # Synchronize pipelines with Codefresh nightly
  schedule:
    - cron: '0 0 * * *'
jobs:
  pipeline-creator:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/actions/codefresh/[email protected]
        with:
          # GitHub owner and repository name of the application repository
          repo: "${{ github.repository }}"
          # Codefresh project name to host the pipelines
          cf_project: "${{ github.event.repository.name }}"
          # URL of the repository that contains Codefresh pipelines and pipeline specs
          cf_repo_url: "https://github.com/cloudposse/codefresh.git"
          # Version of the repository that contains Codefresh pipelines and pipeline specs
          cf_repo_version: "0.1.0"
          # Pipeline spec type (microservice, spa, serverless)
          cf_spec_type: "microservice"
          # A comma separated list of pipeline specs to create the pipelines from
          cf_specs: "preview,build,deploy,release,destroy"
        env:
          GITHUB_USER: "xxxxxxxxx-bot"
          # Global organization secrets
          GITHUB_TOKEN: "${{ secrets.CF_GITHUB_TOKEN }}"
          CF_API_KEY: "${{ secrets.CF_API_KEY }}"
@dustinvb
Also, here’s a sample catalog of pipelines for a kubernetes microservice (that we use) https://github.com/cloudposse/codefresh/tree/main/specs/microservice
This is excellent! We’ve got other ways of doing this through GLOB expressions on file changes, but this is a pretty cool approach as well, a very dynamic way to generate pipelines with little recoding. I do have to ask though: are you seeing more desire for this over our one-to-many pipeline to Git projects capability? I’ve been working with most prospects and we often make 3 common pipelines: 1 to programmatically update the pipelines, and the other 2 (CI/CD) with the Git projects associated. I am going to invite our Codefresh TAM working in this area to the channel here to review what you’ve put together.
I really don’t like the one-to-many pipelines because the pipeline status and history is polluted with all the services
filtering and creating views is slow and tedious
editing the pipeline in one place breaks it for all services “in real time”
while this approach allows each service to reuse the pipelines, without being strongly coupled.
@Laurent Rochette See above.
2021-02-06
Hey guys, maybe you already discussed this here, but what do you guys think about this blog post: https://codefresh.io/kubernetes-tutorial/kubernetes-antipatterns-1/ specifically the 4th antipattern, Mixing application deployment with infrastructure deployment?
I am facing a similar concern with ECS, where I am adding 2 workflows in GitHub Actions: one to build and deploy the app, and another one for app-specific infra, like the ECR repository or the ECS task definition and service. My issue is improving how we deal with race conditions, like the ECR repository not being created yet while the app is being deployed. How do you guys deal with it?
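One common way to handle that kind of ordering problem is to make the app workflow wait for the infra it depends on before deploying. A minimal sketch (an illustration only; the repository name, retry counts, and job layout are placeholders, and it assumes AWS credentials are already configured for the job):
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Wait for ECR repository
        run: |
          # Poll until the infra workflow has created the repository (name is a placeholder)
          for i in $(seq 1 30); do
            if aws ecr describe-repositories --repository-names my-service >/dev/null 2>&1; then
              exit 0
            fi
            echo "ECR repository not found yet, retrying..."
            sleep 10
          done
          echo "Timed out waiting for ECR repository"
          exit 1
      # ...build, push, and deploy steps would follow here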
@Kostis (Codefresh) Maybe you can join office hours this week to hear some more on this subject?
2021-02-09
2021-02-10
@Erik Osterman (Cloud Posse) Appreciate the attention you brought to Codefresh! Let me know if we can ever help out with things.
I am sure you’re already aware but we’re committed to maintaining the following Terraform provider for Codefresh.
https://github.com/codefresh-io/terraform-provider-codefresh
2021-02-11
trying to figure out array merging with yaml anchors in codefresh.yml
this should work, I think:
indicators:
  - environment: &aws_regions
      AWS=us-east-1,us-east-2,us-west-2
steps:
  test:
    environment:
      - *aws_regions
      - CLUSTER=us-1-west
but codefresh validate fails with: "0" must be a string. Current value: [object Object]
Have you tried something like the solution proposed here to merge the arrays?
https://stackoverflow.com/questions/24090177/how-to-merge-yaml-arrays
I haven’t run into this before. I might need to set this up and attempt myself just seeing what all you’ve explored so far.
Maybe
indicators:
  environment: &aws_regions
    - AWS=us-east-1,us-east-2,us-west-2
steps:
  test:
    environment:
      - *aws_regions
      - CLUSTER=us-1-west
If you come up empty after a few tries I’ll take the experimentation on for myself. Just ping me back here.
yeah, tried that and codefresh validate fails with: "0" must be a string. Current value: AWS=us-east-1,us-east-2,us-west-2
Okay, I am going to open a support ticket on your behalf and record this conversation there. There must be an issue with how validation handles array anchoring, or something special that is not documented is required to make it work in our YAML.
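A plain-YAML workaround that might be worth trying (a sketch only, not verified against codefresh validate): both errors above suggest the alias is expanding to a nested list rather than a string, so anchoring the scalar value instead of the sequence keeps environment a flat list of strings:
indicators:
  environment:
    - &aws_regions AWS=us-east-1,us-east-2,us-west-2
steps:
  test:
    environment:
      - *aws_regions
      - CLUSTER=us-1-west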