#codefresh (2021-02)


Archive: https://archive.sweetops.com/codefresh/

2021-02-21

2021-02-20

2021-02-12

2021-02-11

michal.matyjek

trying to figure out array merging with yaml anchors in codefresh.yml

this should work, I think:

indicators:
  - environment:
      &aws_regions
      AWS=us-east-1,us-east-2,us-west-2

steps:
  test:
    environment:
      - *aws_regions
      - CLUSTER=us-1-west

but codefresh validate fails with: "0" must be a string. Current value: [object Object]
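For background on the error: at the YAML level, an alias to a sequence is nested into the outer list as a single item rather than spliced element-by-element, and merge keys (`<<:`) only apply to mappings, never to arrays. A sketch of the behavior plus a possible workaround that anchors the scalar value instead (whether Codefresh's validator accepts this is an assumption worth testing):

```yaml
# Aliasing a whole sequence nests it as one item:
#   environment: [[...], CLUSTER=us-1-west]
# which is why the validator reports an object where it expects a string.

# Workaround sketch: anchor the scalar, not the sequence.
indicators:
  - environment:
      - &aws_regions AWS=us-east-1,us-east-2,us-west-2

steps:
  test:
    environment:
      - *aws_regions        # resolves to the plain string
      - CLUSTER=us-1-west
```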

dustinvb

Have you tried something like the solution proposed here to merge the arrays?

https://stackoverflow.com/questions/24090177/how-to-merge-yaml-arrays


dustinvb

I haven’t run into this before. I might need to set this up and try it myself. What all have you explored so far?

dustinvb

Maybe

indicators:
  environment: &aws_regions
    - AWS=us-east-1,us-east-2,us-west-2
steps:
  test:
    environment:
      - *aws_regions
      - CLUSTER=us-1-west

dustinvb

If you come up empty after a few tries I’ll take the experimentation on for myself. Just ping me back here.

michal.matyjek

yeah, tried that and codefresh validate fails with: "0" must be a string. Current value: AWS=us-east-1,us-east-2,us-west-2

dustinvb

Okay, I am going to open a support ticket on your behalf and record this conversation there. There must be an issue with how validation handles array anchoring, or something special that is not documented is required to make it work in our YAML.

dustinvb

Ticket: 7320


2021-02-10

dustinvb

@Erik Osterman (Cloud Posse) Appreciate the attention you brought to Codefresh! Let me know if we can ever help out with things.

I am sure you’re already aware, but we’re committed to maintaining the following Terraform provider for Codefresh.

https://github.com/codefresh-io/terraform-provider-codefresh


2021-02-09

2021-02-06

Erik Osterman (Cloud Posse)

Hey guys, maybe you already discussed this here, but what do you guys think about this blog post: https://codefresh.io/kubernetes-tutorial/kubernetes-antipatterns-1/ specifically the 4th antipattern, “Mixing application deployment with infrastructure deployment”?

I am facing a similar concern with ECS, where I am adding 2 workflows in GitHub Actions: one to build and deploy the app, and another one for app-specific infra, like the ECR repository or the ECS task definition and service. My issue is how to deal with race conditions, like the app being deployed before the ECR repository is created. How do you guys deal with it?
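Not from the thread, but one common way to serialize two GitHub Actions workflows is a `workflow_run` trigger, so the app deploy only starts after the infra workflow has finished successfully. A sketch (the infra workflow name below is a placeholder):

```yaml
# Hypothetical app-deploy workflow that waits for the infra workflow.
name: deploy-app
on:
  workflow_run:
    # "Provision infra" is an assumed name for the infra workflow
    workflows: ["Provision infra"]
    types: [completed]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Only deploy when the infra workflow actually succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - run: echo "Infra is in place; safe to build, push, and deploy"
```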

dustinvb

@Kostis (Codefresh) Maybe you can join office hours this week to hear some more on this subject?


2021-02-04

michal.matyjek

conditional execution of Codefresh steps based on files modified pattern?

For monorepos, there seems to be nothing out of the box that would allow us to execute specific steps only if specific files get modified. For example, run stepA and stepB if files in serviceA/** were modified, but do not run these steps if files in serviceB/** were modified.

Got some good suggestions from Codefresh on scripting this, but wondering if anyone else hit this/has a step ready?

Erik Osterman (Cloud Posse)

Yea, seems like there’s still no variable that contains the modified files. You’ll need a step that calls jq on /codefresh/volume/event.json to load it into a codefresh variable. Then you can use something like:

when:
  condition:
    any:
      serviceA: "match('${{MODIFIED_FILES}}', 'serviceA/.*', false) == true"
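A sketch of the step described above (the step name and jq path are assumptions; the exact layout of event.json depends on the git trigger provider, and this assumes a GitHub push payload):

```yaml
export_modified_files:
  title: Export modified files
  image: alpine
  commands:
    - apk add --no-cache jq
    # GitHub push payloads list changed files under commits[].modified
    - export MODIFIED_FILES=$(jq -r '[.commits[].modified[]] | unique | join(",")' /codefresh/volume/event.json)
    # Make the variable available to later steps and their conditions
    - cf_export MODIFIED_FILES
```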

michal.matyjek

heh this is cool

Erik Osterman (Cloud Posse)

We’ve released our GitHub action for managing codefresh pipelines: https://github.com/cloudposse/actions/tree/master/codefresh/pipeline-creator


michal.matyjek

this is awesome… wish it was part of the codefresh github app, out-of-the-box…


Erik Osterman (Cloud Posse)

ya, though this also supports gomplate templating so it’s VERY powerful

Noa Ginzbursky

Hey! I’m also using Codefresh, and in every new repository (microservice, serverless, etc.) we’re adding a new trigger to the relevant pipeline, which sits in an externally managed GitHub repository. Do you think it’s better to have an automatically synced pipeline in every repository? Can you share why?

Erik Osterman (Cloud Posse)

What I don’t like about the centralized pipeline with triggers to each repo is that the pipeline history is muddled with successes and failures across all services. Using views is cumbersome.

Erik Osterman (Cloud Posse)

This is why I like the approach we have taken which is to centralize the definitions but use the API and specs to create a dedicated pipeline per service. You can version the pipelines and services aren’t required to be on the bleeding edge.

Erik Osterman (Cloud Posse)

The build history is separate too.

Noa Ginzbursky

Got your point, that’s great! Thanks

Erik Osterman (Cloud Posse)

this allows for centralized management of pipelines, so that individual repos only need to specify a single action referencing a catalog of pipelines (e.g. a microservice catalog, an SPA catalog, etc.)

Erik Osterman (Cloud Posse)
name: codefresh
on:
  push:
    branches:
      - main
    paths:
      # When this file is merged to the default branch, then perform codefresh CRUD
      - '.github/workflows/codefresh.yml'
  # Synchronize pipelines with Codefresh nightly
  schedule:
    - cron:  '0 0 * * *'

jobs:
  pipeline-creator:
    runs-on: ubuntu-latest
    steps:
      - uses: cloudposse/actions/codefresh/[email protected]
        with:
          # GitHub owner and repository name of the application repository
          repo: "${{ github.repository }}"
          # Codefresh project name to host the pipelines
          cf_project: "${{ github.event.repository.name }}"
          # URL of the repository that contains Codefresh pipelines and pipeline specs
          cf_repo_url: "https://github.com/cloudposse/codefresh.git"
          # Version of the repository that contains Codefresh pipelines and pipeline specs
          cf_repo_version: "0.1.0"
          # Pipeline spec type (microservice, spa, serverless)
          cf_spec_type: "microservice"
          # A comma separated list of pipeline specs to create the pipelines from
          cf_specs: "preview,build,deploy,release,destroy"
        env:
          GITHUB_USER: "xxxxxxxxx-bot"
          # Global organization secrets
          GITHUB_TOKEN: "${{ secrets.CF_GITHUB_TOKEN }}"
          CF_API_KEY: "${{ secrets.CF_API_KEY }}"

Erik Osterman (Cloud Posse)

@dustinvb

Erik Osterman (Cloud Posse)

Also, here’s a sample catalog of pipelines for a kubernetes microservice (that we use) https://github.com/cloudposse/codefresh/tree/main/specs/microservice

cloudposse/codefresh

Catalog of reusable Codefresh pipelines, pipeline specs, and pipeline shared steps. - cloudposse/codefresh

dustinvb

This is excellent! We’ve got other ways of doing this through glob expressions on file changes, but this is a pretty cool approach as well, a very dynamic way to generate pipelines with little recoding. I do have to ask, though: are you seeing more desire for this over our one-to-many pipeline-to-Git-projects capabilities? I’ve been working with most prospects and we often make 3 common pipelines: 1 to programmatically update the pipelines, and the other 2 (CI/CD) associated with the Git projects. I am going to invite our Codefresh TAM working in this area to the channel here to review what you’ve put together.

Erik Osterman (Cloud Posse)

I really don’t like the one-to-many pipelines because the pipeline status and history are polluted across all services

Erik Osterman (Cloud Posse)

filtering and creating views is slow and tedious

Erik Osterman (Cloud Posse)

editing the pipeline in one place breaks it for all services “in real time”

Erik Osterman (Cloud Posse)

while this approach allows each service to reuse the pipelines, without being strongly coupled.

dustinvb

@ See above.
