#release-engineering (2019-01)
All things CI/CD. Specific emphasis on Codefresh and CodeBuild with CodePipeline.
CI/CD Discussions
Archive: https://archive.sweetops.com/release-engineering/
2019-01-16
A semantic tag script for Git. Contribute to pnikosis/semtag development by creating an account on GitHub.
Awesome! I was looking for this myself!
(via @mumoshu)
(saw he was using it in #variant)
We do manual version bumping; I have never found a way of doing it directly from CI that “knows” when to bump each part, unless the team is strict about, idk, commit messages or similar
do you guys integrate this into your pipeline?
sooooo one way I’ve seen this done that I like:
add a file to the repo called VERSION or something
as a human, you stick the version in there.
you can use a tool like semtag to do that.
Then the CI process will look at that file on a merge to master and call out to GitHub to do a release tag with VERSION.
I think this is a pretty elegant way of doing things.
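Roughly, that CI step could look something like this (a sketch only, assuming a Python helper running on merge to master with git push access; the v-prefix on the tag is just a convention here, not something from the thread):
```python
# Sketch of the "tag on merge to master" step: read VERSION and push a matching git tag.
# Assumes the CI job has git and push credentials; the v-prefix is an arbitrary choice.
import pathlib
import subprocess

version = pathlib.Path("VERSION").read_text().strip()
tag = f"v{version}"

# Only create the tag if it doesn't exist yet.
existing = subprocess.run(["git", "tag", "--list", tag],
                          capture_output=True, text=True, check=True).stdout.strip()
if not existing:
    subprocess.run(["git", "tag", "-a", tag, "-m", f"Release {tag}"], check=True)
    subprocess.run(["git", "push", "origin", tag], check=True)
```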
Yeah, that is basically what we do, minus the tool for bumping, as tbh it's not that much extra work to bump manually
then we trigger a GH release if VERSION > (existing tags)
for that we do use a tool to do semver comparison, actually python
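Presumably something along these lines (a sketch, not the actual tool; it assumes v-prefixed tags and uses the third-party packaging library for the ordering):
```python
# Sketch: is VERSION newer than every tag already pushed?
# Assumes tags look like vX.Y.Z; uses the third-party "packaging" library for comparison.
import pathlib
import subprocess
from packaging.version import Version

version = Version(pathlib.Path("VERSION").read_text().strip())
tags = subprocess.run(["git", "tag", "--list", "v*"],
                      capture_output=True, text=True, check=True).stdout.split()
existing = [Version(t.lstrip("v")) for t in tags]

if not existing or version > max(existing):
    print("VERSION is newer than all existing tags, trigger the GH release")
else:
    print("VERSION already released, skip (or fail the build)")
```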
for our branch model, a PR whose version already exists as a release breaks the build
in a master-only branch model, you can just use version-commitsha if the version already exists (for the docker/artifact tag)
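For that fallback, the tag construction might look like this (a sketch; the short-SHA length and the separator are assumptions):
```python
# Sketch: pick a docker/artifact tag, appending the commit SHA when the plain
# version tag has already been released.
import pathlib
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout.strip()

version = pathlib.Path("VERSION").read_text().strip()
already_released = bool(git("tag", "--list", f"v{version}"))

artifact_tag = f"{version}-{git('rev-parse', '--short', 'HEAD')}" if already_released else version
print(artifact_tag)  # e.g. "1.2.3" or "1.2.3-ab12cd3"
```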
we’re considering using something like https://github.com/mkj28/semversions for CI/CD where each commit to, say, master gets a unique, sequential semver-like tag. Uses vYYYY.MMDD.nnnn format, which allows us to avoid the “what does semver mean for my project” discussion
For testing semantic versions in git. Contribute to mkj28/semversions development by creating an account on GitHub.
(it is Codefresh-specific)
(but can be easily repurposed as needed)
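The tag format itself is simple to reproduce (a rough sketch of the idea, not the semversions script; the sequence number is assumed to come from a CI build counter):
```python
# Sketch: generate a vYYYY.MMDD.nnnn tag per commit to master.
# The sequence number would typically be a CI build counter; here it is a parameter.
import datetime

def dated_tag(build_number: int) -> str:
    now = datetime.datetime.utcnow()
    return f"v{now:%Y}.{now:%m%d}.{build_number:04d}"

print(dated_tag(42))  # e.g. "v2019.0117.0042"
```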
@Igor Rodionov
2019-01-17
Nice
A place for everything without a home. Contribute to jessfraz/junk development by creating an account on GitHub.
Hey, we are evaluating different tools for CI/CD (Codefresh, Buildkite, CodePipeline, etc). I have a question for the users of CodePipeline: how do you CD your pipeline definitions? E.g. in Codefresh or Buildkite, this is read from VCS when a run is triggered
Closest I could see is having the pipeline update the pipeline itself, but I'm a bit concerned how that would affect the run that updated it. Let's say you have a PR that updates the pipeline plus some code to accommodate the pipeline update; your PR CI runs using the old pipeline. Let's say you merge and it deploys the new pipeline update first: would the subsequent steps run with the updated pipeline?
@pecigonzalo we have an example of using CodePipeline here https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L115
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
it deploys this repo (by default) https://github.com/cloudposse/jenkins
Contribute to cloudposse/jenkins development by creating an account on GitHub.
Yeah I have seen that @Andriy Knysh (Cloud Posse) thanks
so I think everything goes into https://github.com/cloudposse/jenkins/blob/master/buildspec.yml
Contribute to cloudposse/jenkins development by creating an account on GitHub.
which is in the repo itself
I mean you can add any commands in there
Yeah, but I don't know if that answers my question though
yeah, I don't know either, it was a long time ago when we tested it
I mean, the module deploys a pipeline, that is great, and then the jenkins repo has its own build definitions
which is also good
but then, what deploys the pipeline itself? another pipeline?
(talking about AWS codepipeline)
the module deploys it
when using codefresh, buildkite, etc you can define the pipeline itself in the repo
the module defines the code for it, but does not deploy it
you have to “apply” that module somewhere
ah, I see now what you mean
I have seen some other CI/CD tools/companies have a repo containing all the pipelines, and that itself had its own pipeline, manually deployed as it was simple enough
We have a strategy of provisioning the pipeline creation GitOps-style via a centralized repo
but then keep the pipelines themselves in the repos they manage.
I'm not sure I follow: you have a repo with the list of repos, clone them all, and parse/deploy the pipelines in them?
so, in most CI/CD systems (other than CodeBuild/CodePipeline with terraform), the act of adding the repo and configuring the pipeline to use a manifest (e.g. circle.yml, travis.yml, codefresh.yml, Jenkinsfile) is a manual process
We’ve automated that provisioning using GitOps style best practices
For #codefresh, we stick that in an org/codefresh repo
Well actually, for Travis (and I believe also Circle) it's just enabling it, like travis enable from the repo, then it automatically picks up the file
But I think I got the idea, and if I understood correctly it is similar to the workflow I was describing at the top of the thread
but meh, not a fan of that workflow
so for a workflow to update the pipeline itself, maybe consider atlantis
(which of course needs its own pipeline to be deployed, or you can just apply it to deploy on Fargate as @Erik Osterman (Cloud Posse) is doing now)
Exactly! It's the chicken and egg
Yeah fargate seems perfect for atlantis
TBH, for some of the things that we don't deploy often, I don't mind deploying them “automanually”
as a side note, I think I mentioned this before, as much as I like the idea of atlantis
I don't like the apply-before-PR-merge workflow. It would be like deploying before merging
it can even have issues on branches that aren't up to date