#release-engineering (2018-07)
All things CI/CD. Specific emphasis on Codefresh and CodeBuild with CodePipeline.
CI/CD Discussions
Archive: https://archive.sweetops.com/release-engineering/
2018-07-24

@Erik Osterman (Cloud Posse) has joined the channel

set the channel description: CI/CD Discussions

@rohit.verma has joined the channel

@Igor Rodionov has joined the channel

@Jeremy G (Cloud Posse) has joined the channel

@Max Moon has joined the channel

@dave.yu has joined the channel

@jonathan.olson has joined the channel

@Andriy Knysh (Cloud Posse) has joined the channel

@evan has joined the channel

@rohit.verma
how to use semantic versioning: I actually couldn’t figure out how to start from a specific version. We have created charts in a repo under the path charts/.
I have written a make step to detect which charts have changed. I want to automatically increment their semantic version. Is this something that can be achieved with this module? What I found is that it generates the semantic version as 0.0.0-sha(commit-id)
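A minimal sketch of the change-detection make step described here, assuming the layout charts/&lt;chart-name&gt;/ in the repo (the function name and ref are hypothetical):

```shell
# List chart directories touched since a given ref.
# Assumes charts live under charts/<chart-name>/ in the repo.
changed_charts() {
  git diff --name-only "$1" -- charts/ 2>/dev/null \
    | awk -F/ 'NF > 1 { print $2 }' \
    | sort -u
}

# e.g. inside a git checkout: charts changed by the last commit
changed_charts HEAD~1
```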

We follow the convention of tightly coupling charts with the microservice. We stick them in the charts subfolder. The calculus of knowing which charts work with which docker images is a lost cause. The official chart repo by Kubernetes is optimizing for a different use case, which is why they have all charts in one repo. The semantic version of our charts is derived from the nearest git tag in the tree. So if there are no previous tags, you get 0.0.0.

We have some documentation on our process here: https://docs.cloudposse.com/release-engineering/cicd-process/semantic-versioning/

(with codefresh)

set the channel topic: All things CI/CD. Specific emphasis on Codefresh and CodeBuild with CodePipeline.

@Igor Rodionov is working on a way to promote charts and images between repos (for @Jeremy G (Cloud Posse))

the rationale for defining charts in separate repos:
- Encapsulate ops from devs (both code- and structure-wise)
- Within dev.niki.ai we have the required service/helmfile with image tags, so it makes sense to us to sync up the whole infra from one repo
- The infra changes (including services) can be monitored in one pull request
- We can even move chartmuseum to GitLab Pages in the future, or if we find a free chart hosting service, it can be synced from just one repo

Encapsulate ops from devs (both code and structure wise)

What I was looking for from SEMVER is a way to incrementally update the semantic version only

isn’t that antithetical to devops?

i think restriction is unethical, encapsulation is not

i want the developers to write the charts to deploy their apps

i’ll write one as an example, they write the rest

when their app architecture changes, I won’t know

they should update the chart respectively

I never mentioned that devs shouldn’t commit to other repos, but this won’t be as frequent as code commits

i think the semver stuff will work though for your use case anyways

but versioning your charts will be manual

even that is also one more point:
- Separate pipelines for charts and code

the same way they are versioned in kubernetes/charts

but the pipeline will generate the semvers for the docker containers

and will pass that to the chart as the image tag to deploy

our strategy has been to pin charts to containers

that way for every single version on an app, there’s a chart that will deploy it

makes it very easy to maintain and understand what’s going on. departing from that will introduce new challenges.

in place of pinning to charts, i think it should be pinned to the helmfile

the version of charts is an entirely separate thing from the version of code

but there’s no artifact storage for helmfiles

it can still be managed in the same way, using yq

- name: '{{ env "RELEASE_NAME" }}'
  labels:
    chart: "somechart"
    component: "app"
  version: '{{ env "CHART_VERSION" }}'
  chart: 'chart-repo/{{ env "CHART_NAME" }}'
  namespace: '{{ env "NAMESPACE" }}'
  values:
    - '{{ env "RELEASE_NAME" }}.yaml'

so that’s the helmfile from a microservice that says how to deploy it

the envs come from the pipeline

you could add image tag also as part of set

yep

so that’s the way to do it if you want to decouple the charts from the service repo
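To make that concrete, a sketch of the env-driven flow; all values below are made-up placeholders, and passing the image tag via --set assumes the chart exposes an image.tag value:

```shell
# Placeholder values; in a real pipeline these are set by CI
export RELEASE_NAME=api-app
export CHART_NAME=api-app
export CHART_VERSION=0.1.0
export NAMESPACE=staging
export IMAGE_TAG="0.1.0-sha.abc1234"

# helmfile resolves the {{ env "..." }} templates in the helmfile;
# the image tag rides along as a set value (requires helmfile installed):
#   helmfile sync --set image.tag="$IMAGE_TAG"
```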

the point is that charts represent how a service should run infrastructure-wise, but not what a service is running

what a service is, is defined by the container

something similar could be said for the Dockerfile, no?

OS ~ Dockerfile ~ service

not at all, its all about how we package the service

cluster ~ chart ~ docker

service ~ Dockerfile | cluster ~ chart | release ~ version

services are tightly coupled with the dockerfile; we can’t use one service’s dockerfile for another

if using a monochart (declarative helm chart) for multiple services, i think it makes sense to move it out

but if a chart is 1:1 to a service, there’s no overwhelming reason to separate them

to be frank, we are not using a monochart but an identical copy of the chart for each service

just in case we need to modify something

it’s like we have 3 charts

external, internal, job

all external services just require the external chart, nothing else

all internal services require internal chart

but our differences aside, I think doing what you said, passing the image tag into the helmfile from an ENV

and same for jobs

will accomplish what you want in the end, no?

kind of, it’s more about doing a git commit on service updates within the helmfile

(i want to move to a monochart for new services so they all follow a similar architecture)

yeah, but everything else aside => I doubt we have a function which does this: Input => 1.0.0, Output => 1.0.1

yea, we don’t do that

knowing when to bump versions I think requires a human
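For completeness, the purely mechanical half of what rohit asked (1.0.0 in, 1.0.1 out) is a tiny function; the human part is deciding whether a change warrants a patch, minor, or major bump. A POSIX-sh sketch (the function name is made up):

```shell
# Increment the patch component of an x.y.z version string
bump_patch() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  echo "$major.$minor.$((patch + 1))"
}

bump_patch 1.0.0   # prints 1.0.1
```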


our semantic versioning takes the x.y.z from the most recent tag

and computes metadata based on branch information (for staging environments)

(so what we’re doing is really not that magical at all)
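A rough approximation of that derivation with plain git commands; the metadata format mirrors the 0.0.0-sha(commit-id) example earlier, but the exact scheme is an assumption (see the linked cloudposse docs for the real process):

```shell
# x.y.z comes from the nearest tag; fall back to 0.0.0 when no tag exists
version=$(git describe --tags --abbrev=0 2>/dev/null || echo "0.0.0")

# short commit sha becomes build metadata for staging environments
sha=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")

echo "${version}-sha.${sha}"
```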
2018-07-25

@Arkadiy has joined the channel
2018-07-26

GitLab is one of the supported Git providers in Codefresh. In this article, we will look at the advantages of Codefresh compared to the GitLab CI platform.

@Yoann has joined the channel

just a heads up: if anyone else had the “CodeFresh” status check as a required check for PRs, it no longer exists. Now your PR will get a pipeline-specific status update. If you have repos that require the “CodeFresh” status, disable it in favor of the pipeline status

thanks @Max Moon! didn’t know that.

@dave.yu @jonathan.olson

NP!

@michal.matyjek has joined the channel

2018-07-27

@michal.matyjek not sure if this is too complicated of a PR for you to take an interest in

What: Use helm package to change the version and appVersion of the chart. Use the convention that the default image tag is based on appVersion. Create promote targets that allow promoting the chart to the required versio…

but we are working on the ability to easily/cleanly promote images and charts between repositories

The goal is to be able to do something like
make release/promote \
CHART_NAME=api-app \
SOURCE_VERSION=0.1.0 \
TARGET_VERSION=0.2.0 \
SOURCE_IMAGE=api-app:0.1.0-sha.a989ads8 \
TARGET_IMAGE=api-app:0.2.0

yeah

(interface not yet formalized)

we’re not there yet but the more I think about it - this seems to be the way to go

are you guys using multiple codefresh accounts?

we are not

that also seems like a must have

elaborate?

it’s the only way to RBAC the production kubernetes integration apart from the staging clusters’ kubernetes integration

on what you posted - in our flow we may do this for releases, for “master” CI we would autoversion

RBAC - waiting for CF to implement, on our “must-have” list, last I heard it is in progress

multi-account is available now I thought

so you set up all production pipelines in the production account

all staging pipelines in the other account

that way staging pipelines can never accidentally modify production

yeah makes sense

Not so long ago, hosting your code on the cloud almost always meant that you used Github. Teams were fairly standardized in their choice of git provider. This is no longer true. Several other solutions are now challenging Github, including Bitbucket and Gitlab. Moreover, several companies have chosen to actually host their code on-premises creating …
2018-07-29

Anyone using dockerfile linter they can recommend?

A ways back, I had looked into it and didn’t find one that I liked, but that was partially because they were all in either python or ruby.

If you find one you like, lmk! Would love to add it to our own build harness. I might concede on the ruby/python now :-)

I don’t think twistlock can do the level of linting we would prefer, so looking for something. I did not realize there are so many (simple google search shows 4-5 right away)

Also do a GitHub repo search

@rohit.verma have any linting suggestions
2018-07-30

Looked at the 4-5 docker linters I could find and ran one of our Dockerfiles through them. The two that I liked the most are https://github.com/RedCoolBeans/dockerlint and https://github.com/hadolint/hadolint - the only 2 that complained about the non-array-style CMD or ENTRYPOINT
https://github.com/hadolint/hadolint seems well maintained (last updated 8 days ago) and has the most stars and forks
I also like the configuration and per-line excludes.
dockerlint - Linting tool for Dockerfiles
hadolint - Dockerfile linter, validate inline bash, written in Haskell
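On the per-line excludes: hadolint reads “magic” comments placed directly above an instruction, and rules can also be skipped at invocation time with --ignore. A small sketch (the rule codes here are just examples):

```shell
# Write a throwaway Dockerfile showing exec-form CMD (the array style the
# linters above check for) plus hadolint's inline ignore comment:
cat > /tmp/Dockerfile.example <<'EOF'
FROM alpine:3.18
# hadolint ignore=DL3018
RUN apk add --no-cache curl
CMD ["curl", "--version"]
EOF

# then, with hadolint installed:
#   hadolint --ignore DL3007 /tmp/Dockerfile.example
```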