#release-engineering (2020-04)
All things CI/CD. Specific emphasis on Codefresh and CodeBuild with CodePipeline.
CI/CD Discussions
Archive: https://archive.sweetops.com/release-engineering/
2020-04-01
2020-04-02
2020-04-04
noooooooooooo the thread about atlantis is gone!!!!!!!!
the first thread…. wasn’t there an archive ?
I found it….
I found this one :
SweetOps Slack archive of #atlantis for October, 2019.
which is not mine but basically explains the same things I asked a while ago
2020-04-08
Deploys require a careful balance of speed and reliability.
2020-04-09
does someone here have a tool for a timeline?
Currently we are using Slack and a GitLab snippet as a log of when we updated certain components, but I think it would be nice to have a tool that lets you record that we updated something and then go back in time.
Also it should maybe be able to aggregate other sources, like Alertmanager and Bugsnag.
something like this
2020-04-10
I’m in the middle of a rather unpleasant task of choosing a company-wide CI, rather than letting each team roam free. The requirements are: manually-triggered jobs and pipelines from the UI, webhook pipelines, automatically detected and invoked pipelines from a repository, restarting individual jobs, UI for inputting variables, GitHub repo integration and SSO, configuration as (non-arcane) code, Docker-based, kubernetes-native workers, parallelism, possibly Secrets Manager integration.
I’m preferring self-hosted or stuff we already pay for.
So far I’ve rejected:
• Travis - can’t believe we’re paying for it
• Drone, got better since the last time but still no manual builds
• GitHub Actions (not self-hosted, but we effectively already pay for it) - multiple features missing, including a UI for manual triggering
• Concourse - workers basically pretend Kubernetes pods are machines
• GitLab - I used it internally in a team but it is ridiculously expensive when used only as a CI. Would be 5 times cheaper if used as a Git hosting platform AND a CI at once, but I doubt the business will want to migrate all repos.
Not yet rejected:
• Jenkins - requires a bazillion plugins to do the right thing, and configuring them with configuration as code is truly arcane. Basically does the wrong thing by default, and making it work the way we want in 2020 is laborious. It’s also depressingly ugly, and fixing that requires yet more plugins, installing which defeats the point of having Docker containers, unless someone builds their own (sound idea). I’m guessing the pipeline DAG view also requires a plugin. Java stack traces. Devs have bad memories. Fortunately its stable Helm chart gets the Kubernetes part right; I am appalled, however, that declarative Jenkinsfiles for some reason need to know they use Kubernetes… They should only know about Docker…
• GoCD - sadly, basically Jenkins made 10 years later. Kilometre-long tracebacks on startup, but it serves. Requires non-official plugins to register with GitHub, and the plugins don’t seem well-maintained. Seems to need less adjusting than Jenkins. Would pick it over Jenkins, if not for the plugins…
Not yet tried:
• ArgoCD - but it seems like it won’t have UI for manual triggering
• Tekton
• SourceHut
• CodeBuild (not self-hosted but we’re on AWS anyway)
Not yet looked at:
• Jenkins-X - who knows
• Bazel
• Buildbot
What else should I consider?
Tekton/Jenkins-x are kubernetes only. Does that eliminate them from the mix?
And your list of requirements is basically everything - you want all the things.
So pipeline as code as a requirement is easy to meet, but then you also ask for manual input of variables and manual releasing in your CD process.
You can sort of get most of your requirements from Azure DevOps (I’m a bit surprised to be saying that right now), but that manual release process would just mean a separate pipeline as code you’d set to run without a trigger (so manually kicked off)
and you could update variables in the pipeline at that point I believe
I just got off a project wherein I created a ton of pipeline as code for the platform, so it’s on my brain atm….
ArgoCD is all kube cluster driven btw. There are manual deployments but, as the name states, it is meant for CD, not really for CI
Tekton can be whatever you like and there is a nifty UI that is available for it but I wouldn’t call the pipeline as code an easy thing to consume for it. That’s what jenkins-x wraps around and attempts to orchestrate
Bazel is a build tool you would use in a pipeline so that doesn’t really equate
I question the need to manually kick off builds. It makes me think that you aren’t using version control correctly or that you are lacking an artifact store
look into Buddy (buddy.works)
Yes, I’m looking for something that can utilise the Kubernetes autoscaling I already have.
There are multiple users - dev, devops, and techops (let’s call them that) - and while for some the git-driven workflow is acceptable, for others it won’t be, and even getting them to move to Jenkinsfiles would be quite a leap.
GitLab basically checked most of these points (apart from having a decent UI for custom job parameters), but the external-repo CI use case is priced prohibitively - IMO to force people to migrate repos there. It was acceptable for one of the core teams with relatively few members.
Are you certain you don’t need manually triggered deployments only though?
In either case, buddy seems like an interesting one to look into. Not certain about ability to self host with it though
It’s complicated - in one team there’s one testing deployment per branch (GitLab manual step triggering a pipeline in the deployment repo), in another - it’s not limited (Jenkins form invoking a ton of scripts + ansible orchestrating an EC2).
There’s definitely a number of periodic jobs there.
Sounds like a party
We’ve got 10 years of free for all with no devops person behind.
I’ll take a look at the solutions suggested. Thanks.
Do you have to run on k8s? What about ECS / fargate, or combination (e.g. workers are plain EC2 instances in an AWS autoscale group)?
I’d rather minimise the amount of environments if possible.
I’d rather swallow the clunkiness of Jenkins than introduce ECS when I already have EKS figured out. As long as the CI I use is Docker-based, I’ve got everything figured out when it comes to deployments there.
If you go with Jenkins, you should look at the Blue Ocean suite of plugins. Blue Ocean is basically a prettier interface for pipelines that sits on top of Jenkins. If you’re willing to split your CI and your CD, you may also want to look at harness.io. It’s not self-hosted but it’s easy for your development teams to create pipelines and view deployments.
I definitely want at least Material UI, so as to save developers some trauma. Blue Ocean is actually broken on our current installation of Jenkins. Want to try it there. I’m now suffering through configuring it with code, which is not a pleasant experience (still trying to do it right).
(we’re experimenting now with jenkins-operator - too early to give a verdict, but so far so good)
I’m trying to use the helm chart and it kind of works. Sadly I started using helm and now it’s too late.
btw, @Karoline Pauls see this: https://cloudposse.com/devops/jenkins-pros-cons-2020/
Big fan of azure devops yaml pipelines. Super easy to get going, use powershell or bash or python… Use docker commands with support for nested virtualization :-)
The Kubernetes Plugin works well but complicates Docker builds. Running the Jenkins slaves on Kubernetes and then building containers requires some trickery. There are a few options, but the easiest one is to modify the PodTemplate to bind-mount `/var/run/docker.sock`. This is not a best practice, however, because it exposes the host OS to bad actors. Basically, if you have access to the Docker socket, you can do anything you want on the host OS. The alternatives, like running Podman, Buildah, Kaniko, Makisu, or Docker BuildKit on Jenkins, have virtually zero documentation, so I didn’t try them.
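For reference, the bind-mount described above looks roughly like this in a pod template (a sketch only - container name and image tag are illustrative, and it assumes the Kubernetes plugin’s YAML-style pod definition):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: docker
      image: docker:19.03   # illustrative tag
      command: ["cat"]
      tty: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
```

Again: anything that can reach that socket effectively has root on the node, which is exactly the risk described above.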
isn’t it possible to use a dind sidecar, like in GitLab?
greenballs
I thought blue was chosen as the colour of success because of the large number of people with red-green colour-blindness.
But overall i appreciate the article and the list of plugins.
@Karoline Pauls curious what you ended up going with and if others were added to the mix like CircleCI, CloudBees, etc?
I learned a lot from this thread, so thank you all for sharing!
Jenkins :<
I also tried Drone, but it’s no longer truly OSS, and the woodpecker fork of it was not finished on the Kubernetes front.
@Karoline Pauls, Thanks for sharing that fact so I didn’t have to figure that out myself. I had mentally put drone on my list of pipelines to checkout. Did you ever make any progress with Tekton by chance?
I played with it but it seems to be more of a CI/CD framework than a CI/CD system
https://github.com/laszlocph/woodpecker
This project needs love however, it’s a fork of Drone 0.8
An opinionated fork of the Drone CI system. Contribute to laszlocph/woodpecker development by creating an account on GitHub.
@Zachary Loeber But after all, if I could, I would use GitLab
2020-04-11
2020-04-12
2020-04-13
Am I right that there is zero support for Blue Ocean in Jenkins configuration-as-code?
Not sure what you mean
It seems that Blue Ocean will force every user to insert their own GitHub API key when creating repositories. Later I can find this API key in my user’s credentials.
I’d rather configure that with the configuration-as-code module but it doesn’t seem possible.
Looks like this kind of prompt will only be required the first time a pipeline is created in Blue Ocean for a specific Git server because it wants to be able to write Jenkinsfiles to the repo
at one point I put together a jenkins job builder driven workflow for automatically precreating several jobs for a Jenkins deployment into containers that were using blue ocean and the resulting deployment did not require additional prompting
I’ll create a different user and check if I’m asked to supply the key again.
or better, create a global credential similar to the one created by Blue Ocean, which I will delete, and check if i’m asked to supply it at all
not sure if this uses blue ocean or not but it looks interesting nonetheless - https://github.com/odavid/my-bloody-jenkins
Self Configured Jenkins Docker image based on Jenkins-LTS - odavid/my-bloody-jenkins
IMO it’s hard to automate Jenkins, not automation systems in general.
Plus it’s older than dirt
jx is Jenkins modernized for cloud-native development in a highly opinionated manner. Unfortunately, if you don’t do trunk-based development internally, it can be a hard sell.
(some of the things jx does is pretty darn cool though…)
I think you can still have an unopinionated tool that’s cleaner than jenkins, as long as it avoids dependency hell by having one opinion: containers.
ok, so the “robocop” bot just scolded me for using bad words… i said “dependency h e l l”
yeah, h-e-double-hockey-sticks got me slapped a few times now
it makes me chuckle every time
2020-04-14
Are there any advisories about security when running Jenkins jobs on Kubernetes defined with a Jenkinsfile? There’s nothing stopping me from defining `serviceAccountName: jenkins` in my Jenkinsfile and accessing all secrets Jenkins has been given access to.
It seems that the only way would be to make Jenkins run its slaves in a separate namespace.
I don’t really see much use in the ability to specify a pod template, other than specifying different containers and resource requests/limits but there doesn’t seem to be a way to restrict that.
I tried going even further and testing the security of my RBAC setup by specifying `namespace: kube-system`. I could see Jenkins failing to create a pod in the system namespace, but for some reason Jenkins committed suicide afterwards.
The Helm chart I’m using allows specifying the namespace for Jenkins quite trivially. I tested the restriction and confirmed it to be working.
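A sketch of what the namespace isolation could look like, assuming the controller runs in a `jenkins` namespace with a `jenkins` service account and agents are pushed into a dedicated `jenkins-agents` namespace (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-agent-manager
  namespace: jenkins-agents
rules:
  # Pod management only - deliberately no "secrets" resource here.
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["create", "delete", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agent-manager
  namespace: jenkins-agents
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-agent-manager
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```

With this shape, a Jenkinsfile can still spawn agent pods, but the controller’s credentials can’t read secrets in the agent namespace.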
Hi there! I am trying to write scripts to reset database states so that some tests can be done. Are there any good tools for managing that?
Also trying to figure out if there are good tools for anonymizing data from prod. Thank you.
Possibly Liquibase or Flyway. I’ve only used Flyway before, via Maven, and it’s more involved than running a simple script. The clean command would ‘reset’ things
@Zachary Loeber Thanks a lot. Will look into those recommendations. We live in the python world mostly.
Let us know how things turn out. I’m mostly interested in the db realm from a distance. Maybe I can learn a thing or two from your efforts
For sure. happy to share after we implement something
Recently I’d seen a framework for this. I can’t find it now, though.
It was like this: https://github.com/DivanteLtd/anonymizer (but I don’t think this is the one)
Universal tool to anonymize database. GDPR (General Data Protection Regulation) data protection act supporting tool. - DivanteLtd/anonymizer
i am going to try all these
and let you know.
thanks
Data anonymizing is highly dependent upon your schema, so I’m thinking you are going to be doing manual scripting. Knowing nothing about your database, there are a few tools out there worth taking a peek at: https://github.com/davedash/mysql-anonymous or http://sunitparekh.github.io/data-anonymization/
A support group for people who have PII in their mysql databases. - davedash/mysql-anonymous
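To make the “manual scripting” point concrete, here is a deliberately crude sketch (file names and placeholder address are invented) that masks email addresses in a SQL dump before it is loaded into a test database. Real anonymization would be driven by your actual schema, column by column:

```shell
# Mask quoted email addresses in a dump with a fixed placeholder.
# This is naive on purpose; per-column rules are usually needed.
cat > dump.sql <<'EOF'
INSERT INTO users VALUES (1, 'alice@example.com');
INSERT INTO users VALUES (2, 'bob@example.org');
EOF

sed -E "s/'[^']+@[^']+'/'user@redacted.invalid'/g" dump.sql > dump.anon.sql
cat dump.anon.sql
```

The `.invalid` TLD is reserved, so nothing ever routes to the masked addresses even if a test environment sends real mail.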
2020-04-15
https://plugins.jenkins.io/aws-secrets-manager-credentials-provider/ Am i right that this plugin basically allows any project to use any credential, with no possibility of scoping credentials to projects?
I was joking that, like with Java, no one ever got fired for choosing Jenkins, but now I feel like I should (1) hand my resignation and (2) wear a cardboard box on my head for the rest of my life.
no one ever got fired for choosing Jenkins
was thinking this the other day when you brought up the topic with your requirements.
@Jeremy G (Cloud Posse) relates to our current project
@Jeremy G (Cloud Posse) which plugin did we settle on?
We are planning to use configuration-as-code-secret-ssm but have not implemented it, as that also requires creating an AWS IAM role for Jenkins, and we don’t have a use case yet. Most likely, though, it is going to have the same behavior.
I’ve got a crazy idea of adding a mutating admission webhook handler to k8s that matches the part of the pod metadata Jenkinsfiles are not capable of specifying (candidates: `metadata.annotations.{buildUrl,runUrl}`, `metadata.labels.jenkins/label`, maybe even `metadata.name`) and injects secrets based on the project name found there.
Heavy guns.
i just found that there’s nothing stopping anyone from specifying:

```yaml
volumeMounts:
  - name: etc
    mountPath: /hostetc/
volumes:
  - name: etc
    hostPath:
      path: /etc/
```

in the Jenkinsfile.
If it was only possible to vary the default pod security policy per namespace… not sure how.
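For what it’s worth, pod security policies are authorized via RBAC, which can be scoped per namespace - so a restrictive policy can be bound only to the agents’ service account. A sketch (names illustrative; assumes the PodSecurityPolicy admission controller is enabled, as it was on 2020-era clusters):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: jenkins-agents-restricted
spec:
  privileged: false
  # No hostPath in the allowed volume types, so /etc-style mounts are rejected.
  volumes: ["configMap", "emptyDir", "secret", "projected"]
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-restricted-psp
  namespace: jenkins-agents
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["jenkins-agents-restricted"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-restricted-psp
  namespace: jenkins-agents
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-restricted-psp
subjects:
  - kind: ServiceAccount
    name: default
    namespace: jenkins-agents
```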
sadly, https://plugins.jenkins.io/script-security/ didn’t work for me when i tried it
2020-04-16
does someone have a semver check regex handy that works in bash?
Gitlab actually has the ability to filter on more complex regex
```shell
SOMESTRING="v0.2.1"
regex='([0-9]+.[0-9]+.[0-9]+)-?(.*)?'
if [[ $SOMESTRING =~ $regex ]]; then echo "$SOMESTRING matches semver!"; fi
```
probably not the best regex that misses all kinds of edge cases but it will generically do the needful
thanks!
semver bash implementation. Contribute to fsaintjacques/semver-tool development by creating an account on GitHub.
or, just the regex from it:
```
^[vV]?(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(\-(0|[1-9][0-9]*|[0-9]*[A-Za-z-][0-9A-Za-z-]*)(\.(0|[1-9][0-9]*|[0-9]*[A-Za-z-][0-9A-Za-z-]*))*)?(\+[0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?$
```
that regex sadly did not work in bash
zsh: failed to compile regex: repetition-operator operand invalid
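For reference, a version of that regex that compiles in bash: `[[ =~ ]]` uses POSIX ERE, where `\-` and `\+` are not valid escapes, so here the hyphen is left literal and the plus is wrapped in a bracket expression (a sketch; function name is ours):

```shell
#!/usr/bin/env bash
# Bash-compatible semver check using POSIX ERE.
semver_re='^[vV]?(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-(0|[1-9][0-9]*|[0-9]*[A-Za-z-][0-9A-Za-z-]*)(\.(0|[1-9][0-9]*|[0-9]*[A-Za-z-][0-9A-Za-z-]*))*)?([+][0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?$'

is_semver() {
  [[ $1 =~ $semver_re ]]
}

is_semver "v0.2.1"         && echo "v0.2.1 is semver"
is_semver "1.2.3-rc.1+b42" && echo "1.2.3-rc.1+b42 is semver"
is_semver "1.2"            || echo "1.2 is not semver"
```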
I used to have a problem. I used a regular expression to solve it, now I have 2 problems.
this isn’t exactly what’s being asked for, but it’s a semver tool in bash. we use and like it a lot, it’s great: https://github.com/pnikosis/semtag
A sematic tag script for Git. Contribute to pnikosis/semtag development by creating an account on GitHub.
ditto on lifting the regex out of it
2020-04-17
2020-04-18
2020-04-19
2020-04-21
am I right that Jenkins cannot interpolate variables defined in the same `environment` block?
like:

```groovy
environment {
    A = "A"
    B = "${A}-B"
}
```
EDIT: `"${env.GIT_COMMIT}"` passes but returns nulls…
EDIT 2: it needs `agent any` in the pipeline
2020-04-23
I am trying to re-do one of my test environments to allow testing SMS and email. Rather than sending real SMS and email, are there any best practices for how to set this up? Should we mock them, or are there 3rd-party services we can use for this?
@awatson is this something you have worked with?
@awatson has joined the channel
I generally mock them for testing; for Twilio, which we use, there is also the option of an API token earmarked for development for billing purposes.
We also keep dev/qa/staging/training accounts and have special phone numbers isolated to those environments
I’d also be interested if you come across third-party services.
@awatson Yes we are on Twilio as well. In general, we just have a prod and a non-prod acct. We don’t further differentiate like you have done. Any special advantage for doing that?
Mostly for end users, we do a lot with Flex and between QA, Training, and all the other teams it makes life easier.
Often staging different plugins for the UI, or various systems, or things like tearing a back office system down each night on a training space for call centers etc.
ICT/FAT/UAT… we have a lot of hoops to jump through.
2020-04-24
2020-04-27
2020-04-30
dunno the best place to put this, but I’m currently working on a release pipeline. for the shell gurus out there, is there an easy way to dynamically substitute env vars into a JSON config file? i.e.
```json
{
  "foo": "${BAR}",
  "asdf": "${ASDF}",
  "apple": "${PIE}"
}
```
There’s a command called envsubst that ships with most Linux distros
It can handle exactly the example you provide above
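A minimal sketch of that, assuming GNU gettext’s `envsubst` is installed (file names and values here are made up):

```shell
# Render a JSON template by substituting exported environment variables.
# envsubst ships with GNU gettext and is preinstalled on most distros.
export BAR="bar-value" ASDF="asdf-value" PIE="apple-pie"

cat > config.tmpl.json <<'EOF'
{
  "foo": "${BAR}",
  "asdf": "${ASDF}",
  "apple": "${PIE}"
}
EOF

envsubst < config.tmpl.json > config.json
cat config.json
```

With no argument, envsubst replaces every `${VAR}` it recognizes; pass a SHELL-FORMAT string (as in the snippet further down the thread) to restrict substitution to specific variables.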
i second it - either `envsubst`, or in some places I do a `sed` replacement
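The `sed` variant might look like this (a sketch; the variable and file names are just the ones from the JSON example above):

```shell
# Same substitution done with sed instead of envsubst - handy when
# gettext isn't available in a minimal CI image.
BAR="bar-value" ; ASDF="asdf-value"

cat > config.tmpl.json <<'EOF'
{
  "foo": "${BAR}",
  "asdf": "${ASDF}"
}
EOF

# Each -e expression replaces one literal ${VAR} placeholder.
sed -e "s|\${BAR}|${BAR}|g" -e "s|\${ASDF}|${ASDF}|g" config.tmpl.json
```

The obvious downside versus envsubst is that every placeholder needs its own expression, which gets tedious fast.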
and the different json files can have any env variables?
```shell
DEPLOYVARS='$service_endpoint_id:$group_name:$group_description:$keyvault_name:$ado_project_name:$ado_project_id'
envsubst "$DEPLOYVARS" < "${TEMPLATEFILE}" > "$(basename "${TEMPLATEFILE}").out"
```
whoops
well that is a snippet of how you might use it to selectively replace variables
wow this is the perfect tool
After a while, you’ll probably reach a point where `envsubst` is not enough
That’s when you can use https://gomplate.ca/
note that `gomplate` lets you load settings from files, SSM, HTTP, and probably a dozen other places, as well as use logic in the form of conditionals
like parameter store ?
so somewhat a replacement for chamber ?
or this is only good to build templates from those sources ?
gomplate documentation
I’ll go as far as to recommend skipping over envsubst and using something like gomplate to start with. That way you aren’t converting template files later on to the tooling you end up eventually switching over to when you get sick of bash hacks (whether it ends up being gomplate or dockerize or anything else that is way more capable than envsubst).
I did that (envsubst), then brief foray into helm, then Spinnaker parameters then back to raw yaml… Now jsonnet and considering cue or go.
the double-whammy is that you are then also practicing the templating language that helm uses (with some of its own special functions)