#release-engineering (2020-05)

jenkins_ci — All things CI/CD. Specific emphasis on Codefresh and CodeBuild with CodePipeline.

CI/CD Discussions

Archive: https://archive.sweetops.com/release-engineering/

2020-05-22

Callum Robertson avatar
Callum Robertson

Hey team

Callum Robertson avatar
Callum Robertson

has anyone here done any setup of Jenkins on K8’s?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, we are using the Jenkins operator

Zachary Loeber avatar
Zachary Loeber

you have any caveats to using it? Scaling agent workloads, things like that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s been non-trivial to run on Kubernetes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii (Cloud Posse) and @Jeremy (Cloud Posse) are leading the charge

Zachary Loeber avatar
Zachary Loeber

I worked with a client who told me he once slammed a cluster with a build via an uncontrolled Jenkins agent

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Callum Robertson are you setting it up on EKS?

johncblandii (Cloud Posse) avatar
johncblandii (Cloud Posse)

@Callum Robertson we used https://github.com/jenkinsci/kubernetes-operator to deploy Jenkins. There are some caveats/issues with it being a “perfect” deploy, but overall it works and, once running, is pretty easy to update.

I’ve also used CloudBees so installed it with https://docs.cloudbees.com/docs/cloudbees-jenkins-distribution/latest/distro-install-guide/kubernetes#install-cjd-helm.


Jeremy (Cloud Posse) avatar
Jeremy (Cloud Posse)

The Jenkins operator is only “alpha” quality right now and is likely to evolve significantly before being production-ready. Jenkins itself has been around a long time and is showing signs of old age, but is battle-tested and full of options. Still, I would take a look at alternative CI/CD systems like GitHub actions.

Any CI/CD system you deploy in your cluster is going to have the potential to slam your cluster if not properly configured or managed. This is why most people run them on dedicated, isolated clusters.

Callum Robertson avatar
Callum Robertson

native master on K8’s and agents, etc

2020-05-15

btai avatar

yes!!

btai avatar

big news on org secrets

btai avatar

now if they will just allow us to rerun successful builds (for implementing/testing purposes) that’d be awesome

2020-05-14

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) just shared with me that GitHub Actions finally has organizational secrets!!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not GitHub Actions per se, it’s all Secrets (they could be used for GH Actions)

Gabe avatar

Can the secrets be read by things other than actions? Like GitHub applications?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh sorry, looks like you can create a secret from an app, but you can read only the encrypted value of it https://developer.github.com/v3/actions/secrets/


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

only GitHub Actions (for now) can decrypt the secrets

Gabe avatar

Ah okay. I’m waiting for them to allow Gitub Applications to read the secrets :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no more copying a secret to 300+ repos

loren avatar
loren

Oh that’s awesome

2020-05-13

Steve Boardwell avatar
Steve Boardwell

Ha! Talking of repository scanning, someone just commented on this ticket I created last year https://issues.jenkins-ci.org/browse/JENKINS-56878

2020-05-12

Karoline Pauls avatar
Karoline Pauls

It is quite interesting, there is a “set environment variables” step that the declarative pipeline inserts. If i do an SCM checkout explicitly (and skip the default one), there are no GIT_* variables.

2020-05-11

Karoline Pauls avatar
Karoline Pauls

So I assumed that an input stage in a Jenkins pipeline stage with agent none does not occupy an executor while waiting for input. How wrong I was. To prevent it from idling executors, it needs a when block with beforeAgent true. If there is no condition you’d want in when, you have to add a condition that always evaluates to true, e.g. expression { true }.
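A minimal sketch of that workaround, assuming a top-level agent none; the stage name and the always-true when condition are illustrative:

```groovy
// Sketch of the beforeAgent workaround described above: the always-true
// 'when' condition exists only so 'beforeAgent true' takes effect and no
// executor is held while the input step waits.
pipeline {
    agent none
    stages {
        stage('Approve deploy') {
            when {
                beforeAgent true
                expression { true }   // dummy condition, always passes
            }
            steps {
                input message: 'Proceed with deploy?'
            }
        }
    }
}
```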

Karoline Pauls avatar
Karoline Pauls

it still doesn’t work, maybe because there’s a top-level agent that i needed to access GIT variables…

Steve Boardwell avatar
Steve Boardwell

It may only work if the main agent block is none. The main block defines the default agent to be used. That means that all the other stages need the agent set if they use one.

Steve Boardwell avatar
Steve Boardwell

Not a very elegant solution though.

Karoline Pauls avatar
Karoline Pauls

i’ve just confirmed that

Karoline Pauls avatar
Karoline Pauls

i had to copy my env vars to 5 places

Steve Boardwell avatar
Steve Boardwell

Can you set the env vars in a function with env.MY_VAR=bla? This should set them globally without using the environment block multiple times.

Karoline Pauls avatar
Karoline Pauls

I’m not aware of the ability to set global environment in a declarative pipeline.

Steve Boardwell avatar
Steve Boardwell

Create a function outside the pipeline block def setEnvs() { env.MY_VAR=... } and call it in one of the first stages. Alternatively, you should also be able to use a script block script { ...env.MYVAR=bla.... }
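Roughly, that suggestion looks like this (function and variable names are made up):

```groovy
// A Groovy function defined outside the pipeline block can assign to 'env';
// anything set there is visible to all later stages.
def setEnvs() {
    env.MY_VAR = 'bla'
}

pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                script { setEnvs() }        // set the globals once, up front
            }
        }
        stage('Use') {
            steps {
                echo "MY_VAR is ${env.MY_VAR}"
            }
        }
    }
}
```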

s2504s avatar
s2504s

I use the environment{} section to define all global envs for all stages in a declarative pipeline

Steve Boardwell avatar
Steve Boardwell

The script step is a way to allow scripted pipeline code when declarative doesn’t provide the solution ootb.

Karoline Pauls avatar
Karoline Pauls

can i create a top-level function, outside of script in a declarative pipeline?

is declarativeness just a convention?

Karoline Pauls avatar
Karoline Pauls

also, i think i’ll still have to call setEnv everywhere to make it work with stage restarts

Steve Boardwell avatar
Steve Boardwell

Declarative has a DSL to test conformity and give structure. It doesn’t limit you to just declarative though.

Karoline Pauls avatar
Karoline Pauls

it’s soooo annoying that you cannot get the git hash without checking out

s2504s avatar
s2504s

Git hash can be obtained from body of webhook

Karoline Pauls avatar
Karoline Pauls

how do you obtain the body of the webhook?

s2504s avatar
s2504s
jenkinsci/generic-webhook-trigger-plugin

Can receive any HTTP request, extract any values from JSON or XML and trigger a job with those values available as variables. Works with GitHub, GitLab, Bitbucket, Jira and many more. - jenkinsci/g…
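With that plugin, a declarative pipeline can pull the pushed commit SHA straight from the webhook body, roughly like this (the JSONPath $.after matches a GitHub push payload; the token is a placeholder):

```groovy
pipeline {
    agent any
    triggers {
        // Generic Webhook Trigger plugin: extract values from the POSTed JSON
        GenericTrigger(
            genericVariables: [
                [key: 'GIT_SHA', value: '$.after']   // commit SHA from a push event
            ],
            token: 'my-job-token'                    // placeholder token
        )
    }
    stages {
        stage('Show SHA') {
            steps {
                echo "Pushed commit: ${env.GIT_SHA}"
            }
        }
    }
}
```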

Karoline Pauls avatar
Karoline Pauls

i’d rather stick to what blue ocean does by default

Steve Boardwell avatar
Steve Boardwell

You using a pipeline job or multibranch?

Karoline Pauls avatar
Karoline Pauls

multibranch pipeline

Steve Boardwell avatar
Steve Boardwell

And it doesn’t provide the git info? I would have expected it to. Have you checked the git branch source plugin docs?

Karoline Pauls avatar
Karoline Pauls

you need the checkout scm step (or, in declarative, not using skipDefaultCheckout()) to have any env vars

Steve Boardwell avatar
Steve Boardwell

Yes, that was it. Why do you need the git hash before checkout?

Karoline Pauls avatar
Karoline Pauls

i want to set it globally, without allocating a global agent

Karoline Pauls avatar
Karoline Pauls

actually, there seem to be currentbuild and scm global objects

Steve Boardwell avatar
Steve Boardwell

Even before checkout? If after checkout suffices, set top level agent to none, run Checkout stage with agent of your choice, after checkout run script { env.GIT_HASH=... } to set it for the rest of the pipeline.
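Sketched out, that pattern might look like this (agent labels are assumptions):

```groovy
pipeline {
    agent none                          // no global executor allocated
    stages {
        stage('Checkout') {
            agent { label 'linux' }     // illustrative label
            steps {
                checkout scm
                script {
                    // record the commit SHA for the rest of the pipeline
                    env.GIT_HASH = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()
                }
            }
        }
        stage('Build') {
            agent { label 'linux' }
            steps {
                echo "Building commit ${env.GIT_HASH}"
            }
        }
    }
}
```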

Karoline Pauls avatar
Karoline Pauls

again, I don’t think this will work with re-running steps

Steve Boardwell avatar
Steve Boardwell

I see your point. Forgot the restart at stage bit.

Last chance, according to https://www.jenkins.io/doc/book/pipeline/running-pipelines/#restart-from-a-stage all the build info will stay the same so you could have a function which conditionally runs scm checkout if workspace is empty, or something along those lines.
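That hypothetical helper might look something like this — checking for .git is one crude way to detect an empty workspace after a restart-from-stage:

```groovy
// Hypothetical 'checkoutIfNeeded' helper: only run 'checkout scm' when the
// workspace looks empty, e.g. when a restarted stage landed on a fresh agent.
def checkoutIfNeeded() {
    if (!fileExists('.git')) {
        checkout scm
    }
}
```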


Karoline Pauls avatar
Karoline Pauls

i think the easiest way would be to build self with parameters

Karoline Pauls avatar
Karoline Pauls

but hacky

Steve Boardwell avatar
Steve Boardwell

very. If you really need to avoid starting an agent, I would look at adding the checkoutIfNeeded method to run the checkout scm step only if needed. My guess is that it should work, and you are not messing around with the general flow of things.

Steve Boardwell avatar
Steve Boardwell

Good luck anyway

Karoline Pauls avatar
Karoline Pauls

I need a ton of luck because essentially everything is breaking all the time in ways requiring at least 5 sentences to describe.

Karoline Pauls avatar
Karoline Pauls

I should have just written my own CI, i shit you not. I’ve done something similar once

Steve Boardwell avatar
Steve Boardwell

lol. Is Jenkins a requirement? Can you explain your CI requirements? Maybe we can come at it from another angle.

Karoline Pauls avatar
Karoline Pauls

i posted them a while before

Karoline Pauls avatar
Karoline Pauls

basically, on one hand we got spoiled by GitLab, which generally does everything right; on the other, we cannot afford to use it company-wide; on the other (third) hand, some business people want to be able to use the UI to run jobs, eh

Karoline Pauls avatar
Karoline Pauls

no one ever got fired for picking jenkins

Steve Boardwell avatar
Steve Boardwell

Makes sense. And build-wise, what do you want to happen?

Karoline Pauls avatar
Karoline Pauls

which build-wise?

Steve Boardwell avatar
Steve Boardwell

I have repoA I want a job to… I want the following things to happen for (a) branches, (b) PR’s, etc

Karoline Pauls avatar
Karoline Pauls

at the moment I’m getting hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: POD_LABEL for class: WorkflowScript which is an arcane error message that in the past happened when a pipeline was triggered from another but wasn’t passed some parameters that didn’t have default values.

I am “lucky” that i debugged it once, so now i know what may be wrong. It was excruciating to narrow it down in the first place.

Karoline Pauls avatar
Karoline Pauls

I have a project repo. I want to build a docker image, upload it to the “testing” ECR repository, run tests in parallel utilising that image, then on success push that image to the “production” ECR repo, later have parallel manual stages for (1) testing branch rollout and (2) production deploy, which wait for use input. The “testing branch rollout” stage should have a “delete testing rollout” stage afterwards, which is not dependent on the prod rollout but that’s not really possible in Jenkins because it’s not DAG-based.

I want the input steps not to consume an executor, which i solved as in https://issues.jenkins-ci.org/browse/JENKINS-62250.

Since the input step’s “Abort” button aborts the entire pipeline, I added a “SKIP” checkbox that’s later checked in when , so you can tick “SKIP” and press “Proceed”. Sadly when having 2 of such jobs in parallel, the Blue Ocean UI locks up if you skip one and doesn’t let you deploy or skip the other.
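One way to sketch the SKIP-checkbox idea (names are illustrative; the original checks the value in when, this version checks it in a script block):

```groovy
stage('Deploy to production') {
    steps {
        script {
            // 'input' with a single parameter returns that parameter's value
            def skip = input(
                message: 'Deploy to production?',
                parameters: [booleanParam(name: 'SKIP', defaultValue: false)]
            )
            if (skip) {
                echo 'Production deploy skipped'
            } else {
                echo 'Deploying...'   // real deploy steps would go here
            }
        }
    }
}
```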

Karoline Pauls avatar
Karoline Pauls

the deploy steps trigger a subpipeline in the deploy repo and it mostly works, except for the arcane error message if you accidentally pass a null value that was required

Steve Boardwell avatar
Steve Boardwell

Have you thought about using the build step (https://www.jenkins.io/doc/pipeline/steps/pipeline-build-step/) to run individual jobs? The Jenkinsfile could be the orchestrator (timeouts for stages, etc) with the jobs handling the tasks. If not for all the steps, it would presumably help for the inputs, etc.
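For reference, a minimal sketch of the build step as orchestrator (the job name and parameter are assumptions):

```groovy
stage('Trigger deploy job') {
    steps {
        // 'build' triggers another job; giving every parameter an explicit,
        // non-null value avoids the MissingPropertyException mentioned earlier
        build(
            job: 'deploy-repo/main',
            parameters: [string(name: 'IMAGE_TAG', value: env.GIT_HASH ?: 'latest')],
            wait: true
        )
    }
}
```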

Karoline Pauls avatar
Karoline Pauls

I do use the build step to trigger deployments in the deployment project

Karoline Pauls avatar
Karoline Pauls

there is simply no answer to this, basically the Jenkins architecture is all wrong because if everything is a plugin, there’s no broader picture to how they’re developed. It’s begging for short-sightedness.

Steve Boardwell avatar
Steve Boardwell

Yes, I expect you’re right.

Karoline Pauls avatar
Karoline Pauls

I got that working eventually, but that’s the n-th time i change one thing to see 3 others break. Be that a cautionary tale.

Steve Boardwell avatar
Steve Boardwell

Glad to hear it. How did the solution with the parallel inputs look in the end?

Karoline Pauls avatar
Karoline Pauls

i merged 2 steps into one, allowing for mutually exclusive options

Karoline Pauls avatar
Karoline Pauls

the deploy repo should complain if they’re selected

s2504s avatar
s2504s

Hi all Jenkins experts :) I set up Jenkins on my k8s cluster using helm/stable. All executors are spawned fresh for each new job. Sometimes my jobs are blocked for 4-5 minutes with status “waiting for the next available executor”, even though no other jobs are running at that moment. Looks like this is some timeout, but I don’t know what it is. Has anyone faced this issue?

Karoline Pauls avatar
Karoline Pauls

to some degree, yes

Steve Boardwell avatar
Steve Boardwell

Check the jenkins logs. The k8s plugin fails silently with some things (i think it was an error in my yaml declaration when I had something similar)

Karoline Pauls avatar
Karoline Pauls

if there is an error in yaml, it doesn’t eventually succeed

Steve Boardwell avatar
Steve Boardwell

Wasn’t sure from the OP that it did. Could also be the resources with new nodes spinning up, etc. Would still check the logs and the k8s agent pods. Maybe waiting for something.

2020-05-10

s2504s avatar
s2504s

Hi guys! Glad to see you. I hope you are well. So, I would like to ask you about secrets delivery for applications. I am using the approach where the CI/CD process delivers secrets from secrets storage (Vault, AWS Secrets Manager, etc.) to environment variables for the application, and the application then uses these secrets. But guys on my team suggest the approach where the application gets secrets from secrets storage by itself. I am not sure that it is the right way, so I want to ask you about that. What approach do you use for credentials delivery?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I like the env approach as others have mentioned, but from strictly a security point-of-view, environment variables aren’t the best option because an attacker with access to the node can inspect the environment of a process in /proc/$pid/environ

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, accessing the secret storage directly introduces new problems - complicating local development and requiring more things running under docker compose

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can still use vault with envs (see envconsul) and I think that’s a happy medium, but still suffers from the aforementioned problem.

s2504s avatar
s2504s

Thank you, Erik, for your explanation! Yeah, security is a pain point in any case :)

Gabe avatar

I prefer env variables. It makes it easier to switch out the secrets storage since the application wouldn’t need to change.

s2504s avatar
s2504s

Thank you! yep! It was my first thought too)

2020-05-09

Karoline Pauls avatar
Karoline Pauls

this kind of sucks, devs largely want build-on-push

if you stop branch discovery from discovering branches, it will also filter out webhooks. Dumb but true.

Steve Boardwell avatar
Steve Boardwell

I suppose it depends what you want to test. PRs are also build on push

Steve Boardwell avatar
Steve Boardwell

For us, the only branches worth building/testing were either the main branches (develop, master, release-*, etc) or branches which are going to be merged into another branch. Devs are able to create branches with incomplete work without worrying about failing builds, since a build is only triggered once a PR is created, etc.

When testing a PR, we are also able to test the actual merged state as opposed to the feature branch in question, which may or may not have been rebased against the latest base branch.

Swings and roundabouts with pros and cons on both sides really. We found this method easier to handle in the end.

Karoline Pauls avatar
Karoline Pauls

Everyone is saying “it depends what you want” but in different projects somehow I never had a problem with the way GitLab works.

Steve Boardwell avatar
Steve Boardwell

Yes, definitely. I wasn’t saying it works - it definitely doesn’t - just offering an approach/work-around which helped mitigate the problem.

Karoline Pauls avatar
Karoline Pauls

what about dynamically detecting buildstorms and pausing everything with the web API? (there must be a web API right?)

2020-05-08

Karoline Pauls avatar
Karoline Pauls

kubernetes will scale pods but those builds are useless in the first place

2020-05-07

Karoline Pauls avatar
Karoline Pauls

jenkins update:

• configuring it is more difficult than writing a CI from scratch

• NEVER EVER do plugin-based architecture in your projects. It is begging for short-sightedness in design. Plugin composition will compose implicit assumptions, in the case of Jenkins ones that are way out of date. Right now I’m getting a buildstorm each time I reboot Jenkins. Disabling branch-scan-triggered builds disables webhooks. It’s a house of mirrors and everything is completely backwards, bolted-on, etc.

Some tasks/projects are taxing to one’s sanity. This is one of them. Compared to that, porting a large amount of badly tested Python 2 code to Python 3 with no real integration tests was relatively pleasant.

Karoline Pauls avatar
Karoline Pauls

Sadly, we have no money for GitLab. But we do have money for me to waste time on Jenkins.

joshmyers avatar
joshmyers

Agree Jenkins is very painful, but you can automate the shit out of it

Karoline Pauls avatar
Karoline Pauls

automating Jenkins is an afterthought to an afterthought

Karoline Pauls avatar
Karoline Pauls

Configuration-as-code is a guessing game.

Karoline Pauls avatar
Karoline Pauls

it’s impossible to configure projects other than with job-dsl. Last time I tried it, I couldn’t get it to acknowledge the existence of an already existing credential (by ID).
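For what it’s worth, referencing an existing credential by ID from Job DSL is supposed to look roughly like this (job name, remote URL, and credential ID are placeholders):

```groovy
// Job DSL sketch: the credential 'github-token' is assumed to already exist
// (created via the Credentials UI or configuration-as-code); it is only
// referenced by ID here, never defined.
multibranchPipelineJob('myapp') {
    branchSources {
        git {
            id('myapp-git')                                 // stable source id
            remote('https://github.com/example/myapp.git')  // placeholder repo
            credentialsId('github-token')                   // existing credential's ID
        }
    }
}
```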

joshmyers avatar
joshmyers

Aye, jobdsl is the way to go with Jenkins but still a bag of nails

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii (Cloud Posse) @Steve Boardwell this buildstorm problem might come to bite us too

johncblandii (Cloud Posse) avatar
johncblandii (Cloud Posse)

I’ve been there. A buildstorm can trigger a lot of issues due to scaling needs as it’ll push the current node(s) to the brink which can cause a large scale up with a generally slow scale down.

It is a pain in the neck to manage!

Steve Boardwell avatar
Steve Boardwell

Sorry. A little late to the party.

Agree with all the above, although we have been able to mitigate the problem somewhat by reducing the number of allowed branches (design decision though so may not fit everyone’s needs).

We chose to allow a single development branch, short-lived release branches, along with PRs for changes. We use branch filtering to stop other branches not matching develop or release-x.y.z[.*] being discovered, along with the BuildStrategyPlugin to build said branches only (also using it to stop PR’s from automatically building but that’s not scope of this).

This ultimately means that we only have branches which “should” be built anyway, with PR’s for anything else. As a side-effect, the branches have been built already, which means they don’t fall into the Jenkins restart/repository index scan trap, because the git SHA is not null as with non-built branches.
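A hedged Job DSL sketch of that filtering setup — exact trait names depend on the installed branch-source plugin versions, so treat this as an approximation:

```groovy
// Only discover 'develop' and 'release-x.y.z[.*]' branches; everything else
// goes through PRs. Regex and repo details are illustrative.
multibranchPipelineJob('myapp') {
    branchSources {
        branchSource {
            source {
                github {
                    id('myapp-src')
                    repoOwner('example')
                    repository('myapp')
                    traits {
                        headRegexFilter {
                            regex('develop|release-\\d+\\.\\d+\\.\\d+.*')
                        }
                    }
                }
            }
        }
    }
}
```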

johncblandii (Cloud Posse) avatar
johncblandii (Cloud Posse)

@johncblandii (Cloud Posse) has joined the channel

Karoline Pauls avatar
Karoline Pauls

I’ve seen all of them at this point

joshmyers avatar
joshmyers

heh, times are hard when you fall into the depths of 3 year old Jenkins issues

joshmyers avatar
joshmyers

eugh.

Karoline Pauls avatar
Karoline Pauls

I’ve just tried to reproduce the build storm by killing a pod. It didn’t work this time.

Now I updated some plugins, I’ll see if this will trigger it.

Karoline Pauls avatar
Karoline Pauls

annoyingly, it didn’t

Karoline Pauls avatar
Karoline Pauls

so i don’t know what caused it in the morning

joshmyers avatar
joshmyers
prevent building branches/PRs existing before the first branch indexing by agabrys · Pull Request #186 · jenkinsci/branch-api-plugin

Hello, we provide CI for many teams in a big company. Unfortunately, we have a huge problem with server loads after the spin up process, after restarts and jobs modification. Multibranch jobs execu…

Karoline Pauls avatar
Karoline Pauls

what’s most punishing is the “prison-grade” UX of credentials. It takes some 5 clicks to do the basic thing

Karoline Pauls avatar
Karoline Pauls

and the sad part is that it’s all backwards because no one really wants to build old branches

2020-05-05

Karoline Pauls avatar
Karoline Pauls

I still cannot believe that someone thought the default, unchangeable behaviour of multibranch pipelines in Jenkins should be to build all discovered branches on project creation.

joshmyers avatar
joshmyers

Clean up your branches, y’all!

joshmyers avatar
joshmyers

But yeah, stampede of builds is a pain, you can change that behaviour though…

Karoline Pauls avatar
Karoline Pauls

I never clean my branches. There is no reasonable reason for that

joshmyers avatar
joshmyers

Yes there is. delete merged branches, it keeps your git history much cleaner

Karoline Pauls avatar
Karoline Pauls

exceptions:

Bamboo UX shittiness we had in maybe 2014 - where in order to build a branch we had to scroll through thousands of branches in a dropdown.

Jenkins buildstorm shittiness now….

joshmyers avatar
joshmyers

So there are at least 2 reasons

Karoline Pauls avatar
Karoline Pauls

neither of them good

loren avatar
loren

makes me happy that both github and gitlab have options to auto-delete merged branches

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren we’ve been using this feature since it was introduced, but it seems to work 50/50 for me

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

has it been working for you?

loren avatar
loren

on github, what i’ve noticed is that it works perfectly if i merge the pr that i opened. if someone else merges it, then it does not auto-delete

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha! maybe that’s it.

loren avatar
loren

so we have required reviews and status checks, and after approved review, we usually defer to the teammate to merge

loren avatar
loren

of course, that’s a bit trickier with community prs where the contributor does not have permissions, but then that branch is on their fork at least

loren avatar
loren

oh, and this chrome extension is the total bomb! before github added the feature, this extension did it (and sooo much more)… https://github.com/sindresorhus/refined-github

sindresorhus/refined-github

Browser extension that simplifies the GitHub interface and adds useful features - sindresorhus/refined-github

Karoline Pauls avatar
Karoline Pauls

I never-ever wanted that and i’ve never met anyone who would.

joshmyers avatar
joshmyers

Yeah, Jenkins is a bit rough, but it is a known quantity

2020-05-01

btai avatar

@Zachary Loeber @Erik Osterman (Cloud Posse) thanks for rec on gomplate
