Any notes on how you guys at Cloud Posse or others that manage many modules as separate repos do your release process? It would be greatly appreciated.
I see the Makefile works as the runner for the unit tests, which is awesome, and I’m assuming that happens upon merging to master for module X. But what determines that module X is ready to be released as a new version? A global find-and-replace of the old version tag to the new one, run all unit & integration tests, and if all pass, push a release?
@IckesJ we talk about it here: https://github.com/cloudposse/docs/issues/335
what’s not clear is how we currently do versioning and why our strategy is unique. because we tag every single merge to master, our versioning strategy allows us to systematically and consistently in…
basically, every time you merge to master, you bump the version.
bugs = patch releases
every other release is a minor release.
major releases are milestone driven.
keep in mind that pre-1.0 has a special meaning in semver
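That policy can be sketched as a tiny script (a hedged illustration, not Cloud Posse’s actual tooling; the `fix:`/`bug:` commit-subject convention here is an assumption):

```shell
#!/bin/sh
# Hedged sketch of "tag every merge to master":
# bump <latest-tag> <merge-commit-subject>  ->  next tag
bump() {
  ver=${1#v}
  major=${ver%%.*}; rest=${ver#*.}
  minor=${rest%%.*}; patch=${rest#*.}
  case "$2" in
    fix:*|bug:*) patch=$((patch + 1)) ;;            # bugs = patch releases
    *)           minor=$((minor + 1)); patch=0 ;;   # every other merge = minor
  esac
  echo "v${major}.${minor}.${patch}"
}

# In CI you would feed it from git, e.g.:
#   bump "$(git describe --tags --abbrev=0)" "$(git log -1 --pretty=%s)"
bump "v0.12.3" "fix: handle empty tag list"   # -> v0.12.4
bump "v0.12.3" "add new variable"             # -> v0.13.0
```

Major bumps stay manual since they are milestone driven.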
@Julian Gindi might have some more thoughts on this.
@Julian Gindi has joined the channel
Welcome @Julian Gindi !
(he just did a big presentation at a local meetup on semver and when/how to bump versions)
IMO, the main purpose of semver is not to communicate the stability of the functionality. that’s almost impossible to guarantee. even a bug fix can be a breaking change for someone else who had a workaround for that bug.
I assume that every change could be breaking for someone.
therefore, IMO the purpose behind semver is to pin software so it only changes when you expect it to.
thus, I hate it when projects don’t cut a release for every merge to master.
and that’s why I prefer every merge to master to have a release, so I can gauge our distance from the latest release.
git sha’s suck for humans.
All of this I agree with, I do think you can add safety and a bit of structure to internal API’s and set rules on which services can talk to what, but it’s most powerful when used as a final “resolution” for software and being able to see how things change over time.
I have a tool to help with this process, but it’s almost identical to what Erik suggested
do you have a recording of your talk?
This is awesome info guys! Thanks & would love to see/hear the recording
Here is a repo that might “automate” the boring mechanical bits of incrementing semver https://github.com/JulianGindi/auto-semver
That looks like a great package, and seems more mature. I think the only issue I have with it is that it seems to require me to pass in the current version, while my script automatically determines that based on git tags. Not sure if this script is able to do that; not clear on first glance.
My approach was also aimed a bit more at simplicity, but I’m going to dig into this tool a bit. I’d rather use and support a tool that has a larger community if it accomplishes my personal needs.
> thus, I hate it when projects don’t cut a release for every merge to master.
100% @Erik Osterman (Cloud Posse)
• Recording of the whole presentation: https://vimeo.com/388711413 (including @Julian Gindi’s talk on SemVer)
• Julian’s deck and notes on SemVer: https://gindi.io/semver.html
• Industry Updates slides: https://slides.com/coreygale/west-la-devops-5-versioning#/4
Thanks @Corey Gale
No problem thanks again for all your support!
@Julian Gindi did you consider showing a github actions code snippet that can be used to automate the semver stuff with your tool?
So we have some bits and pieces, but I should 100% add something like that to the repo. My intention was to have it used with CI and it’s certainly how we use it.
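A minimal sketch of what that CI wiring could look like (this is a guess, not Cloud Posse’s or Julian’s actual workflow; it sidesteps the tool’s CLI and just does a naive minor bump with plain git):

```yaml
# hedged sketch: cut a tag on every merge to master
name: auto-release
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # full history so git describe can see existing tags
      - name: bump and push tag
        run: |
          latest=$(git describe --tags --abbrev=0 2>/dev/null || echo v0.0.0)
          ver=${latest#v}; major=${ver%%.*}; rest=${ver#*.}; minor=${rest%%.*}
          next="v${major}.$((minor + 1)).0"   # naive: always a minor bump
          git tag "$next"
          git push origin "$next"
```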
Our Library of GitHub Actions: https://github.com/cloudposse/actions
we have a lot of `auto-*` type actions
Perfect place to slot in an
This actions library is great. I just created something a couple days ago to do BulkRepoChanges: a pipeline in AzDO to do a find-and-replace in files across all our tf module repos and/or run a cmd like `pre-commit run -a`. It auto-creates the branch, PR, etc. I was also wondering how you guys managed hundreds of repos. Digging in and reading also brought dependabot to light; I hadn’t seen that before. Pretty cool as well.
Dev teams at 1,000+ companies like Pivotal, Instacart, and WeWork use Pull Reminders to stay on top of code reviews and ship faster.
oh ya that is nice
@Julian Gindi - nice presentation…I like the aviation angles…I have my instrument rating & loved every second of the learning process.
@IckesJ nice! Absolutely a goal of mine to finish my private and get instrument one day! Will have to talk more…
cloudposse/atlantis:latest supports 0.12.16 ? AFAIK it downloads any version, right ?
it can download TF versions specified in config
in our Docker container, we install atlantis and the TF version we need, and atlantis just uses it
is that by env variable ?
`DEFAULT_TERRAFORM_VERSION`, and then it goes in a loop to download the versions
my Dockerfile :
```dockerfile
FROM segment/chamber:2 AS chamber
FROM cloudposse/atlantis:latest

# install terraform binaries
ENV DEFAULT_TERRAFORM_VERSION=0.12.16

COPY --from=chamber /chamber /bin/chamber
COPY atlantis-repo-config.yaml /

ENTRYPOINT ["/bin/chamber", "exec", "ecs-atlantis-test", "--", "docker-entrypoint.sh", "server"]
```
I just copy the command from the fork that downloads terraform
I don’t think we ship an atlantis container
If we do, it is out of date or not maintained
We distribute Atlantis as an alpine package
That we install like the other tools
so you are saying that the image in docker hub is unmaintained and I should not use it ?
so to use the fork you guys have I will have to build from the repo/fork you guys have
is that a correct assumption ?
@Erik Osterman (Cloud Posse)
Yes, and let me explain
our fundamental position on this is that running “atlantis” from some kind of shared docker image is more or less useless
it doesn’t solve how custom providers get installed
it doesn’t solve how any other tools or dependencies will get installed
if someone depends on helm, helmfile, terragrunt etc…. it won’t do much good
that’s why we distribute the package instead so that it can be installed in a docker image you control
In our model, we use
cloudposse/geodesic as our base image (which is up to date)
and then add the extra tools we depend on.
I completely agree, makes sense
and you guys do not host those alpine packages I guess
if I want to use your fork
oh we do!
Add something like this to your dockerfile
cloudposse-atlantis is the name of our package from our fork
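The Dockerfile addition presumably looks something like this (the repository URL below is an illustrative placeholder, not the real one; check the cloudposse/packages repo for the actual install instructions):

```dockerfile
FROM alpine:3.11
# hypothetical repo URL, for illustration only
RUN echo "https://apk.example.com/alpine" >> /etc/apk/repositories \
 && apk add --no-cache cloudposse-atlantis
```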
Also, check out the other packages we have
dozens and dozens
thanks to @Zachary Loeber
this is awesome, well I’m glad I got it working with the old image at least to do my demo
now I will update
it was VERY hard to get my head around this
and….I might have found a bug
a master class in ECS/Atlantis/codebuild/modules/etc
I would like it to be simpler
I think there is a problem with the example….
https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/examples/complete/main.tf#L42 this guy will create a default TG
well what happens is that the ingress module creates a TG and the alb module too
but the alb module listener rule uses the alb-default TG
instead of the one created by https://github.com/cloudposse/terraform-aws-alb-ingress/blob/master/main.tf#L20
I think so….
I got pretty confused when following the dependencies
I think I did something wrong
did you figure it out?
I have not tried again yet; I’m going to do some cleanup and see, but I think this could be the ingress module. I created a PR long ago that was merged to allow passing a target group ARN in case the target groups were created by other means, so I think this could be a case where the alb module is creating the TG and the ingress module is too.
so maybe it’s a matter of passing the TG arn
but I need to test my theory
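If the theory pans out, the fix would presumably be along these lines (a hypothetical sketch: `target_group_arn` is the variable from the PR mentioned, but the output name on the alb module side is a guess):

```hcl
module "alb_ingress" {
  source = "git::https://github.com/cloudposse/terraform-aws-alb-ingress.git?ref=master"

  # reuse the TG the alb module already created instead of creating a second one
  target_group_arn = module.alb.default_target_group_arn  # output name is a guess

  # ...other inputs unchanged...
}
```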
@Zachary Loeber has joined the channel
Thoughts? My team is finally picking up on my “dadsgarage” idea. This is my writeup so far.
@Erik Osterman (Cloud Posse)
How would you manage the versioning nightmare/hell ?
Scores of Terraform versions, Java versions … etc….
@Santiago Campuzano By having an ARG variable for the version of each tool in the dockerfile. That way, a team can take it and pin whichever versions they need for their own purposes
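A hedged sketch of that pattern (tool list, defaults, and image names are illustrative):

```dockerfile
FROM alpine:3.11
ARG TERRAFORM_VERSION=0.12.16
RUN apk add --no-cache curl unzip \
 && curl -sSLo /tmp/tf.zip "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
 && unzip /tmp/tf.zip -d /usr/local/bin \
 && rm /tmp/tf.zip
# each team pins its own versions at build time, e.g.:
#   docker build --build-arg TERRAFORM_VERSION=0.12.9 -t myteam/toolbox .
```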
I mean… having a DevOps Swiss Army Knife Container is a terrific idea… but when there’s a super complex environment like my company’s, it can be trickier
@roth.andy Is the container gonna download and install the tools on the fly depending on those ARGs ?
@Santiago Campuzano - i think the pattern is fine; you just need to define the boundaries. you could have team or product based swiss army knives - if you can get away with one for the whole company great.
No, your team would build the container with the versions you want specified, and push the image to a docker registry
having a bunch of smaller tooling containers is usually going to end up a bigger management overhead
@Chris Fowles We have a Docker Container like that for Terraform/Terragrunt …
And all the different versions of Terraform Providers.. that have worked well so far
@roth.andy It would be nice if you share with us any work/advances on your idea
I’ll be glad to give you feedback
becomes really powerful when you start using the same container for things like this https://code.visualstudio.com/docs/remote/containers
Yep… I’ve seen that pattern.. an IDE with all its dependencies inside a container….
That one is great…. Onboarding a new Developer in a matter of mins…
At least the Tooling/Env part…
yeh it’s pretty sweet
I was tasked with this at my last company. The product had a super complex architecture. When I joined I had just 3 years of experience. Ramp up time for a new developer was typically 6-12 months, no joke. When somebody tried to dive in it they had to ask me or a senior dev what the hell was going on. I’m not saying it’s a bad idea at all. Just that it can be. If I was in charge I’d say to force each team member to do a part of it so that you’re not in charge of the whole thing.
I want to understand the entire system and not just small pieces. If you have team members that only care about certain parts then I think you’re asking for trouble. Let us know how it goes!
@Santiago Campuzano I started an OSS project a while back. @Pierre Humberdroz has used it and contributed to it.
@MattyB - i think it’s important to acknowledge that the more junior a member is the less instinctive understanding they have based on experience and so it’s harder for them to understand the whole system. there’s no excuse for senior members of a team to not have a big picture view however
Fair point! There are quite a few variables that could cause it to succeed vs fail. You know your team better than I do. I’d like to think if I was in charge of doing that at my current job I’d be in a better position to help others help themselves
@roth.andy I think that captures it well. In terms of managing multiple versions of software, we use the “alternatives” system for that, as well as build our own packages.
@Erik Osterman (Cloud Posse) can you point me at an example? I’m familiar with the “java-alternatives” package. Is that what you are talking about? I didnt realize it could be used for other arbitrary tools
How would you manage the versioning nightmare/hell ?
• no silver bullet, but it’s not a new problem either
• reduce the dimensions for which you allow changes to vary
• pin docker images (devops) to releases for stability
• use a package manager (OSes have been doing this for years), with the alternatives system
• use PATH strategically (e.g.
Alternatives can be used for any binary @roth.andy
Python, Golang, Java, etc….
here’s where we have our packages. our helm3 packages are examples that have alternatives hooked up. our alpine distro is a little bit unconventional as we’ve templatized the
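For anyone unfamiliar, the alternatives mechanism works roughly like this (paths and priorities below are illustrative, not Cloud Posse’s actual package hooks; `update-alternatives` needs root):

```sh
# install both versions under versioned names, then register them:
update-alternatives --install /usr/local/bin/helm helm /usr/local/bin/helm-2.16.1 20
update-alternatives --install /usr/local/bin/helm helm /usr/local/bin/helm-3.0.2 30
# highest priority (helm-3.0.2) becomes the default `helm`;
# flip the generic name to the other version when a project needs it:
update-alternatives --set helm /usr/local/bin/helm-2.16.1
```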
Depending on how you work, I would add that simple smoke tests for some components are needed. Like: does it still compile a simple Java project, for example. Just to make sure that everything is kinda working.
In regards to managing the versions of external dependencies this has to be done anyways if you have different containers.
At a previous company our build container had an entrypoint script which would make a request to a file to check if minimal version requirements were satisfied. We did this to prevent exploitable software from running inside our critical toolchain; there was a semver checker and a blacklist mode.
We learned this after some internal tool got compromised by an ex-engineer. (long story)
regarding that this is something to keep in mind and might not even be important at all right now.
back then I wished it just told me “hey, upgrade your build container to version X to get the latest features of Y”.. it was written in Go and super simple to use.
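The gate itself is simple; here is a hedged sketch of the comparison step (the real tool fetched requirements over HTTP; this just compares locally and assumes GNU `sort -V`):

```shell
#!/bin/sh
# version_ok <current> <minimum>: succeeds when current >= minimum
version_ok() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

current="1.3.9"; minimum="1.4.0"
if version_ok "$current" "$minimum"; then
  echo "build container $current is recent enough"
else
  echo "please upgrade your build container to $minimum or newer" >&2
fi
```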
What are some of the best tools to automate application deployments to Kubernetes? I was looking at Spinnaker and Jenkins X. Has to be cloud-provider agnostic
Neil…. actually Spinnaker and Jenkins X are a good marriage. Spinnaker is a CD tool, whereas Jenkins X is a powerful and stable CI tool…..
I’ve been researching that couple for a while in my company …
Jenkins-x is heavily opinionated though
Why ? @Zachary Loeber ?
in a good way but still may be hard to tear gitflow from some dev teams
I’ve been looking at jx pretty hard lately and I think I’m onboard with their flow for development but for some teams and deployments its going to be a hard sell I think.
jx seems to be the manifestation of the Accelerate book
it’s also in a bit of flux from what I see. documentation for deployment into clusters is already out of date so most tutorials from just last year are deprecated already.
it’s also cloud native, so if you have anything hosted on prem it becomes less magical (at least from what I’ve seen)
We’ve been PoCing a lot of tools…
The last one was CodeFresh…. .
just my 2 cents on jx
We are very disappointed
really? I’ve yet to dig into codefresh
lots of people swear by it though
I was looking into maybe GoCD or Tekton for pipelines (jx wraps around Tekton)
Hmmmm don’t know… our experience was not that good… We were expecting too much I think …
isn’t drone pretty good?
Drone is pretty stable…. it has a good learning curve for us DevOps
It’s just that we are hitting some limits in terms of GitOps and K8S
sorry, I’m looking at doing releases via argoCD (or flux or similar) for kube deployments and still vetting out the CI platform
Question about Tekton
Is it a fully fledged CI/CD tool for K8S apps ?
It is a pipeline tool that I believe is still in alpha or beta but looks really promising to me. Only issue I see is having to manage the backend cluster/resources it uses
jx orchestrates Tekton (at least that’s what I see it doing)
Hmmmm ok… we were exploring FluxCD as well… but we found it kind of overcomplicated
But I really really like the idea of platform independent declarative pipelines
I was looking at flux initially but shifted over to ArgoCD. I think they are combining efforts though
Nice !! Gonna explore ArgoCD then ….
worth a peek at least to answer your original question though, I don’t know that there is a best tool ATM. Seems to be an emerging space
I’ll say I’m not very fond of devployment though (my just now created word for when developers directly deploy to kube clusters)
jx does that only a little bit with preview deployments
I briefly looked at Codefresh but I couldn’t get it to connect to our Azure account for some reason. @Zachary Loeber You’re right that Jenkins X is very opinionated and in flux. Documentation is outdated already. Also, a lot of these tools rely on Helm, which we recently upgraded to v3, and they only support Helm v2. I’m going to check out drone and ArgoCD
All of our builds are being done with Github Actions, so i really just need a way to deploy now
What’s stopping you from deploying with Github Actions as well?
I just haven’t figured out how to do it with a manual trigger. We aren’t confident to deploy to production without a manual approval
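One option, assuming GitHub Actions’ manual `workflow_dispatch` trigger fits your setup (sketch only; the deploy step itself is a placeholder):

```yaml
# hedged sketch: a deploy workflow that only runs when triggered by hand
name: deploy-production
on:
  workflow_dispatch:        # adds a "Run workflow" button in the Actions tab
    inputs:
      ref:
        description: git ref to deploy
        required: true
        default: master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.ref }}
      # - run: ./deploy.sh   # your actual deploy step goes here
```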
I’m really liking Argo-cd. thanks for recommending it
If it doesn’t have to be self-hosted, gitlab ci/cd supports manual triggers.
I also couldn’t get behind not having at least the option to create a manual trigger.
Currently in my org we’re using Jenkins from the stable/jenkins helm chart
Implementing our own GitOps with https://github.com/FairwindsOps/rok8s-scripts
It’s still in flux though, I’m working on “dark launch”
Right now, we do devployments with terraform
Fairwinds makes some excellent tools, not sure how I missed this one, thanks for sharing
@scorebot help keep tabs!
@scorebot has joined the channel
Thanks for adding me! Emojis used in this channel are now worth points.
Wondering what I can do? try @scorebot help
Hi, what do you use for application deployment to ECS? we use .NET core docker containers