#dynolocker (2019-02)
https://github.com/joshmyers/dynolocker
Archive: https://archive.sweetops.com/dynolocker/
2019-02-06
@Erik Osterman (Cloud Posse) has joined the channel
set the channel description: https://github.com/joshmyers/dynolocker
@joshmyers has joined the channel
@mumoshu has joined the channel
@joshmyers was just talking with @mumoshu about your tool
i’ve been referring a bunch of people over to dynolocker
there’s no good solution like this that does distributed locking.
@joshmyers hi!
@mumoshu has an interesting (common) use-case about renewing locks
@Andriy Knysh (Cloud Posse) has joined the channel
@joshmyers is based in GMT (London), so he’ll be online tomorrow
@michal.matyjek has joined the channel
@michal.matyjek are you guys doing any distributed locking?
@dustinvb has joined the channel
I don’t remember
I was also sharing dynolocker with @dustinvb from codefresh
ah i remembered that i had created a very similar command myself 3 years ago, but in ruby https://github.com/crowdworks/joumae-ruby#cli
i definitely like a command written in golang and am eager to submit a PR to dynolocker if that makes sense
dynolocker run --lock_name mylock --renew_interval 30s -- terraform apply
also wire it up with a codefresh pipeline to build binary releases
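To illustrate the renew-while-running idea, here is a minimal Go sketch of a wrapper like the proposed command, built on a DynamoDB conditional write plus a background heartbeat. This is purely an illustration, not dynolocker's actual code or CLI; the table name, the ExpiresAt attribute, and the intervals below are assumptions.

```go
// Sketch of a renewing DynamoDB lock wrapper, in the spirit of the proposed
// `dynolocker run --lock_name mylock --renew_interval 30s -- cmd`.
// Table name, attribute names, and expiry scheme are assumptions, not
// dynolocker's actual implementation.
package main

import (
	"log"
	"os"
	"os/exec"
	"strconv"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

const (
	table    = "locks"  // assumed table with LockID as the hash key
	lockName = "mylock"
	ttl      = 90 * time.Second // lock considered stale after this
	renew    = 30 * time.Second // heartbeat interval
)

// expiry returns the lock's next expiration time as a DynamoDB number.
func expiry() *dynamodb.AttributeValue {
	return &dynamodb.AttributeValue{
		N: aws.String(strconv.FormatInt(time.Now().Add(ttl).Unix(), 10)),
	}
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: lockwrap <command> [args...]")
	}
	db := dynamodb.New(session.Must(session.NewSession()))

	// Acquire: the conditional write fails if another holder's lock is still fresh.
	_, err := db.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String(table),
		Item: map[string]*dynamodb.AttributeValue{
			"LockID":    {S: aws.String(lockName)},
			"ExpiresAt": expiry(),
		},
		ConditionExpression: aws.String("attribute_not_exists(LockID) OR ExpiresAt < :now"),
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":now": {N: aws.String(strconv.FormatInt(time.Now().Unix(), 10))},
		},
	})
	if err != nil {
		log.Fatalf("could not acquire lock: %v", err)
	}

	// Renew: push the expiry forward periodically while the command runs.
	stop := make(chan struct{})
	go func() {
		t := time.NewTicker(renew)
		defer t.Stop()
		for {
			select {
			case <-stop:
				return
			case <-t.C:
				// Errors are ignored in this sketch; a real tool should handle them.
				db.UpdateItem(&dynamodb.UpdateItemInput{
					TableName:                 aws.String(table),
					Key:                       map[string]*dynamodb.AttributeValue{"LockID": {S: aws.String(lockName)}},
					UpdateExpression:          aws.String("SET ExpiresAt = :exp"),
					ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{":exp": expiry()},
				})
			}
		}
	}()

	// Run the wrapped command, e.g. terraform apply.
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	runErr := cmd.Run()

	// Release the lock regardless of the command's outcome.
	close(stop)
	db.DeleteItem(&dynamodb.DeleteItemInput{
		TableName: aws.String(table),
		Key:       map[string]*dynamodb.AttributeValue{"LockID": {S: aws.String(lockName)}},
	})
	if runErr != nil {
		os.Exit(1)
	}
}
```

The conditional write is what makes the lock safe: only one writer can create the item while a fresh ExpiresAt exists, and the heartbeat goroutine keeps pushing the expiry forward so a long terraform apply doesn't lose the lock mid-run.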
so the use case is…?
see the snippet above
basically, you need to lock something which could be used across pipelines or even within the same pipeline
for example, if you’re terraforming, you want to lock a project to a pull request
(e.g. like in atlantis)
also, i’ve had the problem where we merge 2 PRs too close to each other
and we end up doing concurrent helm deployments of the same app
we should really be locking a helm release before performing a helm deployment
those are 2 very strong use-cases for locking
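For the helm use-case, a pipeline step could serialize deployments of the same release by wrapping the upgrade with the proposed invocation. The flag names are taken from the proposal above and may not match dynolocker's current CLI; RELEASE_NAME is a placeholder:

dynolocker run --lock_name "helm/${RELEASE_NAME}" --renew_interval 30s -- helm upgrade --install --wait "${RELEASE_NAME}" ./chart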
set the channel topic: https://github.com/joshmyers/dynolocker
probably it is unlikely to happen when you run the pipeline on a new commit to e.g. master
(happened to me yesterday… and we have wait: true, so the second release was rolling out, destroying pods from the first release)
the first release was then blocked by the second release completing
but i have encountered the same situation as erik’s in my terraform pipeline powered by github flow (the latest one, apply/deploy before merging for final testing)
i think for terraform it’s scarier
are you using codefresh with terraform now?
not yet, but i’m seriously considering moving to codefresh for anything related to github flow
2019-02-07
Morning
Sure, PRs always welcome
it was a hacky tool whipped up due to a race condition when bootstrapping Vault nodes (a few years back now)
locksmithctl by CoreOS may also be interesting
That one requires etcd, no?
Yeah it was based off etcd
That might not work well if running outside of k8s
dynolocker might not work well outside of AWS
well, def won’t
Haha well, kinda depends on what “outside” means ;-)
But yea, I get your point. For our purposes, AWS is an acceptable requirement.
(And already use it for terraform locking)
Yeah, I was going to try and do more with it and actually have it do sane things with process and signals etc but never got around to it
huh, I like this Variant
2019-02-08
@richwine has joined the channel
2019-02-14
set the channel description: https://github.com/joshmyers/dynolocker Archive: https://archive.sweetops.com/dynolocker/