#general (2020-02)
General conversations related to DevOps/Automation
General Discussions
2020-02-03
2nd interview for my first DevOps type role today! Anyone want to share tips?
#jobs had some good discussion on it recently. i’m not sure how much is relevant to a first timer though
Ya some good interview questions were posted in #jobs. Honestly, it comes down to the interviewer and their style of interviewing.
Hey everyone, give a warm welcome to our newest members!
- @wattiez.morgan
- @Viktors D
- @Prasanna Pawar
- @Almas Hilman Muhtadi
- @Dan Griffiths
- @Sharanya reddy pagidi
Good to have you here =)
thx for the hard work on that elasticsearch module, that's really nice.
Thanks @gyoza!
2020-02-04
Hey everyone, give a warm welcome to our newest members!
- @cia
- @Josh Hudson
- @tomkinson
- @Maciej Kozlowski
- @Jason Carter
Good to have you here =)
thanks!
Thanks
2020-02-06
Hey everyone, give a warm welcome to our newest members!
- @Mike Martin
- @Zack Hewison
- @Miranda Pearson
- @Silke Van den Broeck
- @hugomelo97
Good to have you here =)
Thanks for the shout out! Really enjoyed the zoom session yesterday - will be back next week!
Thanks for the welcome! Hello
Thanks @Mike Martin! Ya, yesterday’s #office-hours was a good one. See you next week!
Hey @Zack Hewison!
2020-02-07
Hey everyone, give a warm welcome to our newest members!
- @Norbert Fenk
- @Dhrumil Patel
Good to have you here =)
2020-02-08
Hey everyone, give a warm welcome to our newest members!
- @rustemabd
- @Andrew Cameron-Douglas
- @Tom Howarth
Good to have you here =)
Thank you! Happy to join!
Thanks for letting me in. I am still on my learning journey. so please be kind :)
@Tom Howarth all skill levels welcome!
2020-02-09
Hey everyone, give a warm welcome to our newest members!
- @Aaron Lennon
Good to have you here =)
2020-02-10
Hey everyone, give a warm welcome to our newest members!
- @Stoor
Good to have you here =)
2020-02-11
Hey everyone, give a warm welcome to our newest members!
- @Geoff Weinhold
- @julius.blank
- @Conti Mattia
- @Meg Yahl
Good to have you here =)
Dang, the daily new member list is getting larger and larger. The cult of devops is spreading….
come closer to the light!
Question on secrets for things such as DB passwords etc. Does anyone keep the originals in a Git repo (encrypted of course)? We do and I’m questioning myself on whether we should.
Pro: We have the master copy offline from the K8S cluster and in the event of a total failure we still have access to the original passwords.
Con: Do we really need to keep yet another copy of the passwords? They are already in Azure Keyvault/Google Secret Manager (where apps pull them from)
Anybody have any good reasons to go either way?
@Patrick M. Slattery I mean, it sounds like you’re facing the burden of keeping passwords in more than one place. What about a centralized secrets management solution such as HashiCorp Vault?
Pros:
• Secrets are in one place
• Failure scenario: Vault’s storage can be HA, for example with Consul as the data store. Even in total failure, you will be okay if you have enabled Consul backups to AWS S3.
• Cloud agnostic - i.e. you’re not relying on AWS KMS, GSM, etc
• Dynamic Secrets Engines (you can give admin credentials to Vault and it will issue and automatically revoke temporary DB credentials - see the sketch after this list)
• A k8s mutating webhook controller which essentially acts as an operator to allow injection of secrets into pods https://www.vaultproject.io/docs/platform/k8s/injector/index.html
Cons:
• Expensive to run HashiCorp’s reference architecture of 5 Consul nodes + 3 Vault nodes (https://learn.hashicorp.com/vault/operations/ops-reference-architecture) - Vault integrated Raft storage is still in beta until 1.4.0
• You’re running a third party application on top of everything you’re already running
• Re: Failure scenario and automated backups - for non-Enterprise Consul you need to manage automated backups yourself
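For concreteness, the dynamic secrets engine mentioned in the pros works roughly like this from the CLI. A hedged sketch - the plugin, connection details, role name, and SQL are illustrative, not anyone's actual config:
```bash
# Enable the database secrets engine
vault secrets enable database

# Give Vault admin credentials it can use to mint short-lived DB users
vault write database/config/app-postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.example.internal:5432/app" \
    allowed_roles="app-role" \
    username="vault-admin" \
    password="example-password"

# Define what Vault runs to create a temporary user, and how long it lives
vault write database/roles/app-role \
    db_name=app-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl="1h" \
    max_ttl="24h"

# Apps read fresh credentials here; Vault revokes them when the TTL expires
vault read database/creds/app-role
```
The k8s injector mentioned above can then deliver those short-lived creds into pods without the app needing Vault awareness.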
Yeah, I’ve heard a couple of Vault horror stories that keep me away from it. We ourselves initially used Consul for state management in our app and had several disasters with it. It would be hard to persuade anyone here to run Consul again.
That said the pros are all very much what I really want…
You can use AWS Secrets Manager (if on AWS), but it becomes really expensive if there are a large number of secrets to be stored (we are also in the same boat right now on which secrets manager to use)
We are currently using Azure Keyvault / Google Secrets Manager (we are moving away from Azure though). Price for either is not anything crazy, but then again we only have a few dozen secrets in each instance at most
and as a result it's still in the red zone for twilighting, as per Google's normal operating procedures
Google Secrets Manager is nice but is pretty light on features being so new (Still in beta)
as long as the encryption is well protected, i (depending on the workload) don't see a massive issue with using git as the source of truth, as long as the encrypted copies really are that. things like needing to rotate at short intervals or needing to audit retrieval of secrets would change that recommendation
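For reference, a common way to implement that encrypted-originals-in-git pattern is Mozilla's sops. A hedged sketch - the KMS key ARN and file names are illustrative:
```bash
# Encrypt a secrets file against a KMS key before committing it
sops --encrypt --kms arn:aws:kms:us-east-1:111111111111:key/example-key-id \
    secrets.yaml > secrets.enc.yaml
git add secrets.enc.yaml
git commit -m "Add encrypted secrets"

# Decrypt locally when needed (requires access to the KMS key)
sops --decrypt secrets.enc.yaml
```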
vault and consul are great tools, but they’re not without operational overheads - they need to be kept running and maintained
if you can’t commit to the overhead required to run them you’re introducing a weakness into the system rather than a strength as they’ll quickly end up in the critical path
This was very concisely said. I try to formulate such a sentence when someone throws Kubernetes into the conversation despite never having operated it.
2020-02-12
2020-02-13
what are everyone's favorite videos for learning terraform, helm, kubernetes, etc? please share with me!
I don’t use videos for learning much but thus far I think that Kubernetes In Action is the current bible of kubernetes.
Same, videos tend to be too slow-paced for me, and I’m a quick reader. I’d rather read, pause where I want, test, play, search for answers…
A coworker of mine has enjoyed the LinuxAcademy content. I can't vouch for it, but it's helped him learn stuff.
Bootstrap Kubernetes the hard way on Google Cloud Platform. No scripts. - kelseyhightower/kubernetes-the-hard-way
Also, courses on udemy from Edward Viaene, for Kubernetes, terraform and Prometheus are good ones I can think of
Hey everyone, give a warm welcome to our newest members!
- @Siraj Rauff
- @ericyang879
- @sekhar modu
- @Vidhya Vijayakumar
Good to have you here =)
Helpful feedback @Zachary Loeber @Alex Siegman
Let me rephrase this to: what learning materials have y'all found the most helpful?
I am trying to curate some content to help newcomers to grok the concepts and give them a place to get started
I am personally searching for content and am overwhelmed with how bad most of the stuff is (well, to be kind - just not the way I would explain it), so it's not surprising how hard it is for a newcomer to find good learning materials
For people brand new to kubernetes, I've found "Kubernetes Up and Running" pretty invaluable for learning the basic concepts. It's available for free from a few places - I haven't found a link that doesn't ask for your info in my quick google, but https://azure.microsoft.com/en-us/resources/kubernetes-up-and-running/
Or you can buy it from o’reilly / amazon
I think it helps to be focused on what you’re trying to learn, and more importantly, for what purpose. Example being, Up and Running is a great book for anyone who has need to understand basic kubernetes concepts. But if you’re a developer who has to make the program and stuff it in to a container, will that necessarily help you? I find it harder to find learning materials at the level and “depth” around a given concept than materials for a tool or concept in general
that’s a good point.
Another great example is python books. There’s a million “learn python” books, but all of them cover the same boring stuff. What about a “learn python for someone who already knows python” book - where it goes more in to style, architecture choices for python programs, good designs for whatever, etc. I don’t need to be taught python, I want to learn/explore idiomatic approaches to problems in python, learn to use the language elegantly and efficiently, etc.
@Erik Osterman (Cloud Posse) can i be added to the terraform channel, kicked myself out by mistake
Hrmmm anyone can join any channel :-)
Even if you leave one…
good to know, thanks
2020-02-14
Hey everyone, give a warm welcome to our newest members!
- @Sudesh Lalmal Pathirana
- @mattia.bertorello
- @Olivier
Good to have you here =)
2020-02-15
Hey everyone, give a warm welcome to our newest members!
- @Kevin Hetman
- @krismulica
- @acs0508
- @Örjan Sjöholm
Good to have you here =)
2020-02-16
Hey everyone, give a warm welcome to our newest members!
- @21042jim
- @Christoph Gerkens
Good to have you here =)
2020-02-17
Hey everyone, give a warm welcome to our newest members!
- @Frankie.li
- @CC
- @Pawel
- @Alex Kiss
Good to have you here =)
Thanks!
good to be here
Is this the right way to version pin helm to v2 in build-harness .33?
apk —update —no-cache helm2@cloudposse && update-alternatives —set helm /usr/share/helm/2/bin/helm2
?
probably a copy & paste problem, but — is wrong; should be apk add --update --no-cache helm2@cloudposse, etc.
After this helm version -c correctly shows version 2.16.1
Couldn’t figure out if there was a make target that would allow packages to overwrite helm symlink to the specific one I want
@vincent.drl there are a few different concepts that are getting mixed together
there's the build-harness docker image that has packages installed
then there are the install targets. the install targets are designed to work across platforms, thus do not use apk, which is specific to alpine linux
Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness
When using this target, you would do something like make packages/install/helm HELM_VERSION=....
Hmm
I'm adding a few more binaries to build-harness from a private GitHub repo, which was working fine while on .27, but helm 2.14 didn't work for us anymore
I was basing off of the build-harness image and baking a new one after using the private GitHub release target
aha
gotcha
Now I just want to make sure helm2 is used
As “helm”
Bumping to .33 defaults to helm3 :(
ok, so yes, you’re on the right track. You’ll probably want to uninstall helm@cloudposse
and then use helm2@cloudposse
Ok, makes sense because helm3 is fully unlinked so I might as well get rid of it properly and I don’t have to muck around with update-alternatives directly then
Thanks!
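Putting those steps together, the swap would look something like this. A sketch - the package names follow the cloudposse apk repository convention from the thread above, and the exact symlink behavior may vary:
```bash
apk del helm                                  # drop the default helm (v3) package
apk add --update --no-cache helm2@cloudposse  # install the pinned helm 2 package
helm version -c                               # should now report a 2.x client
```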
ya, and when you start moving to helm3, we have a helm3@cloudposse
package so you can keep both installed
Cool cool - we're deprecating helm though so probably won't do that
what are you moving towards?
not a text renderer for structured documents
i can understand for your own apps, but if you depend on any open source charts, you’re undertaking a monumental effort and an exercise in how to manage technical debt at a colossal scale
How many charts have moved to helm3?
i would never want to manage prometheus, grafana, kiam, cert-manager and the 2 dozen other charts
all helm2 charts are compatible with helm3
Hmm maybe I don’t have to version pin helm at all then
Didn’t test it, haven’t looked into helm3
but you do need to "upgrade" helm2 releases to "helm3" releases
or uninstall (with helm2) / reinstall (with helm3)
Overall I have had mostly headaches from the upstream helm-charts repo
Oh we don’t use tiller
Only render and send manifests to kube through ArgoCD
aha
So probably don’t need to do anything then
ok, so the “package” concept of helm is not something you’re leveraging
You mean the releases through config maps?
if we think about helm being like rpm, then what you're doing is like: rpm2cpio myrpmfile.rpm | cpio -idmv
which is fine, but then it's like managing a linux distro without any package manager
i don’t think of helm as a template system
i think of it as a package system
Helm shouldn’t try to play CD
i guess it’s a perspective on where the “CD” starts and ends.
Which depends on how much control your other tools need over traffic routing and automated rollout
that’s a fair point.
Why shouldn’t helm do CD? Arguably that is its entire purpose.
For most deployments it works perfectly fine. If you are doing some nuanced rollout then of course you will need to plan out a strategy first and consider an appropriate tool to suit your requirements.
To me, helm primarily solved the issue of managing related Kubernetes resources which are tightly coupled and have few variable configurations across different use cases
True that for the examples listed above (externalDNS / cluster autoscaler / monitoring agents / …) the deployment strategies are simple enough to let Helm handle them. helm 2 was horrible security- and RBAC-wise due to the tiller component though.
once you require more fine-grained patching of custom paths to handle secret management as per company requirements (vault service accounts / sealed secrets / chamber entry points) … helm charts just become overly generic …
We’re currently doing the last mile customisation (as some ppl call it) with some custom patching on TOP of the library of open source “packages” which often contain operational experience of the many contributors to it
But I've also found that many helm charts were lift-and-shifted into kube early on and completely miss the latest evolution of the tools they package, which have since adopted the Kube API themselves to handle a lot of the distributed-complexity issues - so sometimes I strip 90% of the stable chart (and I do create issues and keep my charts public for ppl who search for the same terms and come across my comments)
I disliked tiller a whole lot as well; it made pipelines stupid hard to work with and maintain any level of security. But that's really no longer relevant. I'm not wholly advocating helm for everything, but I've come to accept it as a tool in my belt for certain tasks.
given the above issues:
- outdated charts with slow adoption of consumer patches
- last mile customisations highly dependent on individual use cases, resulting in either overly generic charts or inflexible charts
- poor handling of modern CD practices (it’s designed for VERY simple upgrade strategies)
I've found that almost every public chart needs some form of manipulation to suit my clients' needs as well. Pulling them down into localized chart repos is all too common, but it's also technically part of the best practice of eliminating outside dependencies anyway.
See also istio deprecating its official helm charts; it's basically a consensus in the community
Pulling down with poorly documented patching leaves you in a much bigger mess of managing upgrades
We still heavily leverage helm, albeit some of my colleagues do it very begrudgingly
Not true at all. Pulling random charts out in the wild into production clusters leaves you in a far more likely position to create technical debt though.
And we are trying hardest to get rid of it (mostly for internal deployments for now)
in a modern CICD pipeline, how do you share common elements across dozens of projects effectively?
Istio also doesn’t define the industry standards
though it is really exciting and fun stuff, I see a very small portion of businesses adopting it
sorry, I’m playing devil’s advocate here because I’ve had a few people categorically make some of the same claims you have based purely on prior distaste for the tool.
I’m one of those people that begrudgingly use it and see its value in the right places.
adopting what? Service meshes?
I also don’t push secrets through it at all
I do pull models for that on my clusters so I flat out work around some of the stickier elements of using it.
I'm also not categorically against it, but I've adopted severely abused helm charts with undocumented patches from upstream
helm, sry
And my colleagues who inherited this from the person I replaced hate it a lot more than me
The fact so many projects have been packaged with Helm shouldn’t lock us into it though
I think fundamentally we kind of agree :)
right, so for now, if you have to package a common set of deployment code for shared use across multiple development efforts what well thought out tools are there to use?
kustomize looks pretty cool. That’s one I should look into?
for our internal deployments we integrated templates into a binary to avoid abuse
ugh… must be a better way
Now that kustomize has been pushed into kubectl … it's indeed something that seems to solve our patching issue
Myeah, I'm not 100% with it either, more so because I really want structure-aware templating that doesn't fall over a missing space
And simply catches syntax errors way earlier
I think jsonnet + kustomize
But haven’t played with enough myself
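For anyone else eyeing it, the kustomize model is a base of plain manifests plus per-environment overlays that patch it. A minimal sketch - the directory layout and patch names are illustrative:
```bash
# base/ holds plain manifests plus a base/kustomization.yaml listing them
# (deployment.yaml, service.yaml, ...); overlays patch the base per env

cat <<'EOF' > overlays/prod/kustomization.yaml
resources:
- ../../base
patchesStrategicMerge:
- replica-patch.yaml   # e.g. bump replicas for prod only
EOF

# Build the patched manifests, or apply directly (kubectl >= 1.14)
kustomize build overlays/prod
kubectl apply -k overlays/prod
```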
TF .12 HCL is quite interesting, just migrated all resources not managed by kube to that - I was very sceptical at first
I never tried TF to manage kube deploys … I don't think the TF model fits the kube control-loop model?
I do think my friend's company moved to TF to manage kube deployments though
If you always know your cluster state then tf is ideal for it I’d think.
I’m still spitting out base clusters and configuring them after the fact via stacks of helmfiles for baseline config
I used to pre-create namespaces and shove secrets into the cluster via TF though. I’m not against using it for kube deployments but then I have to connect developer pipelines to terraform which kind of scares me (probably more than it should)
I’m now leaning heavily into the deployment/environment git repo approach that will allow me to leverage gitops if I like (or even just stand-alone pipeline as code in the repo itself to do push deployments)
My direct supervisor kind of designed an internal replacement for helmfile+helm with that binary I mentioned
- it can fetch and render helm charts
- it overlays patches in a crude way (not as fine grained as kustomize)
- it takes a map like helmfile to do all of the above
- it pushes to a sync branch which ArgoCD tracks
- it does basic cluster bootstrap and sets up argoCD tracking for all other deploy repos (team specific) related to the cluster
And it has those compiled in templates for non-helm internal microservices which have defaults and some globally + locally overwritable values
your super sounds like a person I could get along with.
how many repos/deployments are you supporting?
(for what it's worth, I'm eyeing jx for some of this, for opinionated deployment into kube - they have some pretty sound ideas around teams/projects/environments)
I haven't heard about jx. I have to say the design of the tool is mostly by my supervisor Chris Kolenko - I helped implement the functionality. I know he's dying to talk publicly about it but we haven't had the time yet to finalise it. We're currently handling about 3 brandings of the same set of microservices across dev / test / staging / prod - so 12 repos of that, plus 2 cluster repos per env (for ephemeral cluster upgrades, we just stand up a new-version cluster, bootstrap it and switch over). We're midway through this migration while also adding new business requirements all the time… so it's very in flux; cluster repos retire when clusters do
It's not so big and already quite complex. We come from a monorepo for deployments that used templated helmfiles templating values for helm templates - a templating nightmare rendered by argoCD with zero insight into obscure error messages
helm templates templating more helm templates?
Sorry for digging (and feel free to not answer) but is it just you two handling all that?
I work mostly in a bubble as of late so hearing how other devops teams are operating is like my drug or something.
It grew from a team of DevOps and Dev (outsourced) to an in-house team being built; I was 3rd to join 6 months ago and the team is now 5 "devops"
It's not our job title, just trying to keep things simple
I’m currently supporting a mostly offshored team of developers for approximately 45 repos across 4 teams and have had anywhere from 3 to 8 environments (each with their own fully loaded kube clusters and pipelines) at once.
I converted all their pipelines to pipeline as code early on or there’s no way I’d be able to take on such load.
@Zachary Loeber are you also using flux?
(i forget - you told me before)
2020-02-18
Hey everyone, give a warm welcome to our newest members!
- @ngan nguyen
- @vincent.drl
- @Rico
- @Ievgenii Shepeliuk
Good to have you here =)
I whipped up a list of practical recommendations for “12 factor” apps on Kubernetes. Appreciate any feedback. Did I miss anything?
I'm curious about this one: "Use DNS-based service discovery (instead of IPs or depend on consul); use short-dns names with search domains rather than FQHN."
firstly I think you meant FQDN
but aside from the typo and the fact that I currently follow this practice I’m wondering about cross-cluster services and such.
consul is being considered for a springboot microservice deployment (they currently use Zuul or something like that) and I’m under the impression that using Consul may be able to help with high availability and service discovery across regions and such
but that’s more of a hunch than that I’ve actually deployed such a construct
Maybe a note that configuration parameters should avoid environment 'stamping' would be a good addition (though there is likely a better way to word that)
this is kind of like the URL or URI ambiguity. IMO, FQHN is "fully qualified hostname" and "hostnames" are for hosts, vs FQDN which is a "fully qualified domain name" - the domain for the hosts. basically, it's a zone, like us-west-2.prod.example.co, whereby the FQHN might be api.us-west-2.prod.example.co
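The "short names with search domains" recommendation is easiest to see from inside a Kubernetes pod. A sketch - the namespace, service name, and port are illustrative:
```bash
# Kubernetes injects search domains into each pod's resolver config
cat /etc/resolv.conf
# search default.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10

# So the app can just use the short name "api"; the resolver expands it
# to api.default.svc.cluster.local - no FQHN hard-coded in config
curl http://api:8080/healthz
```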
I get your point about consul; it's not wrong. it's also like using ASM (aws secrets manager). do you embed it in your code? that's how you get maximal benefit from it, but also, maximally limit how/where it can operate. is the secrets interface external or intrinsic to the application logic?
I think what’s interesting about the new hashicorp service mesh based on consul is how it’s implemented with sidecars (like most service meshes)
this means your application does not need to be aware of that implementation detail. it simplifies local development but still works well in a complex multi-cloud environment
(the poly repo thing is opinionated - realize that’s not a requirement; this list is meant for our customers)
Flags should enable/disable functionality without knowledge of stage or environment (e.g. do not use if ($environment == 'dev') { ... })
THANK YOU!
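A tiny sketch of that recommendation - the helper function here is purely hypothetical:
```bash
# Anti-pattern: behavior keyed off the stage/environment name
if [ "$ENVIRONMENT" = "dev" ]; then
  enable_debug_logging   # hypothetical helper
fi

# Better: an explicit flag, set per environment in config, so the code
# never needs to know which stage it is running in
if [ "$DEBUG_LOGGING" = "true" ]; then
  enable_debug_logging
fi
```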
it’s a good opinion in my book - mono repo is an overhead, doesn’t make sense for most systems - and when it does you should know that it does
i do think that things that are versioned together should be in the same repo though - if you’ve got two services that are tightly coupled and need to be released together (you shouldn’t paint yourself into this corner) then it will probably be easier to control that if they’re in the same repo
https://www.hashicorp.com/resources/closing-keynote-terraform-at-google Kind of an interesting take on the monorepo (see "Single version policy" in the transcript)
Learn about the workflow Google engineers use for Terraform plan-and-applies, and hear how the company migrated from Terraform 0.11 to 0.12.
"Health checks should not depend on the health of the backing services" - this is an interesting conversation to have. usually boils down to the difference between "healthy" and "ready"
I’ve seen something along those lines here https://docs.google.com/document/d/199PqyG3UsyXlwieHaqbGiWVa8eMWi8zzAn0YfcApr8Q - Paper written by a former Google SRE
actually, the way i initially interpreted "Health checks should not depend on the health of the backing services" is to not alert based on cause, but i'm not really sure what this sentence means the more I look at it
Nevermind it is what I thought it was https://12factor.net/backing-services
A methodology for building modern, scalable, maintainable software-as-a-service apps.
my point is services should gracefully degrade. So if a service depends on the health of its backing services, and the backing services go down, then my service would go down. this is a great way to create site-wide blackouts in a heartbeat.
also, if my service can't talk to one database, but it can to another, what should happen? i think the service is still "healthy" & "ready", but might return 503 for requests that are impacted.
the downstream service should then decide on how to handle the 503
@Erik Osterman (Cloud Posse) so maybe have specific 500-series responses based on backing service outages? And a monitoring service with an intelligent ruleset would check the most upstream service and be able to detect these issues?
ideally your app should be aware of both when it is healthy, and when it is able to serve requests based on its backing services - if i can’t write to a database then i shouldn’t be trying to take requests, especially if there is a node that can
yea, definitely have monitoring/escalations on this; my point is more: what do we want kubernetes to do when the upstream backend is offline and my service depends on it? do we want kubernetes to kill my service (restart the pod, fail service health checks), or keep it online? i argue that since the problem is not my service, we should keep my service online (by passing health checks), even if my service might respond with 50x to other API requests (e.g. GET /healthz = 200, but POST /api/delete/user = 503)
if i can’t write to a database then i shouldn’t be trying to take requests, especially if there is a node that can
health check should say - “i’m all good, don’t try and restart me” readiness check should say - “i can take requests, send me some traffic!”
but is it that binary (black/white)?
so if 1% of functionality is impacted by not being able to reach some upstream, should my service stop accepting requests?
if you can gracefully degrade then you are “ready”
but you should be able to say “i’m running fine, but i can’t service traffic” without getting all your pods killed
especially helpful for avoiding a startup storm
yea, agree with that
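In Kubernetes terms, that split maps directly onto liveness vs. readiness probes. A minimal sketch - the deployment, image, and endpoint paths are illustrative assumptions:
```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels: { app: example-api }
  template:
    metadata:
      labels: { app: example-api }
    spec:
      containers:
      - name: api
        image: example/api:1.0
        ports:
        - containerPort: 8080
        # Liveness = "i'm running fine, don't restart me";
        # deliberately does NOT check backing services
        livenessProbe:
          httpGet: { path: /healthz, port: 8080 }
          periodSeconds: 10
        # Readiness = "i can take requests, send me traffic";
        # failing it only removes the pod from the Service endpoints,
        # it never restarts the pod
        readinessProbe:
          httpGet: { path: /readyz, port: 8080 }
          periodSeconds: 5
EOF
```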
hmmm that's an interesting point @Erik Osterman (Cloud Posse). Maybe that's why microservices have a better place on k8s than monoliths - never thought about it, but now I have an argument for it
like if you had bs-monolith-app and it does a bunch of crap, including processing images and writing them back to S3, then in the impossible case that S3 goes down, it'll lose 10% of its functionality
versus, if you had bs-image-processing-service and bs-document-processing-service, then the backing service going down will completely screw up the former but not the latter
and in this case bs-image-processing-service can just say "I CANT DO ANYTHING" and its pods will get constantly recreated; the entire service will flap and the SREs/Admins/operators will see that much more clearly than it just losing 10% of its functionality
I’ve heard the jargon “blah blah microservices kubernetes” countless times before and took it blindly but never pictured this very convincing scenario
Maybe backing services are one of a few criteria used for scoping the roles of microservices. As some of us may know, we may get ahead of ourselves and make arbitrarily small microservices, sometimes known as nanoservices, where the overhead of managing them doesn't meet the value of having them scoped that specifically in the first place
yea, this is the decoupling that we want to achieve between services. how that decoupling is defined, is a bit arbitrary.
especially helpful for avoiding a startup storm
yea, this is a good real-world example. a common problem I've seen is a Java app (for example) that on startup will throw an exception and exit if it cannot connect to the database. this can lead to the startup storm that @Chris Fowles talks about. you have all these services in a crash loop until the database comes back up. then the database comes online, and everything slams it all at once, taking out the database. then the java apps crash again.
eventually, the crashloop backoffs reduce the effects of the storm and the system comes back online (if you’re lucky)
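A common mitigation is a startup retry loop with backoff instead of exiting on the first failed connection. A hedged sketch in shell - the connectivity check and entrypoint are illustrative:
```bash
attempt=0
until pg_isready -h "$DB_HOST" -q; do          # any cheap connectivity check works
  attempt=$((attempt + 1))
  wait=$(( attempt < 6 ? 2 ** attempt : 60 ))  # exponential backoff, capped at 60s
  echo "db not ready (attempt $attempt), retrying in ${wait}s"
  sleep "$wait"
done
exec java -jar /app/app.jar                    # hypothetical entrypoint
```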
i thought startup storm had something to do with startup companies
@Yonatan Koren is apparently the human encyclopedia of relevant links at a moment's notice
lol this is what i get paid for
jk
2020-02-19
2020-02-20
Hey everyone, give a warm welcome to our newest members!
- @Mario Gonzales
- @George Kontridze
- @vinayaks
- @anthonygiza
- @Daniel Blue
Good to have you here =)
Is this the place to ask a troubleshooting DevOps technical question? I don't want to pollute
Yep! There are different channels for specific issues (e.g. #aws, #release-engineering, #terraform, #terragrunt, and so many more)
Hi Matty, yes - I didn't see one that might be appropriate for a Rancher RMQ port issue
ya, maybe #kubernetes
some rancher users there
2020-02-21
@scorebot has joined the channel
Thanks for adding me. Emojis used in this channel are now worth points.
Wondering what I can do? try @scorebot help
Hey everyone, give a warm welcome to our newest members!
- @Chuck B
- @Marco Ceppi
- @nafas muhammad
- @scorebot
- @Nabil Becker
Good to have you here =)
2020-02-23
Hey everyone, give a warm welcome to our newest members!
- @Mohamed Meabed
- @Richard Gomes
Good to have you here =)
2020-02-24
asdf-vm, a wicked cool project I tinkered with last night, comes with a backstory written as a ballad -> https://github.com/asdf-vm/asdf/blob/master/ballad-of-asdf.md
Extendable version manager with support for Ruby, Node.js, Elixir, Erlang & more - asdf-vm/asdf
This looks cool. Could help with my “devtools” container. Wonder how it compares to the “alternatives” way @Erik Osterman (Cloud Posse) mentioned
Still looking into it but it’s pretty generic. Mostly just scripts with shims (reminds me of the oh-my-zsh plugin system but targeted towards various app versions of cli tools instead)
Yep. That’s the thing it has felt like Brew has always been missing.
only reason I looked into it was zero desire to deal with installing a specific version of golang on a new(ish) linux laptop of mine. I'd used gvm before for such a task
or pyenv for python
programming languages suck like that right?
I just used nvm in dadsgarage. It would be great to standardize with something like this
anyway, the asdf tool plugin list was WAY larger than I expected
but it works more like direnv
yeah, it even has stuff like Terraform and Helm
looking for a .tool-versions file and sourcing the right shim if found
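For anyone trying it, the basic asdf flow looks like this - the plugin and version are illustrative:
```bash
asdf plugin add terraform        # install a plugin for the tool
asdf install terraform 0.12.20   # install a specific version
asdf local terraform 0.12.20     # pin it: writes .tool-versions in this dir
terraform version                # the shim now resolves to 0.12.20 here
```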
Hey everyone, give a warm welcome to our newest members!
- @Stef
- @sia.s.saj
- @kimxogus
- @Jey Ayyalu
- @Zorin Wade
- @Francisco Montada
- @Sreekumar
- @Andy To
- @Abhishek Gupta
Good to have you here =)
2020-02-25
Hey everyone, give a warm welcome to our newest members!
- @Milosb
- @Dragos Andronache
- @Jeremy Schuller
- @dan
- @Thuong
- @Moritz S
Good to have you here =)
2020-02-26
Hey everyone, give a warm welcome to our newest members!
- @bougyman
- @joey
- @Oliver Smit
- @RB
- @Kendall Link
Good to have you here =)
Hello all. I keep coming across SweetOps archives in Google searches, and thought I would check out the Slack. Judging from my brief skimming of #random… I’m glad I did.
2020-02-27
Wondering if there's something about chart versions I'm not grok'ing… I've seen some charts where the version looks intentionally locked/coupled to its appVersion. Seems like a lot of potential to muck with the semver in my helm values. Hasn't happened to me personally yet, but in a forked chart that I maintain, I'm debating whether to break that coupling to appVersion. Curious if anyone here has thoughts on that.
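For context, the two fields sit side by side in Chart.yaml. A sketch with illustrative values:
```bash
# version    = the chart's own semver; bump on any packaging change
# appVersion = the version of the app the chart deploys; informational
cat <<'EOF' > Chart.yaml
apiVersion: v1
name: my-forked-chart
version: 1.4.0
appVersion: "2.16.1"
EOF
```
Decoupling them just means bumping version independently when only the chart's packaging changes, rather than mirroring every app release.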
github is having issues. does anyone set up a read-only internal mirror for when this happens?
im a remote engineer, so im already home.
ive been googling and haven't seemed to find anything. maybe github uptime has been so good that no one has had to worry about it
We had it once at a company
I don't remember the details since it was set up before I came
i usually have all my repos pretty up-to-date locally, and write tests and build scripts in a way they can be run locally instead of only through CI
but it was a cron that synced one critical repo
sounds super bespoke
as for the actual server https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
should be straight forward
ah ok. i was hoping for something more cookie cutter, off-the-shelf. basically i give it our enterprise org name and an auth token and it grabs all our repos and has an endpoint
could run a cron that just loops through your local clones and runs git fetch periodically?
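That could look something like this - a sketch with illustrative paths and org name:
```bash
# One-time: create bare mirrors of the repos you care about
git clone --mirror git@github.com:example-org/critical-repo.git \
    /srv/mirrors/critical-repo.git

# crontab entry: refresh every mirror every 15 minutes
# */15 * * * * for d in /srv/mirrors/*.git; do git -C "$d" remote update --prune; done
```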
yea but then that's a solution only for myself
ahh
well gitlab has some pretty great webhook based options to mirror from github
@RB you jinxed it https://www.githubstatus.com/
Welcome to GitHub’s home for real-time and historical data on system performance.
lol… it was causing issues this morning for us on the east coast so i was looking at alternatives in case it happens again
would be nice to have an artifactory like approach for git* service providers like github
so you first clone/pr/commit/etc to the mirror, which then redirects the changes upstream to github. if github is having issues, it won’t stop developers. we can simply wait for github to come back, and resync our mirror with upstream
Apart from the fact that git is designed as a decentralised version control system - you could theoretically pull and push directly between computers, or to a central repo on a server over ssh - you could set up Gitea or GitLab and configure a repo to sync another repo on both pull and push. (ps. bonus points if you make a working helm chart for Gitea :))
but it doesn't seem like this solution exists… so if anyone wants to make a lot of money or contribute to the open source community by hedging that github will have issues like this in the future, then i'll be your customer
git proxy or git cache turn up a couple options. not sure how robust they would be for a team, or if availability would be any better than GH
A caching git proxy. Contribute to rohanpm/ngitcached development by creating an account on GitHub.
local caching server for git when the actual server is on the other side of a (possibly slow) WAN link - sitaramc/gitpod
Hey everyone, give a warm welcome to our newest members!
- @Steve Neuschotz
- @SATYA PATI
- @Jesse
- @Bogdan Lata
- @Steen
- @Martin Leopold
- @igal
- @hari
- @Alan Rickman
- @scott866
Good to have you here =)
2020-02-28
got a question for anyone using GCP. if you have an app running under a service account in project A, but you want to access resources in project B (from that app), what’s the best way to do this? i’ve seen API keys, service account impersonation, etc. thrown out there. curious what people are actually doing in real applications, though.
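Not sure what most people do in practice either, but the simplest pattern I know of is granting project A's service account an IAM role directly on project B, so the app needs no extra keys. A hedged sketch - the project IDs, account name, and role are illustrative:
```bash
# Grant the app's service account (lives in project A) access to project B
gcloud projects add-iam-policy-binding project-b \
    --member="serviceAccount:app-sa@project-a.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"   # scope the role to what the app needs
```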
Hey everyone, give a warm welcome to our newest members!
- @Jawwad Yunus
- @Castro Mbithii
- @nishgupta29
- @randomy
- @Adam Perry
- @Marcin Brański
- @Tan Quach
Good to have you here =)
2020-02-29
Hey everyone, give a warm welcome to our newest members!
- @Victor Wong
Good to have you here =)