Discussions related to https://github.com/terraform-aws-modules Archive: https://archive.sweetops.com/terraform-aws-modules/
hi everyone, anyone have an idea how to fix this? Error: Failed to download module
Could not download module "null-label" (context.tf) source code from "https://github.com/cloudposse/terraform-null-label/archive/0.22.1.tar.gz//*?archive=tar.gz": bad response code: 401.
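In case it helps anyone hitting this: the 401 comes from the GitHub archive endpoint rejecting the download, so one possible workaround (an assumption on my part, not an official fix) is to source null-label from the Terraform registry instead of the tarball URL:

```hcl
# Sketch: pull null-label from the Terraform registry rather than the
# GitHub release tarball, avoiding the archive endpoint entirely.
# The namespace/stage/name values here are placeholders.
module "label" {
  source  = "cloudposse/label/null"
  version = "0.22.1"

  namespace = "eg"
  stage     = "dev"
  name      = "example"
}
```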
I am writing a Terraform script to launch an MSK cluster in AWS. Does anyone have reference scripts? Please share them with me
Terraform module to provision AWS MSK: cloudposse/terraform-aws-msk-apache-kafka-cluster
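For reference, a minimal invocation might look like the sketch below. The input names are assumptions based on typical Cloud Posse module conventions, so check the repo's README for the actual variables and pin a real released version:

```hcl
# Hypothetical minimal MSK cluster sketch; verify the inputs against
# the terraform-aws-msk-apache-kafka-cluster README before using it.
module "kafka" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "x.y.z" # replace with a real release tag from the repo

  namespace = "eg"
  stage     = "dev"
  name      = "kafka"

  vpc_id        = var.vpc_id
  subnet_ids    = var.subnet_ids
  kafka_version = "2.8.1"
}
```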
hi Erik, thank you for sharing the details. I am looking into it; I have a few queries and I will connect with you
Probably one for @Erik Osterman (Cloud Posse) and the rest of the posse: has there been consideration for the use of Semantic Versioning (https://semver.org/ for those readers who haven’t seen it before) for the various modules? With the recent moves around AWS provider updates and, more recently, the minimum Terraform versions changing, it’s been a little harder than I’d like to use pessimistic versioning to track releases without surprise breaking changes.
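To make the pain concrete: under semver, any pre-1.0 minor bump may be breaking, so a pessimistic constraint has to pin the whole minor series, e.g.:

```hcl
# With 0.x releases, "~> 0.22.0" only admits 0.22.x patch releases;
# "~> 0.22" would also pull in 0.23, 0.24, ..., any of which may break.
module "label" {
  source  = "cloudposse/label/null"
  version = "~> 0.22.0"
}
```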
Heh, good point, I’d completely overlooked 0.X == “if it breaks, you get to keep all the pieces”
Perhaps my question should be “can we have some v1.x releases?”
We talk a little bit more about our strategy here: https://docs.cloudposse.com/community/faq/#what-is-our-versioning-strategy
It would be a little bit ironic if we reached 1.0 before terraform
That said, LTS and major releases are something we’ll need to do at some point - but it needs to be driven by paid customer demand. We spend tens of thousands a month right now maintaining these modules out-of-pocket. v1.x will just add to that burden by increasing the number of releases we need to maintain.
I’m with @bazbremner. The Terraform policy of “we still aren’t 1.0” is a cop out, and it (and CloudPosse modules) should bump to 1.0 and follow “real” semver. On one hand your versions say “our interfaces are unstable”. On the other hand, your modules are widely deployed in real production environments by real users, and are de facto stable. It’s nice to have the freedom of 0.x versions to theoretically make breaking changes. But real world constraints are already stopping you from doing this. For example, the recent changes to null-label I’ve been involved with. Moving to 1.x+ versioning wouldn’t constrain you further. It would just communicate your existing constraints to users more clearly.
Is it a cop out? I don’t know. It’s a business decision. It depends what we optimize for.
Here’s how much I’ve spent at Cloud Posse this month maintaining our open source. No one paid us for that.
0.x reduces our overhead by not needing to maintain stability across 5 versions of terraform, even more versions of providers. Companies like RedHat can do that.
I sympathize a lot. I remember reading somewhere that if a project is used widely in prod environments, then it’s v1.0 whether you like it or not. That resonated a lot with me. My compromise has been to not be shy about 1.0+ releases, but to also increment major versions at will, whenever we make a change that really requires changes to the user config
it’s a promise of stability within the minor increments, but definitely not a promise of long-term interface compatibility or maintenance
There are 2 parts to that though. I agree it’s easy to bump those versions to convey the impact of those changes. But like you say @loren, it’s a promise of stability within the minor increments of each major release. With literally 300+ repositories, if we have 3 major versions per project, that’s managing the stability of 1000+ releases with a small team. It is simply not feasible. So what are we communicating? Exactly that. We do not have the in-house capacity to ensure our releases are non-breaking and to backport or manage fixes for multiple releases. And since that’s the situation, we’re pre-1.0 since we don’t have the in-house ability.
Yeah, if people read x.0.0 as some kind of long-term support channel, you’re better off avoiding it
Yes, I think the best compromise right now is that users fork at the version they are at and take over the long-term maintenance to ensure their own stability. (Then realize how much effort that is, let their forks languish)
though, what i’m saying is that you don’t have 3 maintained/updated major versions of any project. you just keep rolling forward. the burden is on the user to also adjust their configs/state if they need features in newer versions. i feel like this is fair for free open source projects. the users have to put something in.
(granted, paid users impose a whole new set of constraints. they don’t want to roll forward, but they do want the new features! and you’re in a new bind)
TL;DR: I want us to get there, and we’ll start doing major/minor releases when we can keep our promises.
Maybe this will be a step in that direction? https://github.com/sponsors/cloudposse
Cloud Posse is a DevOps Accelerator that helps companies own their infrastructure in record time by building it with you and then showing you the ropes. Everything we do is 100% Open Source under A…
Thanks guys for the awesome work! And I can very much relate to the difficulties in maintaining hundreds of repositories in open source
Sorry Erik. I didn’t mean to cast aspersions about your dedication to open source. I believe your modules are a missing link that compose fiddly API-level cloud resources into the building blocks people actually want to work with. The C to Terraform’s assembler
I would love a world where CloudPosse changes nothing about their approach to backwards compat, except every module is bumped to 1.0.0 tomorrow. But you are right, semver has a chilling effect on people’s willingness to break back compat. And I like my ecosystem to keep evolving
@Erik Osterman (Cloud Posse) thanks for the replies and I certainly appreciate the work that everyone puts into the modules - this thread certainly isn’t meant to suggest otherwise!
Has anyone using cloudposse/codebuild/aws run into any issues when using the "S3" cache type?
I am getting the following error when running
Error: cache location is required when cache type is "S3"
Found a workaround by setting the variable cache_bucket_suffix_enabled to false.
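In config form, that workaround would look something like this (other required inputs omitted; names assumed from the module's variables, so verify against the repo's README):

```hcl
module "build" {
  source = "cloudposse/codebuild/aws"
  # pin a released version here as appropriate

  cache_type = "S3"

  # Workaround from this thread: with the bucket-name suffix enabled,
  # the module apparently fails to resolve a cache location for the
  # S3 cache type, triggering the "cache location is required" error.
  cache_bucket_suffix_enabled = false
}
```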
Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild