#terraform (2020-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-09-01
I’m updating my terraform-opsgenie-incident-management implementation from an earlier release and it looks like the auth mechanism has changed. I removed opsgenie_provider_api_key
from being passed to the CP opsgenie modules and added a provider block, but I have been getting this shockingly helpful message and can’t find where to make this change:
Error: Missing required argument
The argument "api_key" is required, but was not set.
Not sure if this bit from the logs helps…it sure doesn’t help me:
2020-09-01T10:21:55.404-0400 [DEBUG] plugin: starting plugin: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7 args=[.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7]
2020-09-01T10:21:55.424-0400 [DEBUG] plugin: plugin started: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7 pid=15505
2020-09-01T10:21:55.424-0400 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7
2020-09-01T10:21:55.435-0400 [INFO] plugin.terraform-provider-opsgenie_v0.4.7: configuring server automatic mTLS: timestamp=2020-09-01T10:21:55.434-0400
2020-09-01T10:21:55.465-0400 [DEBUG] plugin.terraform-provider-opsgenie_v0.4.7: plugin address: address=/var/folders/r1/2sj8z7xn12s5j5729_ll_s7w0000gn/T/plugin781003244 network=unix timestamp=2020-09-01T10:21:55.465-0400
2020-09-01T10:21:55.465-0400 [DEBUG] plugin: using plugin: version=5
2020/09/01 10:21:55 [TRACE] BuiltinEvalContext: Initialized "provider[\"registry.terraform.io/opsgenie/opsgenie\"]" provider for provider["registry.terraform.io/opsgenie/opsgenie"]
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalOpFilter
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalSequence
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalGetProvider
2020-09-01T10:21:55.524-0400 [TRACE] plugin.stdio: waiting for stdio data
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalValidateProvider
2020/09/01 10:21:55 [TRACE] buildProviderConfig for provider["registry.terraform.io/opsgenie/opsgenie"]: no configuration at all
2020/09/01 10:21:55 [TRACE] GRPCProvider: GetSchema
2020/09/01 10:21:55 [TRACE] No provider meta schema returned
2020/09/01 10:21:55 [WARN] eval: *terraform.EvalValidateProvider, non-fatal err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalSequence, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalOpFilter, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalSequence, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [TRACE] [walkValidate] Exiting eval tree: provider["registry.terraform.io/opsgenie/opsgenie"]
2020/09/01 10:21:55 [TRACE] vertex "provider[\"registry.terraform.io/opsgenie/opsgenie\"]": visit complete
it must be opsgenie, because supplying the OPSGENIE_API_KEY
env var stops that error, even though the one provider I do declare has app_key
defined.
@Dan Meyers
(our tests are using OPSGENIE_API_KEY
)
and I think we removed api_key
from the modules because providers in 0.13 should be passed by reference rather than invoked inside the module
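roughly something like this in the root module, with the provider configured once and inherited by (or passed to) the modules. the source/ref and inputs here are just illustrative, not the exact module signature:
provider "opsgenie" {
  api_key = var.opsgenie_api_key
}

module "team" {
  source = "git::https://github.com/cloudposse/terraform-opsgenie-incident-management.git//modules/team?ref=tags/x.y.z"

  # with 0.13 the provider configured above is inherited automatically,
  # or can be passed explicitly:
  providers = {
    opsgenie = opsgenie
  }

  # ... team inputs ...
}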
also, if you haven’t yet had a chance, check out our new config
submodule of the opsgenie stuff - it supports YAML configuration for your desired opsgenie state
just a point of clarification, the note above says app_key
is defined but the error references api_key
– just want to make sure that’s a typo
Also, here’s a clip from #office-hours where we talk about the new config module: https://www.youtube.com/watch?v=fXNajuC4L1o
Thanks, @Erik Osterman (Cloud Posse). I was on the office hours call, when this was discussed. Very exciting and a really cool abstraction. I had already set up our basic config and just wanted to throw a few minutes at it to bring it up to 0.13 and see some of those changes reflected in Opsgenie (once our trial has been extended) so i’ve just been working on my original implementation. Once I get a clean plan from that and a few spare moments, i’ll probably convert to the config
module.
it’s a module that only uses the opsgenie provider, so it’s not anything else.
Hey guys, facing a weird problem with the terraform rds module. I made a read replica with terraform and that was successful. After that, when I do terraform apply, it says it has to change the name and wants to recreate the replica (I let it do that too), but it again shows it wants to change the name. I checked the name in tfstate and it is what it should be. (It wants to change the name from {masterdb name} to {replica name}.)
@Jeremy G (Cloud Posse) could this be related to null label changes?
@sahil kamboj Which modules of ours are you calling? What versions of our modules are you using? As we say in all our documentation, you should be using a specific version, not pinning to master.
@Erik Osterman (Cloud Posse) Unlikely to be due to label change as neither terraform-aws-rds
nor terraform-aws-rds-replica
use context.tf
or the new label version.
@Jeremy G (Cloud Posse) sry, was disconnected for a month due to COVID. It was a silly mistake in the name parameter: it’s the db name, not the rds instance name, and it should be the same as the master’s.
~ name                         = "frappedb" -> "frappedb-replica" # forces replacement
  option_group_name            = "frappe-read-db-20200831120637864700000001"
  parameter_group_name         = "frappe-read-db-20200831120637864800000002"
  password                     = (sensitive value)
  performance_insights_enabled = false
Evaluating terraform Cloud for our team. Wondering if anyone here is using it currently? And maybe can share some pros, cons, regrets, tips, etc? How was the migration from open source Terraform to the cloud, etc? thank you
one con: you’re limited to running their version of terraform and cannot BYOC (bring your own container). the good thing is then you’re limited to running vanilla terraform, the bad thing is you cannot use any wrappers or run alpha versions of terraform.
fortunately, they’ve just released runners for TFC. this was a huge con before that, since it wasn’t possible to use things like the postgres provider to manage a database in a VPC.
related to this, providers are now easily downloaded at runtime. also was a limitation, but no longer is.
the biggest complaint I hear is the cost of TFC enterprise & business.
edit: Disregard - see below
another con is that you can’t run your own workers (in your own accounts) without shelling out for a $$$ enterprise contract
so any moderately regulated workload becomes difficult to deal with from a compliance standpoint, because you’re basically giving a 3rd party highly privileged access to your accounts
you’re probably only going to get Cons in this thread which doesn’t really reflect on the product itself at all. it’s a pretty great solution for anyone who’s tried to automate terraform on their own and felt the pain. for most of us who have kicked the tires it’s frustration that we can’t use it because of tick boxes rather than technical deficiencies.
@Chris Fowles can you clarify? I thought runners are now supported with the business account.
Thank you
What's the difference between Terraform Cloud and Terraform Enterprise?
Terraform Enterprise is offered as a private installation. It is designed to suit the needs of organizations with specific requirements for security, compliance and custom operations.
Yea, so agree with @Chris Fowles - I would pick terraform cloud over all the alternatives (e.g. Atlantis, Jenkins, or custom workflow in some other CI/CD platform). What it does, it does very well and better than the alternatives.
ok, but there’s now a “hybrid” mode where the runners can be a “private installation” but the dashboard is SaaS.
any doco on that? I’ve not seen that yet
ahhh ok found it
Today we’re announcing availability of the new Business tier offering for Terraform Cloud which includes enterprise features for advanced security, compliance and governance, the ability to execute multiple runs concurrently, and flexible support options.
pro: also supports SSO now
con: (okta only)
con: Business pricing is “Contact us because we don’t know how to bill this yet”
Ya, hate that.
2020-09-02
when i am doing terragrunt apply in tfstate-backend, it is going to create the table and s3 bucket again and throwing an error. why so?
Sounds like after first creating the bucket and table with the module, the step of reimporting the local state was not performed.
Have you followed these steps: https://github.com/cloudposse/terraform-aws-tfstate-backend#create
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
v0.13.2 0.13.2 (September 02, 2020) NEW FEATURES: Network-based Mirrors for Provider Installation: As an addition to the existing capability of “mirroring” providers into the local filesystem, a network mirror allows publishing copies of providers on an HTTP server and using that as an alternative source for provider packages, for situations where directly accessing the origin registries is…
The general behavior of the Terraform CLI can be customized using the CLI configuration file.
Hi. Is there a possibility to use a CMK instead of the KMS default key for encryption at terraform-aws-dynamodb? Thanks.
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
I see a PR here ^^
^^
Hi. Another question: How can I deactivate the ttl_attribute at terraform-aws-dynamodb? If I set it to null or “” I get an error (because it must have a value). If I omit the argument it will be enabled with the name “EXPIRES”. I have checked the code in the module. I see no way to disable ttl. Can anyone explain to me how this works?
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
fyi, submitted an issue with tf 0.13 that cloudposse modules may run into, since it impacts conditional resource values (e.g. join("", resource.name.*.attr)
) that are later referenced in other data sources. this includes module outputs that are passed to data sources later in a config… https://github.com/hashicorp/terraform/issues/26100
Terraform Version $ terraform -version Terraform v0.13.1 + provider registry.terraform.io/hashicorp/external v1.2.0 + provider registry.terraform.io/hashicorp/random v2.3.0 + provider registry.terr…
oh jeez. ya, all of that should be changed to using the try()
function instead. more reason to do that now.
@antonbabenko thought you might also want to be aware, in case you get reports on your modules (i hit it on your vpc module)
@RB unfortunately, try()
doesn’t fix it… in the repro case in the issue, this still generates a persistent diff: empty = try(random_pet.empty[0].id, "")
thanks @loren
interesting so it affects both cases.
both cases, since both cases return “” (empty string)
and TF 0.13 just can’t compare empty strings correctly
pretty much, yeah. one workaround i’ve found is to use a ternary with the same condition that you use for the resource, so this does work: empty = false ? random_pet.empty[0].id : ""
if TF 0.13 can evaluate the expression all up front, then it works
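e.g. roughly this pattern, using the random_pet repro from the issue (the names come from the issue, not any particular module):
resource "random_pet" "empty" {
  count = var.enabled ? 1 : 0
}

output "empty" {
  # gating on the same condition used for count avoids the persistent diff,
  # whereas try(random_pet.empty[0].id, "") does not
  value = var.enabled ? random_pet.empty[0].id : ""
}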
isn’t it better to wait for them to fix it?
before changing every module?
we don’t need to change every module since it could affect it only when enabled=false
but yes, it’s better for them to fix it
they get paid for it
they responded and explained why it is happening. it makes sense, though i don’t know what edge cases led them to make the change so that resources with 0 instances are not stored in the state. i’d expect this issue will not be solved quickly, if at all
personally, i’ll be switching to that workaround wherever i can, which i think is a more stable solution anyway
maybe if we all upvote the issue, they will resolve it sooner
please do!
hi all, sometimes, I need to do things outside of terraform, for example provisioning the infra vs updating content (eg putting things into a bucket).
This introduces duplicate variable declarations, one set for shell and another for terraform.
Has anyone run into something similar? Do you have any advice on how to DRY the config?
what about using https://github.com/scottwinkler/terraform-provider-shell (and have all of that in TF state)
Terraform provider for executing shell commands and saving output to state file - scottwinkler/terraform-provider-shell
thanks, I’ll have a look into that.
Hey guys I’m using the git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.10.0
module. Just wondering if changing backup_window
in a deployed RDS instance will create a new RDS instance or update the existing one. Also still new to terraform, so maybe there is an easy way to find out? thanks guys
Shouldn’t. Terraform plan can be run to preview before you do anything.
Terraform apply also asks for confirm.
Always test/learn in safe environment :-)
Lastly: RESOURCES = STUFF YOU CREATE, data sources = stuff you read. If you didn’t create it with terraform, use a data source and it won’t try to create it.
I.e. for vpcs, subnets, etc., use data sources unless you are the one creating them.
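rough example of the difference (names made up):
# reads an existing VPC that Terraform did not create and will not manage
data "aws_vpc" "existing" {
  tags = {
    Name = "main"
  }
}

# creates and manages a new VPC
resource "aws_vpc" "new" {
  cidr_block = "10.0.0.0/16"
}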
Thanks for the reply. Appreciate the information
Does anyone know a good tool for pulling values from Terraform state outside of terraform itself?
As in, I have a CD process that is running simple bash commands to build and deploy a static site. I’d like to get my CloudFront CDN Distribution ID and the bucket that the static site’s assets should be shipped to from my Terraform state file in S3. I could pull the state file, parse out the outputs I need, and then go about it that way but I am figuring that there must be a tool written around this.
I was looking for something similar as well. The suggested solution was https://sweetops.slack.com/archives/CB6GHNLG0/p1599083030235300?thread_ts=1599082729.235200&cid=CB6GHNLG0
what about using https://github.com/scottwinkler/terraform-provider-shell (and have all of that in TF state)
I don’t think that’s what I’m looking for, since I’m talking about totally outside the context of a Terraform project.
We’re in the process of doing the same thing, and we’ve settled on AWS’ SSM Parameter Store, to store key/values as opposed to Terraform outputs, as a source of truth for both Terraform and other tooling (eg. Ansible, GitHub Actions, etc.)
aws_ssm_parameter
resources to write data to SSM, and aws_ssm_parameter
data sources to read data from SSM.
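e.g. something like this on each side (the parameter name and the aws_cloudfront_distribution reference are illustrative):
# Terraform writes the value it owns
resource "aws_ssm_parameter" "cdn_distribution_id" {
  name  = "/mysite/prod/cdn_distribution_id"
  type  = "String"
  value = aws_cloudfront_distribution.site.id
}

# other Terraform code (or any SDK/CLI consumer) reads it back
data "aws_ssm_parameter" "cdn_distribution_id" {
  name = "/mysite/prod/cdn_distribution_id"
}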
that’s a nice idea.
It does mean the scripts will need to reach into the parameter store for values tho.
For sure, but there are lots of SDK’s available for AWS API endpoints.
One thing I am considering is that the dotenv file and the tfvars file are the same format
Annoyingly Param Store still isn’t supported by RAM, so you’ll need well defined roles, prefixes and encryption keys tho’ https://docs.aws.amazon.com/ram/latest/userguide/shareable.html
AWS RAM lets you share resources that are provisioned and managed in other AWS services. AWS RAM does not let you manage resources, but it does provide the features that let you make resources available across AWS accounts.
I had a little bit of a hard time convincing my architect to throw stuff into SSM, but it’s gone very well and he’s really embraced the idea.
Ah yeah — I was not thinking yesterday. This is a perfect usecase for SSM PStore + Chamber. Appreciate the reminder @Drew Davies!
oh. sweet! the terraform registry has versioned docs for providers! looks like the versioned docs go back about a year or so, here’s the earliest versioned docs for the aws and azure providers * https://registry.terraform.io/providers/hashicorp/aws/2.33.0/docs * https://registry.terraform.io/providers/hashicorp/azurerm/1.35.0/docs
2020-09-03
Can you use a for iterator with a data source? Just thought about this when looking up a list of github users, for example. Would like to know if that’s possible, didn’t see anything in the docs
iterator, as in count
or for_each
? if so, sure, certainly
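for the github users example, a quick sketch (assumes the github provider is configured; the usernames are made up):
variable "github_users" {
  type    = set(string)
  default = ["alice", "bob"]
}

data "github_user" "this" {
  for_each = var.github_users
  username = each.value
}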
cool. never had the need so just making sure before I wasted more time
when running terraform in AWS, with s3 backend for the statefile, is there any way to create the bucket when running for the first time? in the docs, it just says
This assumes we have a bucket created called mybucket
there is a bit of a chicken/egg thing going on. cloudposse has a pretty good writeup of how to keep it all in terraform… https://github.com/cloudposse/terraform-aws-tfstate-backend#create
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
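the flow is roughly: apply once with local state to create the bucket/table, then add the backend block and re-run init so terraform migrates the state into S3. Very rough sketch (the inputs, names, and version are illustrative; the README has the exact steps):
# step 1: apply with local state to create the S3 bucket + DynamoDB lock table
module "terraform_state_backend" {
  source    = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/x.y.z"
  namespace = "acme"
  stage     = "prod"
  name      = "terraform"
}

# step 2: add the backend block and run `terraform init` again,
# answering yes when it offers to copy the local state into the bucket
terraform {
  backend "s3" {
    bucket         = "acme-prod-terraform"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "acme-prod-terraform-lock"
    encrypt        = true
  }
}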
Also fyi terraform cloud can be used for free for state file management and has versioning, locking and more. Might be worth considering. I have been using it exclusively for almost the past year instead of any s3 buckets.
I found a github action that creates the backend in terraform cloud fyi
I recently updated to the newest vpc module version https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.17.0 but then got this error for the null resource:
Error: Provider configuration not present
To work with
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3], after
which you can remove the provider configuration again.
Would anyone know how to solve it? I noticed that the 0.13 upgrade docs (https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations) mention this blurb:
In this specific upgrade situation the problem is actually the missing resource block rather than the missing provider block: Terraform would normally refer to the configuration to see if this resource has an explicit provider argument that would override the default strategy for selecting a provider. If you see the above after upgrading, re-add the resource mentioned in the error message until you’ve completed the upgrade.
But I wasn’t sure how to interpret that, given it’s something that might have happened with the vpc module upstream of my usage?
make sure you have the aws provider version set in your provider resource and try running terraform 0.13upgrade
provider "aws" {
  region  = "us-east-1"
  version = "~> 3.3"
}
or similar, I think it just can’t be null anymore
Is it an aws
provider issue? It looks like it’s a null
provider issue
I am generating this provider block now:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
oh I see, yea i just skimmed over. I hit this exact issue with aws
I would try to explicitly set null provider too maybe
hmm ok, I’ll give that a shot, thanks
or try terraform init -reconfigure
No luck, it looks like I might need to remove it from state manually?
@Andriy Knysh (Cloud Posse)
It looks like I was able to replace the state references manually, and that worked
For example, every line had:
"provider": "provider[\"<http://registry.terraform.io/-/null\|registry.terraform.io/-/null\>"]",
just replaced it with
"provider": "provider[\"<http://registry.terraform.io/hashicorp/null\|registry.terraform.io/hashicorp/null\>"]",
(then state pushed)
interesting.. good to know
There is a terraform command to perform the find-replace automatically:
terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
I’ve seen this a few times with 0.12 -> 0.13 conversions that don’t use terraform 0.13upgrade
and get messed up state files. Bit of a pothole IMO.
Good tip @Alex Jurkiewicz
Just ran into this issue too. Perhaps an idea to place this info somewhere?
Error: Provider configuration not present
To work with
module.subnets.module.nat_label.data.null_data_source.tags_as_list_of_maps[3]
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.subnets.module.nat_label.data.null_data_source.tags_as_list_of_maps[3],
after which you can remove the provider configuration again.
You should submit a PR to add this to the 0.13 migration page in the official Terraform docs
Regarding config
module in terraform-opsgenie-incident-management
, what’s the significance of including “repo David…” in the descriptions?
Hrm… is that convention enforced or just an example?
The reason we did this is so we could correlate teams with repositories with stakeholders (e.g. it’s 3am, you get an alert for some service, but the error is not obvious and requires some domain expertise, you don’t know what to do, so who should you escalate to?)
Right. Figured, but that’s just text as far as this exercise is concerned, right? That info is consumed by (tired) humans or …something else?
Yup, just consumed by humans. Not used programmatically.
Got it. Thanks.
2020-09-04
Does anybody know a nice way to attach and then detach a security group from an ENI during a single run of a tf module? I’m trying to allow the host running TF temporary access to the box while it deploys, then revoke that later
This is in response to this bug: https://github.com/terraform-providers/terraform-provider-aws/issues/5011
This issue was originally opened by @Lsquared13 as hashicorp/terraform#18343. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version Terr…
Found out recently that a tf plan will update the state file, notably the version
Any good tricks to doing a plan without the state updating ?
I understand you can turn off refresh but that would also inhibit the plan
I’m thinking perhaps we can output the current state to a file, do a terraform plan, and push the outputted state back up ?
what are your thoughts?
i do a terraform plan with a new version all the time. it’s never affected the tfstate
Keep in mind a plan does a state refresh, so it must update the state file.
here’s an old issue recommending the use of -refresh=false
perhaps it is updating the local copy. that makes sense. it is certainly not updating the remote state, or at least, not in a way that impacts the ability to run applies with the older version
interesting. i’ll try this out locally and see. this was a concern with my uppers regarding atlantis doing plans across the org so i thought id ask
we had the problem a couple times where someone “accidentally” updated the remote state on a dev environment by using a newer version of terraform than the rest of the team. we implemented strict pins of the required_version
in the terraform block, and it’s never been a problem since. upgrades are now very deliberate.
terraform {
required_version = "0.13.1"
}
yep, i think this is what i’ll have to do as well across a repo before i can add atlantis bot to access the repo
Refreshing Terraform state in-memory prior to plan…
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
I would assume that means it isn’t updating the state, but perhaps it behaves differently if you save the plan to a file?
I guess it does say “the refreshed state” but that doesn’t necessarily mean it isn’t updating the state file with the latest version…
Is there a doc on contributing to Cloudposse TF mods?
Here’s some stuff to get you started
Example Terraform Module Scaffolding. Contribute to cloudposse/terraform-example-module development by creating an account on GitHub.
If you want to participate in review Pull Requests, send me a DM. We get dozens of PRs and can use all the help we can get.
So far so good with the config mod. Our rotations are pretty simple for starters, but we do need things like ignore_members
and delete_default_resources
in teams, so i’ll submit a PR for those changes in a little while.
Ping @Dan Meyers when you need a review since he’s currently the “owner” of that.
Will do. Thanks, Erik!
2020-09-06
anyone know how to do a “find by” type of operation on a list of maps in terraform?
my_objs = [ {"foo": "a"}, {"foo": "b"} ]
# find_by("foo", "a", my_objs) --> {"foo": "a"}
Here’s what I’ve come up with :/
[for o in local.my_objs : o if o.foo == "a"][0]
yeah, for/if is going to be the way that you’ll need to filter a collection by a property
2020-09-08
I’m investigating moving our setup to terragrunt to simplify modules. Looking at the official examples (https://github.com/gruntwork-io/terragrunt-infrastructure-live-example/tree/master/non-prod) I see that they place the region higher in the hierarchy than the stage. Is this common?
IMO, if it’s the same region between your environments, then that’s fine.
I was thinking that if you had a service that was fundamentally cross-region, for example a jitsi deployment where you want RTC video bridges in regions close to the users.. then you would want to place the stage at a higher level than the region
yeah, you can have it inside each stage.
There are also other scenarios like AWS ACM or Lambda@Edge, where you need to have them on us-east-1
regardless of where your main region is supposed to be.
personally, I’ve recently moved away from terragrunt, due to the additional (cognitive) overheads it presented with regards to the additional wiring.
I have this swapped around
terraform
├── coreeng
│ ├── account.hcl
│ └── global
│ ├── env.hcl
│ └── us-east-1
│ ├── atlantis
│ │ └── terragrunt.hcl
│ ├── ecr
│ │ └── terragrunt.hcl
│ └── region.hcl
├── globals.hcl
├── prod
│ ├── account.hcl
│ └── prod
│ ├── account-service.hcl
│ ├── compliance-service.hcl
│ ├── device-service.hcl
│ ├── env.hcl
│ ├── eu-central-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ ├── eu-west-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── buckets
│ │ │ ├── main.tf
│ │ │ └── terragrunt.hcl
│ │ ├── compliance-service
│ │ │ └── terragrunt.hcl
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ ├── graph-service.hcl
│ ├── idp-service.hcl
│ ├── platform-dependencies.hcl
│ ├── profile-service.hcl
│ ├── resource-service.hcl
│ ├── tpa-service.hcl
│ ├── us-east-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── buckets
│ │ │ ├── main.tf
│ │ │ ├── provider.tf
│ │ │ ├── terragrunt.hcl
│ │ │ └── tfplan
│ │ ├── compliance-service
│ │ │ └── terragrunt.hcl
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ └── us-west-2
│ ├── account-service
│ │ └── terragrunt.hcl
│ ├── device-service
│ │ └── terragrunt.hcl
│ ├── graph-service
│ │ └── terragrunt.hcl
│ ├── idp-service
│ │ └── terragrunt.hcl
│ ├── platform-dependencies
│ │ └── terragrunt.hcl
│ ├── profile-service
│ │ └── terragrunt.hcl
│ ├── region.hcl
│ ├── resource-service
│ │ └── terragrunt.hcl
│ └── tpa-service
│ └── terragrunt.hcl
├── terragrunt.hcl
└── test
├── account.hcl
├── dev
│ ├── account-service.hcl
│ ├── compliance-service.hcl
│ ├── device-service.hcl
│ ├── env.hcl
│ ├── eu-central-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── compliance-service
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ ├── eu-west-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── compliance-service
│ │ │ └── terragrunt.hcl
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ ├── graph-service.hcl
│ ├── idp-service.hcl
│ ├── platform-dependencies.hcl
│ ├── profile-service.hcl
│ ├── resource-service.hcl
│ ├── tpa-service.hcl
│ ├── us-east-1
│ │ ├── account-service
│ │ │ └── terragrunt.hcl
│ │ ├── compliance-service
│ │ │ └── terragrunt.hcl
│ │ ├── device-service
│ │ │ └── terragrunt.hcl
│ │ ├── graph-service
│ │ │ └── terragrunt.hcl
│ │ ├── idp-service
│ │ │ └── terragrunt.hcl
│ │ ├── platform-dependencies
│ │ │ └── terragrunt.hcl
│ │ ├── profile-service
│ │ │ └── terragrunt.hcl
│ │ ├── region.hcl
│ │ ├── resource-service
│ │ │ └── terragrunt.hcl
│ │ └── tpa-service
│ │ └── terragrunt.hcl
│ └── us-west-2
│ ├── account-service
│ │ └── terragrunt.hcl
│ ├── device-service
│ │ └── terragrunt.hcl
│ ├── graph-service
│ │ └── terragrunt.hcl
│ ├── idp-service
│ │ └── terragrunt.hcl
│ ├── platform-dependencies
│ │ └── terragrunt.hcl
│ ├── profile-service
│ │ └── terragrunt.hcl
│ ├── region.hcl
│ ├── resource-service
│ │ └── terragrunt.hcl
│ └── tpa-service
│ └── terragrunt.hcl
└── qa
├── account-service.hcl
├── compliance-service.hcl
├── device-service.hcl
├── env.hcl
├── eu-central-1
│ ├── account-service
│ │ └── terragrunt.hcl
│ ├── device-service
│ │ └── terragrunt.hcl
│ ├── graph-service
│ │ └── terragrunt.hcl
│ ├── idp-service
│ │ └── terragrunt.hcl
│ ├── platform-dependencies
│ │ └── terragrunt.hcl
│ ├── profile-service
│ │ └── terragrunt.hcl
│ ├── region.hcl
│ ├── resource-service
│ │ └── terragrunt.hcl
│ └── tpa-service
│ └── terragrunt.hcl
├── eu-west-1
│ ├── account-service
│ │ └── terragrunt.hcl
│ ├── compliance-service
│ │ └── terragrunt.hcl
│ ├── device-service
│ │ └── terragrunt.hcl
│ ├── graph-service
│ │ └── terragrunt.hcl
│ ├── idp-service
│ │ └── terragrunt.hcl
│ ├── platform-dependencies
│ │ └── terragrunt.hcl
│ ├── profile-service
│ │ └── terragrunt.hcl
│ ├── region.hcl
│ ├── resource-service
│ │ └── terragrunt.hcl
│ └── tpa-service
│ └── terragrunt.hcl
├── graph-service.hcl
├── idp-service.hcl
├── platform-dependencies.hcl
├── profile-service.hcl
├── resource-service.hcl
├── tpa-service.hcl
├── us-east-1
│ ├── account-service
│ │ └── terragrunt.hcl
│ ├── compliance-service
│ │ └── terragrunt.hcl
│ ├── device-service
│ │ └── terragrunt.hcl
│ ├── graph-service
│ │ └── terragrunt.hcl
│ ├── idp-service
│ │ └── terragrunt.hcl
│ ├── platform-dependencies
│ │ └── terragrunt.hcl
│ ├── profile-service
│ │ └── terragrunt.hcl
│ ├── region.hcl
│ ├── resource-service
│ │ └── terragrunt.hcl
│ └── tpa-service
│ └── terragrunt.hcl
└── us-west-2
├── account-service
│ └── terragrunt.hcl
├── device-service
│ └── terragrunt.hcl
├── graph-service
│ └── terragrunt.hcl
├── idp-service
│ └── terragrunt.hcl
├── platform-dependencies
│ └── terragrunt.hcl
├── profile-service
│ └── terragrunt.hcl
├── region.hcl
├── resource-service
│ └── terragrunt.hcl
└── tpa-service
└── terragrunt.hcl
127 directories, 159 files
terraform/$ACCOUNT/$ENVIRONMENT/$REGION/$THING
also consider cross posting in #terragrunt for more feedback
personally, I don’t like imputing the state by filesystem organization (the “terragrunt” way).
This year, we’ve moved to defining the entire state of an environment (e.g. prod us-west-2) in a single YAML file used by all terraform workspaces for that environment. That way our project folder hierarchy is flat (e.g. projects/eks
, or projects/vpc
). In one of the project folders is where you have all the business logic in terraform. Now the relationship between eks
and vpc
and region, environment etc, will all be defined in a single configuration file called uw2-prod.yaml
(for example). We make heavy use of terraform remote state to pass state information between project workspaces. Best of all, the strategy works natively with terraform cloud, but terragrunt tooling does not. Using #terragrunt with terraform cloud means terragrunt
needs to be triggered by some other CI/CD system.
projects:
  globals:
    stage: prod
    account_number: "xxxxxx"
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: prod
            zone_name: uw2.ourcompany.net
    eks:
      command: "/usr/bin/terraform-0.13"
      vars:
        node_groups:
          main: &standard_node_group
            availability_zones: null
            attributes: null
            desired_size: null
            disk_size: null
            enable_cluster_autoscaler: null
            instance_types: null
            kubernetes_labels: null
            kubernetes_version: null
            max_size: 2
            min_size: null
            tags: null
          gpu:
            <<: *standard_node_group
            instance_types: ["g4dn.xlarge"]
            kubernetes_labels:
              ourcompany.net/instance-class: GPU
    eks-iam:
      command: "/usr/bin/terraform-0.13"
      vars: {}
    vpc:
      vars:
        cidr_block: "10.101.0.0/18"
  helmfile:
    autoscaler:
      vars:
        installed: true
    aws-node-termination-handler:
      vars:
        installed: true
    cert-manager:
      vars:
        installed: true
        ingress_shim_default_issuer_name: "letsencrypt-prod"
    echo-server:
      vars:
        installed: false
    external-dns:
      vars:
        installed: true
    idp-roles:
      vars:
        installed: true
    ingress-nginx:
      vars:
        installed: true
    reloader:
      vars:
        installed: true
Here’s a sneak peek of what that configuration looks like. No copying dozens of files around to create a new environment.
Do you have some type of hierarchy of variables that get loaded in and if so, how does something like Atlantis make sure it triggers for the right thing e.g. if eu-west-1 vars change and it is a single file, is that for prod or dev? Personally I don’t mind the opinionated hierarchy. Isn’t too much boiler plate code so relatively DRY, is pretty obvious what is going on, can’t shoot yourself in the foot easily. Folks who are new to Terraform get it
Are ya’ll using Terraform cloud now rather than e.g. Atlantis?
Yes, so basically managing terraform cloud is done using terraform cloud. So the workspaces are defined by the configurations. When the config/uw2-prod.yaml
file changes, terraform cloud picks up those changes and redeploys the configuration (e.g. tfvars
) for all workspaces. And using triggers, we can depend on the workspace configuration.
The problem with atlantis
is it cannot be triggered from other pipelines (not elegantly at least. e.g. I don’t consider a bot running /atlantis plan
a solution). But with terraform cloud, it can be triggered from other pipelines. This is the main reason we’ve reduced usage of atlantis
in new engagements.
hierarchy of variables that get loaded
yes, so each project can still define defaults.auto.tfvars
for settings shared across all projects.
Then there’s the concept of globals for an environment:
projects:
  globals:
    stage: prod
    account_number: "xxxxxx"
  terraform:
    dns-delegated:
      vars:
        zone_config:
          - subdomain: prod
            zone_name: uw2.ourcompany.net
Those globals
get merged with vars
So changes to projects/
causes a plan for all environments (since it affects all environments)
Changes to .yaml
causes plan for the yaml modified. Once this is modified, it cascades and triggers all dependent workspaces to plan.
The important aspects to note:
- state stored in git via yaml configurations for each environment
- terraform cloud manages terraform cloud workspaces defined in the yaml
- terraform projects use settings defined in workspace (that were set by the terraform cloud configuration). this is why it’s all “native” terraform.
- triggers are used to auto plan/apply dependencies.
you know, in our original approach we started with our little project called tfenv
to terraform init -from-module=...
everywhere. We outgrew that because there were just too many envs. We were fighting terraform. Terraform 0.12 came out and changed the way init -from-module
worked. The lesson taught us to stick to vanilla terraform and find an elegant way to work with it. The problem with terragrunt in my book is that it is diverging from what terraform does natively. It provided a lot more value before 0.12 and 0.13, but that value is diminishing.
I’ve been thinking about the consequences of the “wrong abstraction.” My RailsConf 2014 “all the little things” talk included a section where I asserted: > duplication is far cheaper than the wrong abstraction And in the summary, I went on to advise: >
the other thing we outgrew was the one-repo per account. it was impossible (very tedious) for aws account automation and general git automation.
Woah, ok, many questions here.
We’re having trouble managing our terraform structure. We aren’t a typical SaaS that has one product with dev/qa/prod stages.. rather we operate kind of like a consultancy (though we’re not) and operate the same solution multiple times for different clients. So we have heavy heavy re-use of all our infra code.
We are using vanilla terraform now with a structure like so:
example
├── modules/
│ ├── other-stuff/
│ ├── ssm/
│ └── vpc/
└── ops/
├── dev/
│ ├── ansible/
│ ├── packer/
│ └── terraform/
│ ├── other-stuff/
│ │ └── main.tf
│ ├── ssm/
│ │ └── main.tf
│ └── vpc/
│ └── main.tf
├── prod-client1/
│ ├── ansible/
│ ├── packer/
│ └── terraform/
│ ├── other-stuff/
│ │ └── main.tf
│ ├── ssm/
│ │ └── main.tf
│ └── vpc/
│ └── main.tf
└── test-client1/
├── ansible/
├── packer/
└── terraform/
├── other-stuff/
│ └── main.tf
├── ssm/
│ └── main.tf
└── vpc/
└── main.tf
but with more prod, more test, more clients, more modules, more everything. it is gnarly.
Also.. every one of those client roots is its own AWS account.
each of the ops/*-client/terraform/$module/
root modules uses relative paths to ../../../../modules/$module
not shown of course are the vars and other data specific to each deployment
there is so much duplication… and things start to diverge over time.. one client doesn’t use cloudflare so we wire up a special dns provider in one of their root modules
it’s a massive headache.. hence why we are looking at terragrunt
We seem to be a bit behind the status quo, as I’ve noticed more people being vocal about moving away from terragrunt as terraform improves
The single project yaml definition looks great, but is only available on terraform cloud? What other options do we have?
Creating a new deployment when using Terragrunt will still require copying a bunch of files and a directory tree, but at least that content is very thin.. it’s all just vars, not actual terraform code.
Now I’m unsure.. Is that project config (config/uw2-prod.yaml
) a terraform cloud feature or tooling from cloudposse?
We also make heavy use of remote state, and any attempt to generalize the config is intractable as you cannot use vars in tf backend configs.
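(for context, the backend block only takes literal values, which is why it resists generalization; the names below are made up:)
terraform {
  backend "s3" {
    # something like `bucket = var.state_bucket` is rejected here --
    # backend arguments cannot reference variables or locals,
    # so each root module ends up hard-coding these
    bucket = "client1-prod-tfstate"
    key    = "vpc/terraform.tfstate"
    region = "eu-west-1"
  }
}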
Yes, so basically managing terraform cloud is done using terraform
cloud. So the workspaces are defined by the configurations. When the config/uw2-prod.yaml
file changes, terraform cloud picks up those changes and redeploys the configuration (e.g. tfvars
) for all workspaces. And using triggers, we can depend on the the workspace configuration.
Lot to unpack there. Sounds like cloudposse tooling?
It’s our own strategy, but terraform cloud doesn’t allow you to bring your own tools. Therefore the solution is technically vanilla terraform.
For coldstart though, we use a tool written in variant2 to bring up the environment, but for day-2 operations everything is with terraform cloud
Hi, I am trying to read the output security_group_id
from the elastic-beanstalk-environment
module, but I am getting This value does not have any attributes
when calling it. Any ideas?
Do you have a count
set for the module?
@Alex Jurkiewicz yes..
Your reference is wrong
it needs a [0]
probably
The error is saying “the thing you are trying to read the attribute security_group_id
on is not a map/object”
thanks.. I guess it has to be module.app_beanstalk_environment.0.security_group_id
I wish that would have been the error, it would be easier to find the issue
no, it needs to be module.app_beanstalk_environment[0].security_group_id
2020-09-09
Is it possible to just simply ‘set’ a block? I have a basic object containing several volume definitions:
container_volumes = {
  files_staging = {
    name = var.file_staging_bucket
    docker_volume_configuration = [{
      scope         = "shared"
      autoprovision = false
      driver        = "rexray/s3fs"
    }]
  }
  files_store = {
    name = var.file_store_bucket
    docker_volume_configuration = [{
      scope         = "shared"
      autoprovision = false
      driver        = "rexray/s3fs"
    }]
  }
}
And would rather like to use it like so:
volume = toset([
local.container_volumes["files_staging"],
local.container_volumes["files_store"]
])
Unfortunately, TF whines that it wants it to be a block rather than a simple assignment (even though it’s literally the same thing in the underlying json…). Please tell me this isn’t how you’re meant to work with these stupid blocks:
dynamic "volume" {
for_each = ["files_staging", "files_store",]
content {
name = volume.value
dynamic "docker_volume_configuration" {
for_each = local.container_volumes[volume.value].docker_volume_configuration
content {
scope = docker_volume_configuration.value.scope
autoprovision = docker_volume_configuration.value.autoprovision
driver = docker_volume_configuration.value.driver
}
}
}
}
I just want to be able to set it with an =
I really, really hate block syntax. I don’t think I’ve seen a single case where I prefer it over just a list and a for loop.
I’ve just hit a wall similar to this. I’m trying to define a number of s3 buckets. Some of them have lifecycle_rules, one uses encryption, and one is publicly exposed. This translates to about 2-3 different blocks for each use-case. It would be great if I could either assign a block directly, pass a block, or have it work with empty blocks pre-defined. I’m wondering if there’s something fundamental that I missed with TF 12 syntax.
This looks like a ‘no, not ever’: https://github.com/hashicorp/terraform/issues/21458#issuecomment-496022674
Current Terraform Version Terraform v0.12.0 Use-cases I would like to be able to set block arguments from a map of key-values. For example, suppose I have a map containing four argument values for …
That is absolutely and utterly infuriating.
I would like to start a petition to fork Terraform for the express purpose of removing attribute blocks because they’re dumb.
yes, they are a poor hack for the lack of more expressive variable typing
What really annoys me about them is it just renders down to a json array anyway, yet they stop you from just specifying an array to replace them for “readability” (see: verbosity)
Then they had to add a whole bunch of extra syntactic rubbish to make up for it, like the ‘dynamic’ keyword
yeah. The purpose of HCL is to be more declarative than full code. dynamic is extra unnecessary code
I’m so glad to hear others express this. I’d been thinking I was alone, and now I feel validated.
I’m also glad I’m not alone. Has this been brought up on github anywhere? I’d even really like to see just an optional flag that says “–allow-assigning-to-blocks” or something, so the user has to acknowledge it’s not best practice but let us do it anyway
You used to be able to do that, there was a hacky way to do so. They patched it out several versions ago (around 0.9/0.10 IIRC).
I recall, it was “accidentally allowed” from what I read
yes, unintentionally
Hashicorp aren’t interested in giving you multiple ways to declare the same graph. Which IMO is a good thing. I don’t want two ways to define these sub-blocks. That’s a price too high
That’s fair, but I think the way that they decided on is far too limited and doesn’t mesh with the rest of the functionality they’ve given the language. Unfortunately I can’t think of a reasonable suggestion to fix the current system other than “just let us do it the natural way”
I agree. I think they’ve painted themselves into a corner but I hope not
hm, something broke badly with cloudposse/ecr/aws v0.27.0
even if I give a name as name = format("%s/%s/%s", var.orgPrefix, var.system, each.value)
var.name
is passed into terraform-null-label
for processing into the prefix. As far as I can tell it shouldn’t replace slashes though. It might be easier to pass it through separately and let null-label do the work of creating the prefix for you
name = each.value
namespace = var.orgPrefix
stage = var.system
yes, then it works, but still - why mangle the name
I just blew all my repos, which is of course a rookie mistake since I did not check the plan properly 1st
it will strip the /
Hi everyone, I’ve got a question about the new module features (count, for_each).. I want to access an output from a module with a count (it’s a true/false flag to create only if the flag is true) and use that output conditionally… but I get an error that it is an empty tuple. Anyone have experience with this?
I promise I spent a few hours on this :smile: just came across try(...)
which seems to solve my problem
in modules with count
, the output is list, and you need to use something like join("", xxxxx.*.name)
to get a single item
in modules with for_each
, the output is a map
this is a map that I’m working on
I get an error about it being an empty tuple
output "alb_data" {
value = {
"public" = {
"http_listener_arn" = coalescelist(aws_alb_listener.http_public_redirect[*].arn, ["none"])[0]
"https_listener_arn" = aws_alb_listener.https_public.arn
"dns" = aws_lb.public.dns_name
}
"private" = {
"http_listener_arn" = coalescelist(aws_alb_listener.http_private_redirect[*].arn, ["none"])[0]
"https_listener_arn" = aws_alb_listener.https_private.arn
"dns" = aws_lb.private.dns_name
}
}
}
alb = var.create_alb ? module.alb[0].alb_data : var.params.alb
Error: Invalid index
on ../../../modules/services/app/main.tf line 8, in locals:
8: alb = var.create_alb ? module.alb[0].alb_data : var.params.alb
|----------------
| module.alb is empty tuple
The given key does not identify an element in this collection value.
changing to this seems promising
alb = try(module.alb[0].alb_data, var.params.alb)
if it’s empty, try
will not help
it just hides issues
hmm
what does module.alb
look like?
pretty standard
this var.create_alb ? module.alb[0].alb_data : var.params.alb
is the same as var.create_alb ? join("", module.alb.*.alb_data) : var.params.alb
and same as try(module.alb[0].alb_data, var.params.alb)
all should work, but they have a slightly diff behaviors
I see.. is the empty tuple not considered an error?
show how you invoke the module?
module "alb" {
source = "./modules/ALB"
count = var.create_alb ? 1 : 0
alb_listener_certificate_arn = local.alb_listener_certificate_arn
cluster_name = local.cluster_name
enable_alb_logs = var.enable_alb_logs
env = local.env
private_subnet_ids = local.private_subnet_ids
public_subnet_ids = local.public_subnet_ids
service_name = var.service_name
vpc_global_cidr = local.vpc_global_cidr
vpc_id = local.vpc_id
}
do you correctly set the variable var.create_alb
in both the module and the invocation of the module?
it sounds like it’s false
here
module "alb" {
source = "./modules/ALB"
count = var.create_alb ? 1 : 0
variable "create_alb" {
description = "Set to true to create ALB for this service, leave false to remain on shared cluster ALB"
default = false
}
is in my variables.tf
file on the module… most of the invocations here will have this set to false
and true
here
alb = var.create_alb ? module.alb[0].alb_data : var.params.alb
this is in the same module though how can they have different values?
alb = var.create_alb ? module.alb[0].alb_data : var.params.alb
and
module "alb" {
source = "./modules/ALB"
count = var.create_alb ? 1 : 0
are in the same module
the first is a local
that i use for setting listener rules and such
if I try the join function w/ empty string I get this:
Error: Invalid function argument
on ../../../modules/services/app/main.tf line 8, in locals:
8: alb = var.create_alb == true ? join("",module.alb[0].alb_data) : var.params.alb
|----------------
| module.alb[0].alb_data is object with 2 attributes
Invalid value for "lists" parameter: list of string required.
I feel like perhaps I’m making a silly mistake somewhere..
this module.alb[0]
gets the first item from the list
should be join("",module.alb.*.alb_data)
ah I see
where module.alb.*.
is list
you know the try(...)
actually is working very well here
yes try
does the same
hmmm
Error: Invalid function argument
on ../../../modules/services/app/main.tf line 8, in locals:
8: alb = var.create_alb ? join("",module.alb.*.alb_data) : var.params.alb
|----------------
| module.alb is tuple with 1 element
Invalid value for "lists" parameter: element 0: string required.
@Andriy Knysh (Cloud Posse) - thank you so much for helping me talk through this issue, I really appreciate it and everything else CP has provided the community! I am happy w/ current solution but still interested in talking this through… but if you are busy no need to continue here!
if you share the entire code, we can run a plan and see what happens (it’s difficult to understand anything looking at snippets of code)
module.alb.*.alb_data
and
var.params.alb
should be the same type
and since alb_data
is not a string, join(""…)
will not work here (sorry, did not notice that)
so try
is the best in this case
(but still does not explain the original issue you were seeing)
hmm this confuses me more or maybe sheds some light…
Error: Inconsistent conditional result types
on ../../../modules/services/app/main.tf line 8, in locals:
8: alb = var.create_alb ? module.alb.*.alb_data : var.params.alb
|----------------
| module.alb is tuple with 1 element
| var.create_alb is true
| var.params.alb is object with 2 attributes
just playing around w/ it trying to understand
Error: Inconsistent conditional result types
on ../../../modules/services/app/main.tf line 8, in locals:
8: alb = var.create_alb ? module.alb.*.alb_data : var.params.alb
|----------------
| module.alb is empty tuple
| var.create_alb is false
| var.params.alb is object with 2 attributes
The true and false result expressions must have consistent types. The given
expressions are tuple and object, respectively.
it says module.alb
is a tuple with 1 element but I’m trying to refer to the alb_data
output.. not module.alb
the 2nd output I understand… module.alb
is empty because we don’t create the module since it has count set to 0
I try to avoid using *
since 0.12 came out
try(module.alb[0].alb_data, var.params.alb)
should work for you
2020-09-10
Being able to forward submodule outputs with output "foo" { value = module.foo }
is very handy. Is there a way to do an export like that, but remove the layer of indirection? So that all of the outputs of foo are available as top level outputs of the exporting module?
no, outputs can only be created explicitly
Maybe if a tool was written to convert the tf to hcl, read all the modules and outputs, then check if all modules were outputted. If not, flag it
Or better yet, make the tool dump the module outputs if they are missing
I thought you could export the entire module output as a full object. Is that only for resources?
module "bananas" {
source = "./bananas"
}
output "module_bananas" {
value = module.bananas
}
We need a way to splat an output onto the module
output * {
value = module.bananas
}
Hello all. I’m looking for a pointer to some guidance/best practice for cycling IAM access keys as part of a terraformed deployment pipeline. Any recommendations?
I version the resource which forces a recreation
I think tainting the resource also does this, but is something you’d do on demand. Anyone?
yea Tainting will totally work too. I have a pipeline that pgp encrypts the secret and then decrypts it and stores the creds in vault. any consumer of the creds uses vault to retrieve them. this allows for seamless rotation.
@Luke Maslany can you add some more details about what technology you’re using for your pipeline? Perhaps there are alternatives to using/rotating IAM access keys and instead using runners with service accounts.
Thanks both.
I might be overthinking this then. We do have pipelines that store IAM keys in the AWS SSM parameter store.
In my mind I was looking to identify a way to have terraform toggle between the two access keys associated with an account.
I was thinking to have terraform recreate the key that wasn’t in active use, then use the new key to complete the terraform execution.
If the new deployment failed, the old key would be unaffected and would continue to work.
If the deployment succeeded, the production instances would now be running with the new key, and the next time a deployment was run it’d replace the old key as part of the new execution.
I’m currently looking at a deployment pipeline that uses a Jenkins job, to execute terraform, which creates a new ASG using the most recent AMI of an immutable image and user data created through the interpolation of a file template. The interpolated file template contains the access key. The user data then performs a transform on the config file for the application on the instances at runtime.
I am not thrilled about the current solution as the keys are visible in the user data for the instance.
However I am not sure how quickly I will be able to unpick the current implementation as it uses multiple nested modules, running some pretty old versions of terraform.
With that in mind, I’ve been looking to see if there was a quick win that would allow me to cycle the keys on each deployment, by adding some logic into the module that currently provisions the aws_iam_access_key to swap between key1 and key2 depending on which key was already in active use.
I have a feeling though it is going to be quicker to just fork the current module and update the user data script to pull the key direct from the SSM Parameter Store at runtime using an IAM role on the instance.
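(sketch of the instance-role side of that, with made-up names/ARNs; user data would then just call ssm get-parameter at boot:)
resource "aws_iam_role_policy" "read_app_access_key" {
  name = "read-app-access-key"
  role = aws_iam_role.app_instance.id # hypothetical instance role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameter"]
      Resource = "arn:aws:ssm:*:*:parameter/app/prod/access-key*"
    }]
  })
}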
hi folks! had a question (similar to this old issue) on passing through GCP service account credentials to the atlantis environment. We currently use terraform to host atlantis on an AWS ECS cluster, and would prefer not to have to keep the credentials file in Github or manually baking it wholesale into the task definition. Was wondering if there was an easy way to reference the required credentials.
Right now thinking of either placing it on AWS secrets manager or SSM parameter store, then querying it through the provider module and passing through to the google provider. Open to any other ideas on this
Firstly, we're using the google provider https://www.terraform.io/docs/providers/google/index.html which makes use of a local service account credentials file to execute terraform. Second, we…
this is better in the #atlantis channel
look at the Cloudposse atlantis module and see how they use Parameter Store
you can do something similar
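e.g. roughly (the parameter path and project are made up):
data "aws_ssm_parameter" "gcp_credentials" {
  name            = "/atlantis/gcp/credentials_json"
  with_decryption = true
}

provider "google" {
  # the google provider accepts the service account key JSON contents directly
  credentials = data.aws_ssm_parameter.gcp_credentials.value
  project     = "my-gcp-project"
}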
I’m having issues with TF being too pedantic about how I set up my IAC. I’m not really sure what the name for it is, but the way that it expects block syntax, doesn’t accept the same as arguments, and doesn’t allow for automation in the block syntax, at least as I see it. In the example below, I want to define a number of s3 buckets as part of the IAC for a microservice based application. If I had started with boto3, I’d be done by now, and have all the flexibility I needed. I’m pretty frustrated with this. Anyway, *is there any way to do what I have done below in a more DRY / maintainable way*? I just discovered another variation of the bucket which also appears to be defined in block syntax – one of the buckets is encrypted. So that means another 2-3 blocks for just that bucket.
You could probably use dynamic blocks to do what you want https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
I did make that attempt. I wasn’t able to get it to work.
The way that S3 is configured in terraform doesn’t really lend itself to making a generic module that covers all possibilities since there are so many options that aren’t separate resources.
.. and since those options are implemented as blocks. ( with all the restrictions of block syntax )
Sometimes blocks in blocks
yes. blocks in blocks. sometimes nesting to a stupid degree.
Cloudwatch is similarly annoying
I think that feeling comes from the overhead necessary to implement a declarative language. At least with CF, I never find myself going over a single blog post trying to figure out how for / for_each / objects work for a new use-case, only to find out that the way I wanted to do something was decided by the TF team to be not-a-best-practice. With CF, I more often find myself just realizing that the thing I want to do is missing.
With that said, though, the TF team does seem to be pretty responsive, always asking about the exact use-case the user is trying to accomplish, and often offering a work-around.
… I just feel like I rarely have the time to pull away from getting something working to create such posts.
locals {
  private_buckets = [
    "company-${var.environment}-secrets",
    "company-${var.environment}-service-auth",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-db-connections",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-service-file-upload",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-service-feedback"
    # object level logging = company-prod-cloudtrail-logs
  ]
  default_lifecycle = {
    id                                     = "DeleteAfterOneMonth"
    expiration_days                        = 31
    abort_incomplete_multipart_upload_days = 7
    enabled                                = false
  }
  private_buckets_w_lifecycles = {
    "company-service-imports" = {
      "name"         = "company-${var.environment}-service-imports"
      "lifecycle_rl" = local.default_lifecycle
    }
  }
  public_object_buckets = [
    # "company-${var.environment}-service-transmit"
  ]
  public_buckets_w_lifecycles = {
    "company-service-transmit" = {
      "name"         = "company-${var.environment}-service-transmit"
      "lifecycle_rl" = local.default_lifecycle
    }
  }
}
resource "aws_s3_bucket" "adv2_priv_bucket" {
for_each = toset(local.private_buckets)
bucket = each.value
tags = local.tags
}
resource "aws_s3_bucket" "adv2_priv_bucket_w_lc" {
for_each = local.private_buckets_w_lifecycles
bucket = each.value.name
lifecycle_rule {
id = each.value.lifecycle_rl.id
expiration {
days = each.value.lifecycle_rl.expiration_days
}
enabled = each.value.lifecycle_rl.enabled
abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
}
tags = local.tags
}
resource "aws_s3_bucket" "adv2_pubobj_bucket_w_lc" {
for_each = local.public_buckets_w_lifecycles
bucket = each.value.name
# log to cloudtrail bucket (this is for server logging, not object level logging)
# logging {
# target_bucket = aws_s3_bucket.adv2_cloudtrail_log_bucket.id
# }
lifecycle_rule {
id = each.value.lifecycle_rl.id
expiration {
days = each.value.lifecycle_rl.expiration_days
}
enabled = each.value.lifecycle_rl.enabled
abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
}
tags = local.tags
}
resource "aws_s3_bucket" "adv2_pubobj_bucket" {
for_each = toset(local.public_object_buckets)
bucket = each.value
tags = local.tags
}
resource "aws_s3_bucket_public_access_block" "adv2_priv_s3" {
for_each = aws_s3_bucket.adv2_priv_bucket
bucket = each.value.id
# AWS console language in comments
# Block public access to buckets and objects granted through new access control lists (ACLs)
block_public_acls = true
# Block public access to buckets and objects granted through any access control lists (ACLs)
ignore_public_acls = true
# Block public access to buckets and objects granted through new public bucket or access point policies
block_public_policy = true
# Block public and cross-account access to buckets and objects through any public bucket or access point policies
restrict_public_buckets = true
}
resource "aws_s3_bucket_public_access_block" "adv2_priv_s3_w_lc" {
for_each = aws_s3_bucket.adv2_priv_bucket_w_lc
bucket = each.value.id
# AWS console language in comments
# Block public access to buckets and objects granted through new access control lists (ACLs)
block_public_acls = true
# Block public access to buckets and objects granted through any access control lists (ACLs)
ignore_public_acls = true
# Block public access to buckets and objects granted through new public bucket or access point policies
block_public_policy = true
# Block public and cross-account access to buckets and objects through any public bucket or access point policies
restrict_public_buckets = true
}
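(An aside on the dynamic-block suggestion above: a minimal sketch, with hypothetical bucket names rather than the real ones, of collapsing this into one map and one resource, where the lifecycle block only renders when a rule is supplied.)
locals {
  buckets = {
    "company-prod-secrets" = { lifecycle_rules = [] }
    "company-prod-service-imports" = {
      lifecycle_rules = [{
        id              = "DeleteAfterOneMonth"
        enabled         = false
        expiration_days = 31
        abort_days      = 7
      }]
    }
  }
}
resource "aws_s3_bucket" "this" {
  for_each = local.buckets
  bucket   = each.key
  dynamic "lifecycle_rule" {
    for_each = each.value.lifecycle_rules
    content {
      id                                     = lifecycle_rule.value.id
      enabled                                = lifecycle_rule.value.enabled
      abort_incomplete_multipart_upload_days = lifecycle_rule.value.abort_days
      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}
The public access block resources could be driven the same way off whichever subset of the map needs them.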
v0.14.0-alpha20200910 0.14.0 (Unreleased) ENHANCEMENTS: cli: A new global command line option -chdir=…, placed before the selected subcommand, instructs Terraform to switch to a different working directory before executing the subcommand. This is similar to switching to a new directory with cd before running Terraform, but it avoids changing the state of the calling shell. (https://github.com/hashicorp/terraform/issues/26087)
This new option is intended to address the previous inconsistencies where some older subcommands supported partially changing the target directory (where Terraform would use the new directory incon…
Terraform 0.13.3 will start warning of an upcoming deprecation to the ansible, chef, and puppet provisioners — https://www.reddit.com/r/Terraform/comments/iq2z11/terraform_0133_will_include_a_deprecation_notice/
NB: I’m cross-posting from the HashiCorp community forum for visibility and feedback. Terraform is beginning a process to deprecate the built-in…
lol!! alright everyone, let’s gear up for terraform 0.14
! =P
Yea, totally agree
(unfortunately, we still have some places where we added >= 0.12, < 0.14)
#FML
Anyways, we have some better tooling to handle this and hopefully will be less painful with every iteration
@Erik Osterman (Cloud Posse) Is the new standard to only pin >= 0.12
?
yes, >= 0.x
where x
is the minimum supported version
so maybe >= 0.13
if it uses for_each
on modules
Let’s DO IT!!!!!!
lol
fyi, just saw that alpha releases are the new norm… i don’t think they are about to drop 0.14 so soon after 0.13 though… https://discuss.hashicorp.com/t/terraform-0-14-0-alpha-releases/14003
The Terraform core team is excited to announce that we will be releasing early pre-release builds throughout the development of 0.14.0. Our hope is that this will encourage the community to try out in-progress features, and offer feedback to help guide the rest of our development. These builds will be released as the 0.14.0-alpha series, with the pre-release version including the date of release. For example, today’s release is 0.14.0-alpha20200910. Each release will include one or more change…
and here are details on the feature in this alpha release… https://discuss.hashicorp.com/t/terraform-0-14-concise-plan-diff-feedback-thread/14004
We have a new concise diff renderer, released today in Terraform 0.14.0-alpha20200910. This post explains why we’ve taken this approach, how the rendering algorithm works, and asks for your feedback. You can try out this feature today: Download Terraform 0.14.0-alpha20200910 Review the changelog More on Terraform 0.14-alpha release Background Terraform 0.12 introduced a new plan file format and structural diff renderer, which was a significant change from 0.11. Most notably, for updated reso…
@Erik Osterman (Cloud Posse) Removing that upper bound on TF version is going to pay dividends
Apologies if this is a question that Google can answer: Are there any examples of using a kms secret with the terraform-aws-rds-cluster
? We currently run an Ansible job to create databases, but want to enable all developers to be able to create their own databases and are not willing to put credentials in a statefile.
More Info: 1: I found https://github.com/cloudposse/terraform-aws-rds-cluster/issues/26 from 2018, but thought there might be new information elsewhere. 2: We use Jenkins and Atlantis, so we could have Jenkins call the Ansible playbook and put our root passwords in Vault, but I’d like to make things simpler.
I use Parameter store
in jenkins there is a Parameter Store plugin for jenkins
you can create the PS with terraform with a lifecycle rule to ignore value changes
resource "aws_ssm_parameter" "dd_api_key" {
name = "/pepe-service/datadog/api_key"
description = "API key for datadog"
type = "SecureString"
value = "APIKEY"
tags = var.tags
lifecycle {
ignore_changes = [
value,
]
}
}
that is what Terraform knows
but the value is injected by hand, jenkins, another tool
chamber
etc
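(A minimal sketch of the consuming side, with a hypothetical parameter name: once Jenkins/chamber has injected the real value, other code can read it back with a data source; just be aware the decrypted value then lands in that project’s state.)
data "aws_ssm_parameter" "db_master_password" {
  name = "/pepe-service/rds/master_password" # hypothetical path
}
# reference data.aws_ssm_parameter.db_master_password.value wherever the password is needed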
2020-09-11
I am using tf-13. terraform init works fine for me, but when I do a terraform plan it throws an error saying it cannot initialize the plugin. Anyone else seen this?
Can you paste what you’re seeing here?
Error: Could not load plugin
Plugin reinitialization required. Please run “terraform init”.
Plugins are external binaries that Terraform uses to access and manipulate resources. The configuration provided requires plugins which can’t be located, don’t satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your configuration, including providers used in child modules. To see the requirements and constraints, run “terraform providers”.
Failed to instantiate provider “registry.terraform.io/-/aws” to obtain schema: unknown provider “registry.terraform.io/-/aws”
You can try TF_LOG=DEBUG terraform plan to see verbose output
Wait, what’s that “/-/aws” there? How are you declaring the plugin?
I am not declaring the plugin I just have my provider as aws
provider "aws" {
region = "us-west-2"
profile = var.awsProfile
}
Can you look for any other references to aws in your code? Obviously exclude things like ‘resource “aws..“’
2020/09/11 12:28:45 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
2020/09/11 12:28:45 [INFO] backend/local: starting Plan operation
2020-09-11T12:28:45.887-0400 [INFO] plugin: configuring client automatic mTLS
2020-09-11T12:28:45.910-0400 [DEBUG] plugin: starting plugin: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 args=[.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5]
2020-09-11T12:28:45.922-0400 [DEBUG] plugin: plugin started: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 pid=30307
2020-09-11T12:28:45.922-0400 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5
2020-09-11T12:28:45.957-0400 [INFO] plugin.terraform-provider-aws_v3.6.0_x5: configuring server automatic mTLS: timestamp=2020-09-11T12:28:45.957-0400
2020-09-11T12:28:45.988-0400 [DEBUG] plugin: using plugin: version=5
2020-09-11T12:28:45.989-0400 [DEBUG] plugin.terraform-provider-aws_v3.6.0_x5: plugin address: address=/var/folders/cs/fpp3k3zj61q0hd1pn41nmf9xl5ttrf/T/plugin474919102 network=unix timestamp=2020-09-11T12:28:45.988-0400
2020-09-11T12:28:46.212-0400 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2020-09-11T12:28:46.215-0400 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 pid=30307
2020-09-11T12:28:46.215-0400 [DEBUG] plugin: plugin exited
I couldn’t find any … this worked like 10 mins back .. I first had a conflict of tf version, so I upgraded to tf 13.2 and ran the plan and it worked… made some changes and ran consecutive plans, which led to this issue
Can you try this in a clean environment without state, without .terraform folder, etc.?
Yea that helped… my state file was corrupted
Makes sense
Hi I’m working with the VPC Module but getting
Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: 8332210c-dcbe-4b6d-bde4-c8d37ce655c0
on .terraform/modules/aws_infrastructure.vpc.eks_subnets/nat-gateway.tf line 67, in resource "aws_route" "default":
67: resource "aws_route" "default" {
Wondering if there’s any debug help, or what I can do to get around this. I’ve already done terraform apply and this is a 2nd run
Can you share your code for using the module? Feel free to replace any ids etc.
Is there a chance that the terraform state didn’t get persisted? … so it’s trying to recreate it
Is there a way to pull the k8s auth token from the CloudPosse terraform-aws-eks-cluster
module? If not, is there any objection to a PR to expose the token via the module’s outputs?
@Andriy Knysh (Cloud Posse)
specifically I’m looking to access the token
attribute from the aws_eks_cluster_auth
data resource here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/df8b991bef53fcab8f01c542cd1c3ccc6242b61c/auth.tf#L72
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
I think we should not do it, it’s not a concern of the module
we can get EKS kubeconfig from the cluster anytime
@Chien Huey what is your use-case?
ah sorry,
yes, we have the token already
we can add it to the outputs
I am trying to use the helm
provider functionality along with the CloudPosse EKS modules to bootstrap a cluster. As part of that bootstrap, I want to use the helm
provider to install fluxcd
I thought about kubeconfig, which we can get from the cluster anytime w/o including it into the module
@Erik Osterman (Cloud Posse), this touches on your recent statement – or maybe it’s in your module dev docs – that you do not expose secrets via TF module outputs. I get that, but there are plenty of use cases where it is really helpful.
default to not
What if the module has a feature flag for the output? If the flag is set to true, the output contains that setting. If it’s set to false, then it’s null.
yes that’s what I wanted to do
the flag should default to false
to not show the token in the output
@Chien Huey if you want to open a PR and add a feature flag called aws_eks_cluster_auth_token_output_enabled
We’ll approve that
@Andriy Knysh (Cloud Posse) any opinion on the variable name?
aws_eks_cluster_auth_token_output_enabled
or auth_token_output_enabled
maybe the second
ok, let’s do auth_token_output_enabled
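(Roughly what that flag could look like; the variable and data source names here are guesses, not the module’s actual code.)
variable "auth_token_output_enabled" {
  type        = bool
  default     = false
  description = "Set true to expose the EKS auth token as an output"
}
output "eks_cluster_auth_token" {
  sensitive   = true
  description = "EKS auth token (null unless auth_token_output_enabled is true)"
  value       = var.auth_token_output_enabled ? data.aws_eks_cluster_auth.this.token : null # hypothetical data source name
}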
heads up @Chien Huey
Hello, I’m trying to create a custom root module that is comprised of a bunch of public modules. In this case, a bunch of CloudPosse modules but am having some questions regarding the layout..
I’m trying to follow this https://github.com/cloudposse/terraform-root-modules but then stumbled on this https://github.com/cloudposse/reference-architectures/tree/0.13.0/templates .
So, is the repo layout supposed to look like what is shown in link #2, using the example Makefile from link #1?
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
I would not use our terraform-root-modules
or the current state of reference-architectures
. The root modules are on life-support for existing customers. Most of them are tf 0.11. Our reference architectures for 0.12+ are not yet public, but should be by the end of the year.
They’re entirely redesigned from the ground up to work with terraform cloud
no, I want to create “my own” root module that is ultimately just using a bunch of public modules
but was using those links as a reference point on how to start
Aha, yes, that may be helpful then as an idea for how to organize things.
Also, happy to jump on a call with you anytime and get you unblocked. calendly.com/cloudposse
2020-09-14
Hi Folks, I am trying to reference a module in a private repo. Is there a standard way to tell git+ssh to use the $USER variable for login instead of using “root” when using the geodesic shell?
I do this on pipelines by adding a private key to the pipeline container and then adding the repo url to known hosts. then i can reference the module using ssh
do your modules start with git://<fixed-pipeline-user>@<your private git>/…?
If so, in my case I would still like users to be able to run geodesic locally
never mind… I was thrown off by an error with the module url: all I needed was to reference the module correctly:
export TF_CLI_INIT_FROM_MODULE="git::<ssh://git@<private> git repo>/..."
yea I use mostly gitlab ci and my before_script on yamls end up looking like this
before_script:
  - mkdir ~/.ssh
  - chmod 700 ~/.ssh
  - cat "${GIT_PRIVATE_KEY}" >> ~/.ssh/id_rsa
  - ssh-keyscan git.domain.com >> ~/.ssh/known_hosts
  - chmod 600 ~/.ssh/*
  - terraform init
Hi guys, is it possible to attach an existing security group to the EC2 instances instead of creating a new one in terraform-aws-elastic-beanstalk-environment
module?
Yes check out the module inputs, you add them as a list using allowed_security_groups
@pjaudiomv thanks for your reply, but it seems like that just allows the security group you provide to access the created default security group…?
ah ok, so it creates a default security group but the ingress is set to the provided security groups. There is no configurable option to not create that group
cool thanks
Anyone have a recommendation for a blog post / video / example repo that shows how to do Multiple Accounts in AWS well using Terraform? IMHO it’s a complicated topic and I’ve blundered through it lightly before, so I’m looking to avoid that going forward.
my gitlab yaml ends up looking like this
---
image:
  name: hashicorp/terraform:0.13.1
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
before_script:
  - mkdir ~/.ssh
  - chmod 700 ~/.ssh
  - cat "${GITLAB_PRIVATE_KEY}" >> ~/.ssh/id_rsa
  - ssh-keyscan git.domain.com >> ~/.ssh/known_hosts
  - chmod 600 ~/.ssh/*
stages:
  - plan
  - apply
terraform_plan_only:
  stage: plan
  resource_group: terraform-lock
  script:
    - |
      for ACCT in $AWS_ACCT_LIST
      do
        rm -rf .terraform
        echo $ACCT
        eval "ACCT_FILE=\$$ACCT"
        cat "${ACCT_FILE}" > "$(pwd)/.env"
        source "$(pwd)/.env"
        export $(cat $(pwd)/.env | xargs)
        terraform init \
          -backend-config=bucket="$S3_BUCKET" \
          -backend-config=dynamodb_table="$LOCK_TABLE" \
          -backend-config=region="$AWS_REGION"
        terraform plan
      done
  only:
    - merge_requests
terraform_apply:
  stage: apply
  retry: 1
  resource_group: terraform-lock
  script:
    - |
      for ACCT in $AWS_ACCT_LIST
      do
        rm -rf .terraform
        echo $ACCT
        eval "ACCT_FILE=\$$ACCT"
        cat "${ACCT_FILE}" > "$(pwd)/.env"
        source "$(pwd)/.env"
        export $(cat $(pwd)/.env | xargs)
        terraform init \
          -backend-config=bucket="$S3_BUCKET" \
          -backend-config=dynamodb_table="$LOCK_TABLE" \
          -backend-config=region="$AWS_REGION"
        terraform plan -out=plan.plan
        terraform apply plan.plan
      done
  only:
    - master
I can paste what those env vars look like in a min
AWS_ACCT_LIST
space separated list of account aliases, e.g. staging-2343525 prod-25345634 dev-234346
each one of those has a corresponding env var thats a file
ex staging-2343525
AWS_ACCESS_KEY_ID=AKGFHGCHGKAIZEXBLEG5
AWS_SECRET_ACCESS_KEY=SECRET
AWS_REGION=us-east-1
LOCK_TABLE=tfstate-lock-2343525
S3_BUCKET=tfstate-2343525
this could be a horrible example, but it’s my experience and it works for my use case
I get what you’re trying to show me. That stuff I fully understand. The more interesting part I’m looking to understand from folks is the IAM assumable roles / user delegation setup through Terraform. So maybe my question is not clear enough
ah ok yes that is a much larger and different discussion
Thank you though dude — sorry, I didn’t express that I do appreciate the help!
np
Are you using AWS organization?
It’s a lot easier to create accounts under AWS Organization because it will automatically create the role OrganizationAccountAccessRole. You can assume that role and finish configuring the account
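(A minimal sketch of the delegation piece, with made-up account ids and module names: one aliased provider per member account, each assuming the role Organizations created.)
provider "aws" {
  alias  = "staging"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole" # hypothetical account
  }
}
module "staging_vpc" {
  source = "./modules/vpc" # hypothetical module
  providers = {
    aws = aws.staging
  }
}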
@zeid.derhally I am using Organizations and I do create the account through that process. Good to know that is the smartest path. I will add that to my list of “Many steps to accomplish adding a new account / environment”.
2020-09-15
Hi guys, I’m getting the error below when using the terraform-aws-ec2-instance
module:
Error: Your query returned no results. Please change your search criteria and try again.
on .terraform/modules/replicaSet40/main.tf line 64, in data "aws_ami" "info":
64: data "aws_ami" "info" {
you should probably post the messages in the thread to keep the chat a bit cleaner..
Looks to me like you are looking for an AMI id from var.ami_rhel77
but have defined an AMI id for var.ami_omserver40
sorry, I pasted the wrong variable but I do have
variable "ami_rhel77" {
default = "ami-0170fc126935d44c3"
}
and it’s in the correct AWS region
assuming the AMI ID and the owner account’s ID are correct, I would double check that you have shared the AMI with the account where you execute the terraform code
the “ami-0170fc126935d44c3” is a RHEL 7.7 public image available to all and shared by Red Hat
oh, in that case I’m not sure what might be wrong.. Maybe the owner account id is not their account’s id? I don’t use rhel, but a quick google suggests that they share AMIs from 309956199498
I still have the same error but for the eks module which has this local variable:
eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
and I have this
variable "kubernetes_version" {
default = "1.18"
}
I think the latest EKS version is 1.17
let me try it
awesome! that worked. Thanks again! I didn’t know eks is a little behind the versions on kubernetes.io site.
and here is my Terraform code:
module "replicaSet40" {
source = "git::<https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=tags/0.24.0>"
ssh_key_pair = var.ssh_key_pair
instance_type = var.ec2_replicaSet
ami = var.ami_rhel77
ami_owner = var.ami_owner
vpc_id = module.vpc.vpc_id
root_volume_size = 10
assign_eip_address = false
associate_public_ip_address = false
security_groups = [aws_security_group.sg_replicaSet.id]
subnet = module.subnets.private_subnet_ids[0]
name = "om-replicaSet40"
namespace = var.namespace
stage = var.stage
tags = var.instance_tags
}
variable "ami_owner" {
default = "655848829540"
}
variable "ami_omserver40" {
default = "ami-00916221e415292ed"
}
I do find my ami when running aws ec2 describe-images --owners self
AMis are region specific. Are you using the same region?
Thanks Andriy for replying. Yes, I’m using the AMI for the same region. This already has been resolved. The owner_id was incorrect.
2020-09-16
HCS Azure Marketplace Integration Affected Sep 16, 09:32 UTC Identified - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.
If you have questions or are experiencing difficulties with this service please reach out to your customer support team.
IMPACT: Creating HashiCorp Consul Service on Azure clusters may fail.
We apologize for this…
HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.
HCS Azure Marketplace Integration Affected Sep 16, 14:06 UTC Update - We are confident we have identified an issue externally with Azure Marketplace Application offerings and are working with Microsoft support to escalate and resolve the issue.
We apologize for this disruption in service and appreciate your patience.Sep 16, 09:32 UTC Identified - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with…
HCS Azure Marketplace Integration Affected Sep 16, 16:16 UTC Monitoring - A fix has been implemented by our Azure partners and we are seeing recovery from our tests. We will continue to monitor the environment until we are sure the incident is resolved.
HashiCorp Cloud TeamSep 16, 14:06 UTC Update - We are confident we have identified an issue externally with Azure Marketplace Application offerings and are working with Microsoft support to escalate and resolve the issue.
We apologize for this disruption in service and appreciate…
HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.
HCS Azure Marketplace Integration Affected Sep 16, 16:35 UTC Resolved - We are considering this incident resolved. If you see further issues please contact HashiCorp Support.
We apologize for this disruption in service and appreciate your patience.
Hashicorp Cloud TeamSep 16, 16:16 UTC Monitoring - A fix has been implemented by our Azure partners and we are seeing recovery from our tests. We will continue to monitor the environment until we are sure the incident is resolved.
HashiCorp Cloud TeamSep 16, 14:06 UTC Update - We are…
Can someone chime in on the pros and cons of using terraform “workspace”? I’m trying to see how to structure TF for multiple environments, and most of the “advanced” gurus prefer to avoid it. This is the one I’m following and I’m so confused as a beginner newb
https://www.oreilly.com/library/view/terraform-up-and/9781491977071/ch04.html
Chapter 4. How to Create Reusable Infrastructure with Terraform Modules At the end of Chapter 3, you had deployed the architecture shown in Figure 4-1. Figure 4-1. A … - Selection from Terraform: Up and Running [Book]
i started out using workspaces, but felt they were too implicit/invisible. the explicitness of a directory hierarchy made more sense for our team/usage
sure, but isn’t that duplicating a whole bunch of iac just for the sake of it? or do u have the “core” modules under ./module
and reference it under production/staging/test/whatever?
exactly, we use a core module over and over across multiple accounts and envs
modules in a different repo and pinned to your compositions works great
technically, we use terragrunt to manage the workflow, but that’s not strictly necessary
we just use 1 mono-repo for the modules rather than 1 repo per module (small team, too much to deal with if we broke it further)
so if i understand the modules correctly, the "referencee" still has the vars.tf
duplicated for every module u want to utilize?
staging/main.tf -- reference ../modules/whatever.tf
staging/vars.tf -- vars for ../modules/whatever.tf
test/main.tf ... same as above
test/vars.tf
production/...
you have options… you can expose them on the cli, or code the values in main.tf
, or use a wrapper like terragrunt to use core module directly (something like terraform’s -from-module
Will answer this in next week’s #office-hours
I am not a big fan of terraform workspaces due to the fact we can not keep the state files in different s3 buckets. I prefer terragrunt over terraform workspaces
I think I was lucky and only started with TF 12+, workspaces are the best.. both in s3 and TF Cloud. I just went for parameterised stacks: depending on the workspace, select a key in a map (using the built-in ‘terraform.workspace’ variable) and that is it. Later on I extended this out to every env having its own file. I saw a lot of examples like this:
and just didn’t get it, this is not DRY. I don’t care about ‘stage’ or ‘prod’. I just have environments with different settings.
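(A minimal sketch of that pattern, with made-up settings: one map keyed by workspace name, so an unknown workspace fails fast at plan time.)
locals {
  env_settings = {
    staging = { instance_type = "t3.small", min_size = 1 }
    prod    = { instance_type = "m5.large", min_size = 3 }
  }
  settings = local.env_settings[terraform.workspace]
}
# resources then reference local.settings.instance_type, local.settings.min_size, etc.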
v0.13.3 (September 16, 2020) BUG FIXES: build: fix crash with terraform binary on openBSD (#26250) core: prevent create_before_destroy cycles by not connecting module close nodes to resource instance destroy nodes (https://github.com/hashicorp/terraform/issues/26186)
There are two commits here, since go mod tidy had a few things to clean up before I upgraded the dependency, and I thought it might be easier to review as separate commits.
One of the tenets of the graph transformations is that resource destroy nodes can only be ordered relative to other resources, and can't be referenced directly. This was broken by the module cl…
Hey folks — would love some input into a module dependency issue I’m having using the CP terraform-aws-elasticsearch module.
I have my root project which consumes the above module. That module takes in an enabled flag var and a dns_zone_id var. They are used together in the below expression to determine if the ES module should create hostnames for the ES cluster:
module "domain_hostname" {
source = "git::<https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.7.0>"
enabled = var.enabled && var.dns_zone_id != "" ? 1 : 0
...
}
This is invoked twice for two different hostnames (kibana endpoint and normal ES endpoint).
Now my consumption of the ES module doesn’t do anything special AFAICT. I do pass in dns_zone_id as a reference to another module’s output: dns_zone_id = module.subdomain.zone_id
I previously thought the module in module usage pattern was causing the below issue (screenshotted) because that was just too deep of a dependency tree for Terraform to walk (or something along those lines), but I’ve just now upgraded to Terraform 0.13 for this project and I’m using the new depends_on = [module.subdomain]
. Yet, I’m still getting this same error as I was on 0.12:
Similar issue from the project issues itself, but back in 2018: https://github.com/cloudposse/terraform-aws-elasticsearch/issues/13
I was previously solving this via a two phase apply, but I was really hoping the upgrade to 0.13 would allow me to get around that hack.
While trying to test a new module, which depends on this one, I added example usage: module "vpc" { source = "git://github.com/cloudposse/terraform-aws-vpc.git?ref=master"…
if the zone specified to var.dns_zone_id
is being created in the same apply, then this will happen
there is no way around that limitation in terraform. just have to remove that condition from the expression
Yeah, that sounds like my sticking point… now is there no way to get around that dependency snag even with the new module depends_on?
depends_on - Creates explicit dependencies between the entire module and the listed targets. This will delay the final evaluation of the module, and any sub-modules, until after the dependencies have been applied. Modules have the same dependency resolution behavior as defined for managed resources.
Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.
Like why does that not solve the problem. If I’m getting the dns_zone_id value from module.subdomain and I specify “Hey wait for subdomain to be applied” via depends_on… I was assuming that was the whole point of depends_on. But maybe I’m misunderstanding that.
it shifts the problem to the user calling the module, but no it does not remove the limitation
oh, depends_on is not count
Argh. This is frustrating.
@Matt Gowie the depends_on
in 0.13 lets you have modules depend on other things, but it doesn’t fix the count issue. Count needs to be calculated before terraform starts applying anything, which is why the issue is still appearing
i haven’t tried using the two together, but i don’t think it will matter… terraform understands the directed graph, so because of the var reference, it already knows that it needs to resolve one before the other
I wonder if I instead update the module to accept dns_zone_name and then use data.aws_route53_zone to look up the zone to find the zone_id then that might do it? I can statically pass the dns_zone_name.
the count and for_each expressions all must resolve right at the beginning of the plan. they can depend on a data source, but only as long as that data source does not depend on a resource
Okay — Take away for myself is that count is always calculated during the plan and needs to resolve. Regardless of depends_on. TIL.
yeah, i’d still recommend removing that condition from the expression and rethinking the approach
Maybe I’ll update the module to add an explicit flag for the hostname resources instead of having it rely on a calculated enabled + dns_zone_id != ""
condition
that’s been my workaround also
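(A rough sketch of that workaround, with a hypothetical variable name: the caller sets a plain bool, so the module’s count no longer depends on another resource’s output.)
variable "dns_hostname_enabled" {
  type        = bool
  default     = false
  description = "Set true to create the Route53 hostnames (requires the zone to already exist)"
}
module "domain_hostname" {
  source  = "git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.7.0"
  enabled = var.dns_hostname_enabled
  # ...
}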
I’ll shoot for that for now. Thanks for the insight gents!
2020-09-17
Hey folks, after upgrading my Terraform from 0.12 to 0.13, I’m unable to run terraform init
due to terraform failing to find provider packages.
terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/template v2.1.2
- Using previously-installed hashicorp/kubernetes v1.13.2
- Using previously-installed hashicorp/random v2.3.0
- Using previously-installed mongodb/mongodbatlas v0.6.4
- Using previously-installed hashicorp/null v2.1.2
- Using previously-installed hashicorp/local v1.4.0
- Finding hashicorp/aws versions matching ">= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints >= 3.0.*, >= 2.0.*, >=
2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0,
~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*
terraform providers --version
Terraform v0.13.3
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.1.2
+ provider registry.terraform.io/mongodb/mongodbatlas v0.6.4
cat versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = ">= 0.6.4"
    }
  }
  required_version = ">= 0.13"
}
cat provider.tf
# Configure the AWS Provider
provider "aws" {
  region  = "us-east-2"
  profile = "default"
}
# Configure the MongoDB Atlas Provider
provider "mongodbatlas" {
}
add a version constraint to your aws provider block
provider "aws" {
  version = "~> 3.0"
  region  = "us-east-2"
  profile = "default"
}
I added the version constraint into my aws provider block, deleted the .terraform
directory and ran terraform init
again. I still get the same errors.
Is it because the vpc module has this versions.tf file:
terraform {
  required_version = ">= 0.12.0, < 0.14.0"
  required_providers {
    aws      = ">= 2.0, < 4.0"
    template = "~> 2.0"
    local    = "~> 1.2"
    null     = "~> 2.0"
  }
}
does this help? https://www.terraform.io/upgrade-guides/0-13.html#why-do-i-see-provider-during-init-
Upgrading to Terraform v0.13
in particular, the commands using state replace-provider
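(For anyone following along, the form from the upgrade guide is roughly terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws, run once per legacy provider address in the state.)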
so, I had already read that documentation and also ran the state replace-provider command. See the output of my terraform providers:
➜ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] >= 3.0.*, ~> 3.0
├── provider[registry.terraform.io/mongodb/mongodbatlas] >= 0.6.4
├── module.subnets
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│ ├── module.this
│ ├── module.utils
│ │ ├── provider[registry.terraform.io/hashicorp/local] >= 1.2.*
│ │ └── module.this
│ ├── module.nat_instance_label
│ ├── module.nat_label
│ ├── module.private_label
│ └── module.public_label
├── module.eks_cluster
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
│ ├── provider[registry.terraform.io/hashicorp/kubernetes] ~> 1.11
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*, < 4.0.*
│ ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ ├── module.label
│ └── module.this
├── module.omserver40
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ └── module.this
├── module.omserver42
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ └── module.this
├── module.cm-ubuntu16
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ └── module.this
├── module.eks_node_group
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 3.0.*
│ ├── provider[registry.terraform.io/hashicorp/template] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/local] >= 1.3.*
│ ├── provider[registry.terraform.io/hashicorp/random] >= 2.0.*
│ ├── module.label
│ └── module.this
├── module.replicaSet40
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ └── module.this
├── module.replicaSet42
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ └── module.this
├── module.vpc
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*, < 4.0.*
│ ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ ├── module.label
│ └── module.this
├── module.alb_om40
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
│ ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│ ├── module.access_logs
│ ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ ├── module.label
│ └── module.s3_bucket
│ ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│ ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│ ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│ └── module.default_label
│ ├── module.default_label
│ └── module.default_target_group_label
├── module.bastion
│ ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│ └── module.this
└── module.alb_om42
├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
├── module.access_logs
├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
├── module.s3_bucket
├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
└── module.default_label
└── module.label
├── module.default_label
└── module.default_target_group_label
Providers required by state:
provider[registry.terraform.io/hashicorp/kubernetes]
provider[registry.terraform.io/hashicorp/aws]
provider[registry.terraform.io/hashicorp/null]
provider[registry.terraform.io/hashicorp/template]
provider[registry.terraform.io/mongodb/mongodbatlas]
I also tried to rule out the state files by copying all my terraform code to a new directory and run terraform init
. Still got the same error.
yeah, i think you have some kind of irreconcilable version constraint?
- Finding hashicorp/aws versions matching ">= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...
in ├── module.alb_om40
and module.access_logs
and module.s3_bucket
those are all Cloud Posse modules
What version of them are you targeting? It may be that you want to target a newer version
I point the modules source
to the Github repo of Cloud Posse using the latest tag
Got it
i believe they’ve become amenable to reducing the restriction to >=
… open a pr
let me show you one example from the ALB module tag 0.17.0
cat versions.tf
terraform {
  required_version = ">= 0.12.0, < 0.14.0"
  required_providers {
    aws      = "~> 2.0"
    template = "~> 2.0"
    null     = "~> 2.0"
    local    = "~> 1.3"
  }
}
Ah it may be that you need to pin your provider to 2 then
(and open a pr )
Yeah, if you’re trying to upgrade to AWS provider 3 then you’ll need to deal with those ~> 2.0
blocks.
If you submit PRs for using >= 2.0
and post them in #pr-reviews then we can check em out and try to get them merged if they pass tests. The new version did introduce a bunch of small changes that have broken tests though so it’s not a totally pain free module upgrade all of the time.
sure, give me some time as I’m pretty busy with work but will try to create this PR today. Thanks for all the help
Well, I finished work and started looking into this PR. I don’t know how I would go about this, because module terraform-aws-alb
calls the terraform-aws-lb-s3-bucket
module which calls the terraform-aws-s3-log-storage
module. They all use ~> 2.0
update the deepest module first, publish new version, then move up the stack
A Terraform state migration tool for GitOps. Contribute to minamijoyo/tfmigrate development by creating an account on GitHub.
Is it the part about using terraform against localstack that stands out? (which is pretty interesting)
for me, the ability to define state moves in code, and plan/apply them is really slick. i’m always renaming and rethinking things, and that often translates to a lot of state manipulation
Hello All, quick question regarding terraform… how can I use terraform to clone an existing EMR cluster
you can not do it in one go
you will have to create a TF project, import the resource with terraform import, and represent that resource in code by looking at the state file
BUT
you can use this
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
which will do it for you
it is a pretty good tool
2020-09-18
:wave: Hi guys, this is Nitin here and I have just come across this slack channel. If this is not the right channel then please do let me know.
As part of provisioning EKS cluster on AWS we are exploring terraform-aws-eks-cluster
https://github.com/cloudposse/terraform-aws-eks-cluster
What is the advantage of using the cloud posse terraform module over the community published terraform module to provision an EKS cluster on AWS?
Thanks a lot
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Personally, terraform is a fast moving river and it changes rapidly. This is one of the reasons why I am staying away from 3rd party modules for now. (Again, just iterating that this is my personal opinion only).
But if there is a 3rd party module that does what I needed, then I’d rather prefer to adopt that module from a well established source like CloudPosse over a random community module in the registry.
The guys here are very knowledgeable in terraform, and so I’d trust them to keep things up-to-date, well tested and compatible with the recent releases of terraform.
Speaking from personal experience, the other terraform module (terraform-aws-modules/terraform-aws-eks) is really unstable. They commit breaking changes all the time. It was a nightmare to use. The CloudPosse one has been WAY more stable
thanks Andrew
A general question for users and contributors of this module My feeling is that complexity getting too high and quality is suffering somewhat. We are squeezing a lot of features in a single modul…
@roth.andy thanks for the pointers. I think if you refer to a tag rather than master you will always get a stable module
even in cloud posse tf module recommendation is to use a tag rather than master
so CloudPosse modules are also community-driven and open-sourced, and we accept/review PRs all the time
and for the EKS modules, we support all of them and update all the time
and they are used in production for many clients
thanks @Andriy Knysh (Cloud Posse) for the inputs. Does cloudposse module give us the ability to use the tf deployment mechanism ?
deployment mechanism = kubernetes_deployment
so basically if I use cloud posse eks module to provision eks cluster can we then somehow deploy helm and flux on the eks cluster
we deploy EKS clusters with terraform, and all Kubernetes releases (system and apps) using helmfiles
works great
thanks Andriy I will look into helmfile.
can it also be used to deploy flux ?
sure
flux has helm charts (and operator)
you can always create a helmfile to use the chart
ok so I just need flux then and no helmfile as flux can do what helmfile will do
but how do I install flux with cloud posse?
helmfile is to provision flux itself into a k8s cluster
ahh cool got you
you can use helm for that
thanks a lot Andriy
but helmfile adds many useful features so it’s easier to provision than using just helm
(one other HUGE differentiator is that 100% of our terraform 0.12+ modules have integration tests with terratest
which means we don’t merge a PR until it passes at least some minimal smoke tests)
yes. and the EKS modules have not just a smoke test, in the tests we actually wait for the worker nodes to join the cluster
and for the Fargate Profile module, we even deploy a Kubernetes release so EKS would actually create a Fargate profile for it https://github.com/cloudposse/terraform-aws-eks-fargate-profile/blob/master/test/src/examples_complete_test.go#L173
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
thanks for your inputs guys
I had one question regarding terraform helmfile
can this provider be used in production ?
or is there any other way to deploy charts using terraform + helmfile ?
Appreciate all your inputs
We’re not yet using it in production, but have plans to do that probably in the next quarter.
@roth.andy has been using it lately, not sure how far it went
@mumoshu any one you know using it in production?
not great
I’m currently using local-exec
I’m going to take another stab at it later though, I just couldn’t spend any more time on it
@Erik Osterman (Cloud Posse) they’re not in the sweetops slack, but a company partnering with me is testing it towards going production.
their goal is to complete a “cluster canary deployment” in a single terraform apply
run. they seem to have managed it :)
andrew, @Andrew Nazarov, and many others have contributed to test the provider(thanks!). and i believe all the fundamental issues are already fixed.
i have only two TODOs at this point. it requires helmfile v0.128.1 or greater so i would like to add a version check so that the user is notified to upgrade helmfile binary if necessary.
also importing existing helmfile-managed releases are not straight-forward. i’m going to implement terraform import
and add some guidance for that. https://github.com/mumoshu/terraform-provider-helmfile/issues/33
If I’ve been using helmfile as a standalone tool, is there a way to smoothly transition ownership of those charts while using this plugin?
Sweeet
also importing existing helmfile-managed releases are not straight-forward. i’m going to implement terraform import
and add some guidance for that.
Ahh good to know! that will be important when we get to implementing it.
thanks guys that really helps us. Will let you know once I present the findings to our team
fyi the eksctl provider has support for terraform import
now
Anyone been able to get https://github.com/cloudposse/terraform-aws-ecs-web-app to work without codepipeline/git enabled?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
First I run into this bug https://github.com/cloudposse/terraform-aws-ecs-web-app/issues/63
Found a bug? Maybe our Slack Community can help. Describe the Bug Here is my terraform code module "ecs_web_app" { source = "git://github.com/cloudposse/terraform-aws-ecs-web->…
And after specifying repo_owner I run into
Error: If `anonymous` is false, `token` is required.
on .terraform/modules/athys.ecs_codepipeline.github_webhooks/main.tf line 1, in provider "github":
1: provider "github" {
codepipeline_enabled = false
webhook_enabled = false
repo_owner = "xxx"
I switched to using alb service task and the alb ingress modules instead of that one
Will give that a shot, thanks
@RB Any idea what would cause
Error: InvalidParameterException: The new ARN and resource ID format must be enabled to add tags to the service. Opt in to the new format and try again. "athys"
on .terraform/modules/ecs_alb_service_task/main.tf line 355, in resource "aws_ecs_service" "default":
355: resource "aws_ecs_service" "default" {
Actually think I found a var to fix that issue. Still don’t really understand the cause, as all resources are brand new (so why use old arns?)
did you tick the boxes in your aws account to opt into the long arn format ?
if so, did you rebuild your ecs cluster ?
Rebuilt ecs cluster, but didn’t realize there’s a setting on the was account. That’s probably it
Thanks
i added the use old arn variable to make sure to not tag the task/service (i forget which one) to allow using that module for the not-long arn formats
nice! np
The new ARN and resource ID format must be enabled to add tags to the service
this must be done manually in the AWS console
for each region separately
ah it’s per region. interesting.
mine are set to “undefined” which means
An undefined setting means your IAM user or role uses the account’s default setting.
you need to be an admin to do so
Hey guys, had a quick question, is there any reason adding or removing a rule from a wafv2 acl in the terraform itself forces a destroy/recreate of the entire acl? Currently trying to look for ways to get around this as I need the ACL modified in place rather than destroyed everytime a dev goes to modify the wafv2 acl rules.
i also don’t know, but sometimes can decode why terraform wants to do something from the config and the plan output. if you can share those, maybe someone will be able to help
It’s pretty much just a rule that Terraform mentions is forcing a replacement.
which often flows from something in the config. but can’t help if we can’t see it
I’ve co-authored https://github.com/Flaconi/terraform-aws-waf-acl-rules/ and have not seen any of those issues. Maybe you can share your code ?
Module for simple management of WAF Rules and the ACL - Flaconi/terraform-aws-waf-acl-rules
Anyone in here using Firelens to ship logs to multiple destinations? I’m using cloudposse/ecs-container-definition/aws
and trying to come up with a log_configuration
that will ship to both cloudwatch and logstash.
@sweetops This might help you:
<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>
<filter *firelens*>
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
  </parse>
</filter>
<match *firelens*>
  @type datadog
  api_key "#{ENV['DD_API_KEY']}"
  service "#{ENV['DEPLOYMENT']}"
  dd_source "ruby"
</match>
I believe to ship to two locations you would just need two <match> sections that ship to your respective destinations.
Haha btw — That is a fluentd configuration. We have the AWS fluentbit / firelens configuration as a sidecar on each of our ECS containers and then host a single fluentd container in our cluster for shipping externally.
Here is the Dockerfile for that container:
FROM fluent/fluentd:v1.7.4-debian-1.0
USER root
RUN buildDeps="sudo make gcc" \
&& apt-get update \
&& apt-get install -y --no-install-recommends $buildDeps \
&& sudo gem install fluent-plugin-datadog \
&& sudo gem sources --clear-all \
&& SUDO_FORCE_REMOVE=yes \
apt-get purge -y --auto-remove \
-o APT::AutoRemove::RecommendsImportant=false \
$buildDeps \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
COPY fluent.conf /fluentd/etc
Do you pin the version of TF and/or your providers/plugins?
Terraform now has an official stance on how to pin as pointed out to me by @Jeff Wozniak
Terraform by HashiCorp
this bit us hard so we no longer use it since all of our modules are intended to be composed in other modules
we use a strict pin only in root modules, and any composable modules use only a min version
and the min version is usually optional. we only add the restriction if we know we’re using a feature that depends on a min version of something (e.g. module-level count requires tf 0.13)
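(A minimal sketch of that convention, with made-up version numbers: strict pin in the root, minimum-only in anything meant to be composed.)
# root module versions.tf
terraform {
  required_version = "~> 0.13.3"
}
# composable child module versions.tf
terraform {
  required_version = ">= 0.12.26"
}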
Hmmm interesting. So at a minimum, a min version. Not leaving it empty.
Would appreciate your guys’ feedback on the above. Trying to determine best practices for our environment.
Hi everyone. I’m creating public subnets like this:
resource "aws_subnet" "adv2_public_subnets" {
for_each = var.adv2_public_subnet_map[var.environment]
vpc_id = var.vpc_map[var.environment]
cidr_block = each.value
availability_zone = each.key
tags = merge(local.tags, { "Name" = "adv2-${var.environment}-pub-net-${each.key}" } )
}
and I’d like to be able to refer to them similar to this:
resource "aws_lb" "aws_adv2_public_gateway_alb" {
name = "services-${var.environment}-public"
internal = false
load_balancer_type = "application"
subnets = aws_subnet.adv2_public_subnets
idle_timeout = 120
tags = local.tags
}
This also failed to work:
subnets = [aws_subnet.adv2_public_subnets[0].id, aws_subnet.adv2_public_subnets[1].id]
… I’ve since been unable to figure out how to refer to the subnets created as a list of strings
I think the issue is that subnets is a collection of objects, not a list of strings, but I’m not sure how to say give me back a list of strings of attribute x for each object in the collection.
I’m also really not sure how to easily figure out what exactly aws_subnet.adv2_public_subnets is returning without breaking apart the project and creating something brand new just to figure out what that would be. … is there a way to see this?
Also tried:
subnets = [aws_subnet.adv2_public_subnets.*.id]
where I tried to implement what I saw in this example: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#application-load-balancer
the object is a map, so you need to use for
to access the attribute you want:
[ for subnet in aws_subnet.adv2_public_subnets : subnet.id ]
ah, that’s how you do that. It wasn’t clear to me how to apply that to a value.
it’s not intuitive. would love if we could index into the map for an attribute, the way we can with a list/tuple, a la aws_subnet.adv2_public_subnets[*].id
I had no idea that could be done (for list/tuple)
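(Related trick, assuming TF 0.12+: subnets = values(aws_subnet.adv2_public_subnets)[*].id should also work, since values() turns the map into a list first and the splat applies to lists.)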
Sweet. That does exactly what I want. Thanks! For reference,
resource "aws_lb" "aws_adv2_public_gateway_alb" {
name = "services-${var.environment}-public"
internal = false
load_balancer_type = "application"
subnets = [ for subnet in aws_subnet.adv2_public_subnets : subnet.id ]
2020-09-19
Hi guys, I’ve got an error when tried terraform apply
to spin up a few nodes on Google, just preparing infra for the GitLab CI/CD pipelines on Kubernetes. We had an outage at our ISP for a few days, but I'm not sure whether it can be related to this issue.
Code: https://gitlab.com/organicnz/gitops-experiment.git
terraform apply -auto-approve -lock=false
google_container_cluster.default: Creating...
google_container_cluster.default: Still creating... [10s elapsed]
Failed to save state: HTTP error: 308
Error: Failed to persist state to backend.
The error shown above has prevented Terraform from writing the updated state
to the configured backend. To allow for recovery, the state has been written
to the file "errored.tfstate" in the current working directory.
Running "terraform apply" again at this point will create a forked state,
making it harder to recover.
To retry writing this state, use the following command:
terraform state push errored.tfstate
It’s already fixed guys
2020-09-20
is there some way I can get tf to load a directory of variable files?
Not that I know of. But if you want to roll that into your projects workflow then the common approach for this type of thing is to script around it using Make or bash.
Use the extension .auto.tfvars
?
Terraform also automatically loads a number of variable definitions files if they are present:
Files named exactly terraform.tfvars or terraform.tfvars.json.
Any files with names ending in .auto.tfvars or .auto.tfvars.json.
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
We heavily use symlinks in Linux to hydrate global, environment, region variables and auto tfvars
yeah, right… ok
all very interesting ideas
Your variables could be outputs in module A, then module B could use a remote state data source to retrieve module A’s outputs, which can then be used
(or use YAML configuration files)
More and more we’re taking the approach that HCL is responsible for business logic and YAML is responsible for configuration.
How do you load the YAML into TF vars?
So our modules always operate on inputs (variables), but at the root-level module, they operate on declarative configurations in YAML.
Ah. And you load as locals?
Ya, here’s an example: https://github.com/cloudposse/terraform-opsgenie-incident-management/blob/master/examples/config/main.tf
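The gist of it, as a rough sketch (the file name and shape here are placeholders rather than that example’s actual layout):
locals {
  # read declarative configuration from YAML and hand it to modules as plain inputs
  config = yamldecode(file("${path.module}/config.yaml"))
}

module "escalations" {
  source      = "../../modules/escalation"   # hypothetical module
  escalations = local.config.escalations
}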
Ah then you load into a module similar to the context.tf pattern and that can provide the variable interpolation bit. Cool, good stuff
yeah, nice! https://github.com/cloudposse/terraform-opsgenie-incident-management/blob/master/examples/config/main.tf
This is totally what I was looking for.. amazing. I like it @Erik Osterman (Cloud Posse)
2020-09-21
I’m starting to upgrade TF projects to 0.13 and in one particular project I have 3 provider aliases and I’m getting Error: missing provider provider["registry.terraform.io/hashicorp/aws"].us_east_2
which is weird because it does not complain for any other and the upgrade command works just fine
nevermind……
being dyslexic is not cool some times
I’m creating a cloudfront terraform module and want to optionally add geo restriction:
restrictions {
dynamic "geo_restriction" {
for_each = var.geo_restriction
content {
restriction_type = geo_restriction.restriction_type
locations = geo_restriction.locations
}
}
}
variable "geo_restriction" {
type = object({
restriction_type = string
locations = list(string)
})
description = "(optional) geo restriction for cloudfront"
default = null
}
However this gives me an error when I pass the default null variable:
for_each = var.geo_restriction
|----------------
| var.geo_restriction is null
Cannot use a null value in for_each.
Is there a way to fix this? Or am I doing it wrong?
Your default should be an empty list, [] . You can then use toset() for the for_each
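Either way works; for reference, a rough sketch of a variant that keeps the object type (note the content block also needs to go through geo_restriction.value, which the original snippet skipped):
restrictions {
  dynamic "geo_restriction" {
    # fall back to "none" when nothing is passed, since CloudFront requires the block
    for_each = var.geo_restriction == null ? [{ restriction_type = "none", locations = [] }] : [var.geo_restriction]
    content {
      restriction_type = geo_restriction.value.restriction_type
      locations        = geo_restriction.value.locations
    }
  }
}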
2020-09-22
Anyone have an example including EFS + Fargate, including permissions?
I’ve been struggling to get past
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-5e587347.efs.us-west-1.amazonaws.com" - check that your file system ID is correct. See <https://docs.aws.amazon.com/console/efs/mount-d>...
Oh I ran into this too. A few things to look into:
- Make sure you can actually get to the EFS from where FARGATE is running. Specifically, it uses the NFS protocol.
- Make sure you’ve got DNS resolution (using AWS’s DNS).
- Look at the policies on the EFS end, to make sure they’re allowing FARGATE to connect in.
The DNS piece may be what I’m missing, will take a look at that thanks.
Thank you so much @Yoni Leitersdorf (Indeni Cloudrail) I was missing a DNS flag on my vpc. Was fighting this for awhile.
Happy to help. Wasted hours of my life on this.
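For anyone else who lands here, the flag in question lives on the VPC resource; a minimal sketch (resource name and CIDR are placeholders):
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  # both attributes are needed so the EFS mount target DNS name resolves from Fargate tasks
  enable_dns_support   = true
  enable_dns_hostnames = true
}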
Hello. Just looking to use https://github.com/cloudposse/terraform-aws-elasticache-redis. Part of the task is to create users on the redis server that are essentially read only users. Is this possible with this module, or terraform in general? We already have a bastion SSH tunnel in place that only allows tunnelling to specific destinations, so no issue with connecting to the Redis instances.
My guess is that unless there’s a specific resource to monitor, terraform isn’t going to be involved.
But any suggestions would be appreciated.
You’re talking about Redis ACL right?
I don’t think the AWS API deals with this at all https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_Operations.html so it’s unlikely the Terraform resources would.
Interesting that no-one has done this (based on my 5 second Google search)
Thank you for that. I’m not an expert in this area at all and so learning what’s what.
Would certainly be an interesting ask though.
Does anyone know how to terraform apply just for additional outputs? (and ignore the rest of the changes as it’s being tampered with)
edit: there doesn’t seem to be a way to pass in ignore_changes
into a module? I’m using the vpc module and just want to append some outputs I’ve missed without messing with the diff.
ahh life saving!! I was googling like a fanatic and at one point I also stumbled upon terraform refresh
… but nowhere in the doc does it mention that it can update the outputs. Thanks!
np!
2020-09-23
:wave: I’m trying to use https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack and was wondering how the kms_key_arn
is supposed to be used with the required slack_webhook_url
string parameter. I was going to create a SecureString parameter in Parameter Store and am not sure if that’s the correct way to go about using kms_key_arn
ah i think I can just use the aws_ssm_parameter data source as the value for slack_webhook_url
in my terraform and ignore the kms_key_arn
attribute https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter
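Roughly like this, as a sketch (the parameter name is a placeholder and the module’s other required inputs are omitted):
data "aws_ssm_parameter" "slack_webhook_url" {
  name            = "/notifications/slack_webhook_url"   # hypothetical SecureString parameter
  with_decryption = true
}

module "notify_slack" {
  source            = "git::https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack.git?ref=master"
  slack_webhook_url = data.aws_ssm_parameter.slack_webhook_url.value
  # remaining module inputs omitted for brevity
}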
is there anyway to mv a local terraform state to remote s3, I feel like im missing something super trivial
Terraform should handle it for you if you add the remote S3 backend config. You add the config, terraform init
, and then it will prompt you to transfer it.
yea I saw that in the docs and tried it and it never asked me
I’mma try the things again, thanks.
Huh. What tf version? Do you maybe have multiple terraform blocks / backend configs or something similar?
0.12.28
terraform {
required_version = ">= 0.12.0"
}
provider "aws" {
region = var.region
}
and no backup config
i mean backend
without the backend configuration you can’t
Hm. Yeah, I’ve definitely had this work across a number of 0.12.* versions.
Yeah, you need that backend config.
You need to include the backend config and issue an terraform init for the state migration to happen
Or are you using the backend init flags?
oh my b, I mean I didn’t before… because it was local.
terraform {
backend "s3" {
bucket = "tfstate-account-number"
region = "us-east-1"
dynamodb_table = "tfstate-lock-account-number"
key = "egress-proxy/terraform.tfstate"
}
}
Is that in conjunction with your other terraform block? I’m wondering if they don’t jive together if there are two. Not sure if I’ve done that myself.
yes it is, i will join em
sweet that worked
thanks for the help
Anyone have any more detailed info on this new OSS project from Hashicorp?
I’m curious too - I remember the announcement, but don’t know what was announced
product hasn’t been announced yet other than the hype building
We’re on 0.14
already?!
The Terraform core team is excited to announce that we will be releasing early pre-release builds throughout the development of 0.14.0. Our hope is that this will encourage the community to try out in-progress features, and offer feedback to help guide the rest of our development. These builds will be released as the 0.14.0-alpha series, with the pre-release version including the date of release. For example, today’s release is 0.14.0-alpha20200910. Each release will include one or more change…
Awesome. Thanks
v0.14.0-alpha20200923 0.14.0 (Unreleased) UPGRADE NOTES: configs: The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (https://github.com/hashicorp/terraform/issues/26135)…
The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…
anyone able to do an s3_import using an rds cluster ?
tried doing it and got this fatal error https://github.com/terraform-providers/terraform-provider-aws/issues/15325
discovered I needed to provide master user and pass and now it fails with a weird error after trying to create it for 5 min
i keep getting this error S3_SNAPSHOT_INGESTION
which i imagine is cause of some iam restriction
but it has s3 and rds full rights …
we do this for snapshots and imports
what kind of iam role do you use ?
have you run into this S3_SNAPSHOT_INGESTION error ?
nvm, i got around it by doing this in the UI and then backporting it back into terraform
I do not remember running into that error
Glad you solved it
can for_each
be used for data source lookups? I want to do something like this:
## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {
for_each = toset(var.ACCOUNT_ID)
ACCOUNT_ID = each.key
}
would my var.ACCOUNT_ID
just be a list of strings? or have to be a more complex variable declaration? I’m having issues trying to get it to work at the moment.
variable ACCOUNT_ID {
default = [""]
type = list(string)
}
never mind.. I got it. Had to remove ACCOUNT_ID = each.key
from the data lookup. Thanks
I guess I’m trying to figure out how to actually “filter” and get the specific value that I need.. Does for_each use the lookups as a normal data lookup? or..
What I am trying to figure out how to do is have a data source lookup on my AWS organization and somehow save all of the account numbers. Then, I’d like to pass that information as a variable into my AWS provider so I can loop through accounts and create common resources across multiple accounts but I don’t want to maintain a hardcoded list of account numbers. Oh, and I’m using tfvars and not workspaces.
## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {
for_each = toset(var.ACCOUNT_ID)
}
provider "aws" {
region = var.region
assume_role {
role_arn = "arn:${var.partition}:iam::${each.value.accounts}:role/<ROLE_NAME>"
}
}
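In case it helps, the data source side doesn’t need for_each at all; it takes no arguments and already returns every member account, so a sketch of the lookup half looks like this (the provider loop is the part that’s still blocked):
data "aws_organizations_organization" "my_org" {}

locals {
  # every member account ID in the organization, no hardcoded list required
  org_account_ids = [for account in data.aws_organizations_organization.my_org.accounts : account.id]
}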
looks like this is a known limitation. Sigh.. https://github.com/hashicorp/terraform/issues/19932
Current Terraform Version Terraform v0.11.11 Use-cases In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend …
I like to use cloudposse/terraform-state-backend but it seems overkill to create one bucket per terraform state. The module doesn’t seem to allow me to re-use an existing bucket; is there another module that still does the dynamodb setup but uses an already existing bucket?
Almost seems like the s3 stuff should be in a separate module, this way one could create a common bucket to be used for all terraform stacks (in separate “folders” in that bucket, of course) and a dynamodb table + backend.tf file for each cluster. I could refactor that myself of course, but then I would lose the bug fixes & improvements you guys make.
buckets are free, I wouldn’t worry about it
the cost of accidentally overwriting one stack’s state with another stack’s is extremely high, and using different buckets is an effective way to reduce that risk
agree. It’s also easier to manage permissions between buckets as opposed to objects inside a single bucket.
idk i kind of agree with op. we use a single versioned bucket for tfstates and it works. we have upwards of 1000 modules with multiple workspaces so probably upwards of 3000 states. creating 3000 buckets seems ridiculous compared to 1 with versioning.
yeah well that’s exactly it, as shown by @RB it doesn’t seem to scale well.
Another example: we have an EKS cluster setup with terraform, that’s one state, then we have AWS resources for each deployment of an app in that cluster, each deployment has its own terraform state that “extends” the cluster terraform state (uses remote state as input). It would make sense to have those deployment terraform states all in the same bucket as the cluster state.
So I could see one bucket per cluster, but then if you want to break down the state into smaller pieces for re-usability, separate buckets massively clutters bucket namespace.
a single bucket has infinite prefixes. why not just use a prefix per cluster
So that we keep everything with high cardinality, this is the scheme we use:
s3://aws-account-id_tfstates/github_org/repo/module_path
I used to have 1 statefile bucket that managed everything. Then I recently migrated to having 1 statefile bucket per AWS account with DynamoDB locking. Inside of those account-specific buckets there could be 50 folders all separating out different statefiles, but at least those statefiles live with that account.
yep that makes sense
fyi jon, those are not folders, they are prefixes. the aws console just shows them in a directory structure
you might think im nitpicking and the difference is subtle but i think important
and i completely agree. an s3 tfstate bucket and a dynamodb table per account also makes more sense than a bucket and dynamodb table per module.
we use a single tfstate bucket in our primary iam account and when we need to do things in a separate account, we use the same bucket and same dynamodb table in the primary account and just assume a role in the other account
if only we could dynamically loop through Terraform AWS providers to keep code DRY. Maybe one day
Yeah, I thought about switching to Terragrunt for this current work I’m trying to do.. Might try one or two more things before switching. Basically I want to create an IAM role with some permissions as well as my DynamoDB with S3 resources from a centrally managed account.
I don’t want to keep a list of account numbers in my organization. Although it’s kind of similar, I’m about to test out multiple tfvars and some template rendered for the policy. Then in my CICD pipeline it’ll just be completely different environments per tfvar I use. Not sure how well that’ll work but hopefully I find out soon enough.
Ideally, I wanted to do a data source lookup on my organization, grab the account numbers and dynamically loop through my provider to create the resources I need.
@OliverS we used to create one bucket per AWS account. Now we only create one bucket period. We use path prefixes with the backend object.
terraform {
required_version = ">= 0.13"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
template = {
source = "hashicorp/template"
version = "~> 2.0"
}
local = {
source = "hashicorp/local"
version = "~> 1.3"
}
}
backend "s3" {
encrypt = true
bucket = "eg-uw2-root-tfstate"
key = "terraform.tfstate"
dynamodb_table = "eg-uw2-root-tfstate-lock"
workspace_key_prefix = "eks"
region = "us-west-2"
role_arn = "arn:aws:iam::xxxxxxxxx:role/eg-gbl-root-terraform"
acl = "bucket-owner-full-control"
}
}
note the workspace_key_prefix
you can also prefix the key
Besides easier management and fewer things to deploy, are there any other benefits @Erik Osterman (Cloud Posse)? I’m wondering because we originally had one bucket period, but then started thinking “well maybe customerA wants their statefile inside their own account and apart from everyone else” so we just started down that path.
for example:
customerA
  |_ s3-bucket
    |_ vpc_module
      |_ region-specific-statefile
    |_ app_server_module
      |_ region-specific-statefile
  |_ dynamoDB
  |_ CMK for encryption
more and more we’re working with 10-20 AWS accounts. The cold start of managing state buckets makes turnkey provisioning a real hassle.
And their state is still in separate S3 “folders” (it can always be moved later if it’s an issue). But no need to prematurely optimize for the hardest case.
I agree. I’m currently transitioning to all GovCloud accounts and have a very small set of accounts already deployed and it’s already been a hassle. I know I’ll end up having 60+ when its all said and done.
thanks for the feedback
this is, just for the record, a huge mea culpa because we were the staunchest advocates of using a minimum of one bucket per account from the get-go. However, when we had that position we also didn’t have the strongest gitops/continuous delivery story for terraform. Now we’ve relaxed that position as we now do almost entirely continuous delivery of terraform. As a result of that, we needed to make some tradeoffs to simplify things.
awesome, so it seems completely possible to have multiple account tfstate files in a single s3 bucket
thanks for weighing in erik
@Erik Osterman (Cloud Posse) the example you show assumes the bucket already exists, but I am looking for a way to create the state bucket the same way that terraform-state-backend does, without rolling my own (which I could do but would rather use already made module if ther eis one). Please see https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/72 and let me know what you think.
Describe the Feature The module ("tatb") should support 2 use cases: bucket already exists: do not create the bucket dynamodb table will be created separately: do not create the dynamodb …
any ideas? https://discuss.hashicorp.com/t/data-aws-iam-policy-document-and-for-each-showing-changes-on-every-plan-and-nothing-on-apply/14606 Having issues with aws_iam_policy_document always showing a change.
So, I have some IAM policies I am building with for_each which are then used as assume_role_policy and aws_iam_policy but on every plan: Plan: 0 to add, 20 to change, 0 to destroy. and then apply: Apply complete! Resources: 0 added, 0 changed, 0 destroyed. Some details: $ tf version Terraform v0.13.3 + provider instaclustr/instaclustr/instaclustr v1.4.1 + provider registry.terraform.io/hashicorp/aws v3.7.0 + provider registry.terraform.io/hashicorp/helm v1.3.0 + provider registry.terraform….
you forgot to put in effect
and resources
arguments and that’s probably why it shows a difference
im guessing. ^
yeah, well they are optional with defaults right
I can try it
good idea
did it work ?
sorry, I am in AU and you messaged… late at night. Trying to get around to it today.
and I was so swamped I didn’t even get around to it.. Monday!
2020-09-24
I’m looking for an easy pattern for deploying lambdas with terraform, when the lambda code lives in the terraform module repo. This is for small lambdas that provide maintenance or config services. The problem is always updating the lambda when the code changes: a combination of a null_resource
to build the lambda and an archive_file
to package it into a zip works, but we end up having a build_number
as a trigger on the null_resources that we have to bump to get it to update the code.
Is there some other pattern to make this easier?
I’ve thought about packaging the lambda in gitlab/github CI, but terraform cannot fetch a URL to deploy the lambda source
have you tried this ?
Works great
w00t!
also, @antonbabenko is working on https://serverless.tf
serverless.tf is an opinionated open-source framework for developing, building, deploying, and securing serverless applications and infrastructures on AWS using Terraform.
for simple situations, our pattern is to build the zip
in a GitHub action and upload it to an S3 bucket as an artifact.
This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder
Then we use our terraform-external-module-artifact
module to download it and deploy the artifact .
Terraform module to fetch any kind of artifacts using curl (binary and text okay) - cloudposse/terraform-external-module-artifact
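Another pattern that avoids the manual build_number bump, sketched under the assumption that the function needs no separate build step: let archive_file hash the source and feed that hash to the function, so any code change forces an update.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src"   # hypothetical source directory
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "maintenance" {
  function_name    = "maintenance-task"        # placeholder name
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
  handler          = "index.handler"
  runtime          = "python3.8"
  role             = aws_iam_role.lambda.arn   # assumed to exist elsewhere
}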
Btw, https://github.com/terraform-aws-modules/terraform-aws-lambda - does the same as claranet/terraform-aws-lambda but better. See README and examples for more.
Terraform module, which takes care of a lot of AWS Lambda/serverless tasks (build dependencies, packages, updates, deployments) in countless combinations - terraform-aws-modules/terraform-aws-lambda
Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.
heads up: if you’re using our terraform-aws-rds-cluster
module, we fixed a “bad practice” related to using an inline security group rule, but upgrading is a breaking change. we’ve documented one way of migrating here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/83
This PR #80 changed the Security Group rules from inline to resource-based. This is a good move since inline rules have many issues (e.g. you can't add new rules to the security group since it&…
What is the best way to pass the same local/variable to each module? I want the copy to be available to all our modules. It would be great if there were a way to declare a global variable.
We’re defining ours in locals{name = “cool”} block and referencing like so: local.name
but then you have to pass that with every module:
module * {
name = local.name
}
This keeps the number of variables to a minimum. I’m not sure there’s a better solution for TF 0.12.x. I don’t think TF 0.13 brings any improvements to this.
Yeah not sure that this is kind of problem with everyone?
Yeah I agree, we can define the map with all the constants
Depends on your use-case, but have you seen the cloudposse/terraform-null-label
? we use this so we can just pass around a single variable called local.this.context
The context
has all the variables.
Here’s an example of how we use it https://github.com/search?q=org%3Acloudposse+module.this.context&type=code
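Roughly, the shape of it is this sketch (module sources and inputs are illustrative, not pinned versions):
module "label" {
  source    = "cloudposse/label/null"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  # one variable carries namespace, stage, name, tags, and the rest of the naming inputs
  context = module.label.context
}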
Stupid question: How is this beneficial compared to just defining the locals file with:
locals {
this = {
# ... My Context dict
}
}
So basically how is passing module output different than locals variable?
Yup, you could do that too. Take a look at the module to see why we do it. All of our 100+ terraform modules use this pattern. Using this module has enabled us to enforce consistency.
(note the normalized outputs and tags)
Yeah I was sensing the same thing, it is making sure every service is defining it consistently
Thanks Erik, I love this channel! Learning a lot through it
In the latest version of the terraform-aws-backend-state module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?
AFAIK you cannot loop over regions, it is a limitation.
Here’s one way to work around that:
2020-09-25
I’m working on a module that sets up an aws root account with an Org and children accounts. In this module I want to 1) create an audit logs account 2) create a bucket in this logs account. How would I go about doing this? How would terraform execute actions in a newly created account?
Figured it out. It’s as easy as defining a new provider block with
allowed_account_ids = [local.log_account_id]
assume_role {
role_arn = "arn:aws:iam::${local.log_account_id}:role/OrganizationAccountAccessRole"
}
and then passing the provider to a module that does what you need
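End to end that looks roughly like this sketch (alias, local, and module names are placeholders):
provider "aws" {
  alias               = "logs"
  region              = var.region
  allowed_account_ids = [local.log_account_id]

  assume_role {
    role_arn = "arn:aws:iam::${local.log_account_id}:role/OrganizationAccountAccessRole"
  }
}

module "audit_log_bucket" {
  source = "./modules/log-bucket"   # hypothetical module that creates the bucket
  providers = {
    aws = aws.logs
  }
}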
hi all, first time to use this for beanstalk - https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment
can you please help point out which permissions i need to add to that user?
It tells you in the error message: iam:CreateRole
Also keep in mind bucket names need to be globally unique. Someone else has that bucket already.
No one has any idea on https://sweetops.slack.com/archives/CB6GHNLG0/p1600992655024200?
In the latest version of the terraform-aws-backend-state module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?
This is a terraform-aws-provider >= 3.0.0
requirement that the region cannot be specified with the bucket.
Also, passing providers with 0.13 in module for_each is currently a problem being tracked.
@Andriy Knysh (Cloud Posse) have you seen any updates on that issue?
in module with count
and for_each
, you can’t pass any providers at all, no single, map, or list of providers
that’s not solved yet
what @Abel Luck has shown is the only way to iterate regions by using multiple providers
@Andriy Knysh (Cloud Posse) is there code that I could look at to do this
Here’s one way to work around that:
2020-09-26
do we have something like aws ec2imagebuilder in terraform?
I believe you’re looking for HashiCorp’s Packer
2020-09-27
I can’t disable CloudWatch alarms in https://github.com/cloudposse/terraform-aws-elasticache-redis
I don’t want to create them for dev env, since it incurs some cost per month.
possibly solved by adding another module variable that disables alarms, or disabling alarms when nothing is specified for alarm_actions and ok_actions. Which is better?
Sure, go ahead and add a feature flag ...._enabled
to toggle the creation. We’ll get that merged quickly if tests pass. Post PR in #pr-reviews
this was already added in https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/84
what Add cloudwatch_metric_alarms_enabled variable Update Terratest Update to context.tf why Allow disabling CloudWatch metrics alarms Standardization and interoperability Keep the module up to …
@t.hiroya did you try the latest release?
@Andriy Knysh (Cloud Posse) Thanks, I should have tried that.
2020-09-28
Hey guys, facing this issue every time I do
terraform apply
(applied many times)
it always updates this alarm
~ dimensions = {
~ "AutoScalingGroupName" = "terra-autoscaling-asg-garden" -> "terra-autoscaling-asg-prod"
}
evaluation_periods = 2
id = "cpu-low-alarm"
everything is fine on aws and tf.state. deleted state locally. but its still there.
#terraform Build once and run everywhere, is a great concept behind docker and containers in general, but how to deploy these containers, here is how I used ECS to deploy my containers, check it out and let me know how do you deploy your containers? https://www.dailytask.co/task/manage-your-containers-deployment-using-aws-ecs-with-terraform-ahmed-zidan
Manage your containers deployment using AWS ECS with Terraform written by Ahmed Zidan
is there a WORKSPACE
ENV variable in terraform?
Are you talking about ${terraform.workspace}
?
no, just a plain WORKSPACE variable that terraform will read?
Not that I know of. What’re you trying to do?
I’m reading someone else’s code and I saw that and I was like, what is this?
Oh maybe they’re using it in place of terraform.workspace?
it’s the Atlantis source code
Aha in golang. Then that’s totally possible. I would assume it’d be TF_*
though so maybe that is only used by Atlantis?
there is mentions in the code about it
I am converting this CloudFormation to Terraform:
AppUserCredentials:
Type: AWS::SecretsManager::Secret
Properties:
Name: !Sub "${AWS::StackName}/app-user-credentials"
GenerateSecretString:
SecretStringTemplate: '{"username": "app_user"}'
GenerateStringKey: 'password'
PasswordLength: 16
ExcludePunctuation: true
I am unable to find out how I can use the concept of GenerateSecretString with Terraform.
Exactly what I was looking for
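For the record, the usual equivalent (and presumably what was found) is to generate the value with the random provider and write it as a secret version; a rough sketch, with var.stack_name standing in for the CloudFormation stack name:
resource "random_password" "app_user" {
  length  = 16
  special = false   # roughly equivalent to ExcludePunctuation: true
}

resource "aws_secretsmanager_secret" "app_user_credentials" {
  name = "${var.stack_name}/app-user-credentials"
}

resource "aws_secretsmanager_secret_version" "app_user_credentials" {
  secret_id = aws_secretsmanager_secret.app_user_credentials.id
  # note: unlike GenerateSecretString, the generated value ends up in Terraform state
  secret_string = jsonencode({
    username = "app_user"
    password = random_password.app_user.result
  })
}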
2020-09-29
Hello.
I need to generate an .env
file from a terraform resource. How can I do it?
You can use the local_file
resource. https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file
I need to load the content of
my environment file like this
APP_ENV=release
HOST=${host}
PORT=3306
DB_SERVER=${mysqlhost}
and change values before storing it
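A minimal sketch of that with templatefile() plus local_file (the template path and the resources feeding the values are hypothetical):
# env.tpl holds the APP_ENV/HOST/PORT/DB_SERVER lines with ${host} and ${mysqlhost} placeholders
resource "local_file" "dotenv" {
  filename = "${path.module}/.env"
  content = templatefile("${path.module}/env.tpl", {
    host      = aws_instance.app.private_ip   # hypothetical sources for the substituted values
    mysqlhost = aws_db_instance.db.address
  })
}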
Hello, has anyone configured Certificate based site to site VPN in Terraform . I get the following error when I try Error: Unsupported argument
on aws.tf line 69, in resource “aws_customer_gateway” “customer_gateway_1”: 69: certificate-arn = “arnacm894867615160:certificate/e3fc78b9-b946-4b41-8494-b33510aea894”
An argument named “certificate-arn” is not expected here.
Often times Terraform has multiple resources for a single AWS CLI command. The certificate-arn is not a parameter for this resource:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/customer_gateway
thanks @Yoni Leitersdorf (Indeni Cloudrail), is this a Terraform issue or AWS? For some reason the CLI does not support it as well, it’s available only in the GUI. It’s a fairly new capability; they started supporting it in March 2020.
Good question. Generally it’s possible for TF to be missing support for new features. You can look in the aws provider repo’s issues for requests to support this. I think I found what you’re looking for: https://github.com/terraform-providers/terraform-provider-aws/issues/10548
If that’s what you need, you’ll need to watch that issue and hope they get around to implementing it.
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Thank you! Yoni appreciate the response just left a comment there why this feature is so important .. Cert Based VPNs are now a major requirement for LTE/G4/G5( dynamic tunnel IP address) and Security perspective and we can’t use Terraform if this feature is not supported …
the parameter exists in the command line: create-customer-gateway --bgp-asn <value> [--public-ip <value>] [--certificate-arn <value>] --type <value> [--tag-specifications <value>] [--device-name <value>] [--dry-run | --no-dry-run] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
trying to make this work in a testbed BTW
2020-09-30
Anyone know of a tool (that’s equivalent to the CloudFormation console) to list all the resources for a terraform state? (depth = 1 is fine, and
terraform show/graph
IS NOT human readable… )
terraform state list
? or do you want actual resource ids?
Error: AccessDenied: User: arn:aws:iam::395290764396:user/sabsab is not authorized to access this resource
status code: 403, request id: 918de6f4-347c-420f-b7af-6a19b9a029a3
on .terraform\modules\elastic_beanstalk_environment.dns_hostname\main.tf line 1, in resource "aws_route53_record" "default":
1: resource "aws_route53_record" "default" {
Please reformat your message to use code blocks…
Looks like you don’t have access to the route53 zone
Thanks for answering Erik, the user sabsab was already set to have route53fullaccess. that is not enough?
Any Boundaries affecting that user? How about SCP?
if you set TF_LOG=DEBUG
and rerun, you might get more helpful information on the exact operation that failed.
what iam permission do i need to add?
Having an issue with https://github.com/cloudposse/terraform-aws-ecs-container-definition and log configuration. What do I need to set so that Auto-configure Cloudwatch Logs is checked in the container definition in ECS?
log_configuration = {
logDriver = "awslogs"
options = {
"awslogs-group" = "/ecs/ctportal"
"awslogs-region" = var.vpc_region
"awslogs-stream-prefix" = "ecs"
}
secretOptions = []
}
When task definition is created the log parameters are set as defined above but the box to Auto-configure Cloudwatch Logs is not checked in ECS.
You don’t need that option, you’re providing all the info it would otherwise figure out for you
When registering a task definition in the Amazon ECS console, you have the option to allow Amazon ECS to auto-configure your CloudWatch logs. This option creates a log group on your behalf using the task definition family name with ecs as the prefix.
I comment this here because it is related: if someone has any problem with the creation of the log group, the answer -> https://aws.amazon.com/es/premiumsupport/knowledge-center/ecs-resource-initialization-error/
v0.13.4 0.13.4 (September 30, 2020) UPGRADE NOTES: The built-in vendor (third-party) provisioners, which include habitat, puppet, chef, and salt-masterless are now deprecated and will be removed in a future version of Terraform. More information on Discuss. Deprecated interpolation-only expressions are detected in more contexts in…
Terraform is beginning a process to deprecate the built-in vendor provisioners that ship as part of the Terraform binary. Users of the Chef, Habitat, Puppet and Salt-Masterless provisioners will need to migrate to the included file, local-exec and remote-exec provisioners which are vendor agnostic. Starting in Terraform 0.13.4, users of the built in vendor provisioners will see a deprecation warning. We expect to remove the four vendor provisioners in Terraform 0.15. Since the release of Terraf…
So… terraform graph
.. As useless as puppet’s? Yessir.
so did terraform module chaining get wrecked with 0.13 or am I missing something? We have a resource_group module that creates the group and then the cluster module references that data, but it fails in 0.13 now
module "resource_group" {
source = "../modules/azure_resource_group"
resource_group_name = "test"
resource_group_location = "westus"
}
module "kubernetes" {
source = "../modules/azure_aks"
cluster_name = var.cluster_name
kubernetes_version = var.kubernetes_version
resource_group_name = module.resource_group.name
output:
Error: Error: Resource Group "test" was not found
on ../modules/azure_aks/main.tf line 1, in data "azurerm_resource_group" "rg":
1: data "azurerm_resource_group" "rg" {
seems to work in 0.12 without issue
doubtful. do this all the time and have converted numerous configs/tfstates to tf 0.13 with no problem…
Yeah just discovered the new depends_on for modules, working now.
I guess it doesn’t infer the ordering anymore
it should infer ordering same as always. if you can come up with a minimal repro config, open a bug report
depends_on should be a last resort, I think
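For reference, the module-level depends_on mentioned above looks roughly like this, trimmed to the relevant lines:
module "kubernetes" {
  source              = "../modules/azure_aks"
  cluster_name        = var.cluster_name
  kubernetes_version  = var.kubernetes_version
  resource_group_name = module.resource_group.name

  # new in Terraform 0.13: modules can declare explicit dependencies
  depends_on = [module.resource_group]
}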