#terraform (2020-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-09-01

Eric Berg avatar
Eric Berg

I’m updating my terraform-opsgenie-incident-management implementation from an earlier release and it looks like the auth mechanism has changed. I removed opsgenie_provider_api_key from being passed to the CP opsgenie modules and added a provider block, but I have been getting this shockingly helpful message and can’t find where to make this change:

Error: Missing required argument

The argument "api_key" is required, but was not set.
Eric Berg avatar
Eric Berg

Not sure if this from the logs helps…sure doesn’t help me:

2020-09-01T10:21:55.404-0400 [DEBUG] plugin: starting plugin: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7 args=[.terraform/plugins/registry.terraform.io/op
sgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7]
2020-09-01T10:21:55.424-0400 [DEBUG] plugin: plugin started: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7 pid=15505
2020-09-01T10:21:55.424-0400 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/registry.terraform.io/opsgenie/opsgenie/0.4.7/darwin_amd64/terraform-provider-opsgenie_v0.4.7
2020-09-01T10:21:55.435-0400 [INFO]  plugin.terraform-provider-opsgenie_v0.4.7: configuring server automatic mTLS: timestamp=2020-09-01T10:21:55.434-0400
2020-09-01T10:21:55.465-0400 [DEBUG] plugin.terraform-provider-opsgenie_v0.4.7: plugin address: address=/var/folders/r1/2sj8z7xn12s5j5729_ll_s7w0000gn/T/plugin781003244 network=unix timestamp=2020-09-01T10:21:55.465-0400
2020-09-01T10:21:55.465-0400 [DEBUG] plugin: using plugin: version=5
2020/09/01 10:21:55 [TRACE] BuiltinEvalContext: Initialized "provider[\"registry.terraform.io/opsgenie/opsgenie\"]" provider for provider["registry.terraform.io/opsgenie/opsgenie"]
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalOpFilter
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalSequence
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalGetProvider
2020-09-01T10:21:55.524-0400 [TRACE] plugin.stdio: waiting for stdio data
2020/09/01 10:21:55 [TRACE] eval: *terraform.EvalValidateProvider
2020/09/01 10:21:55 [TRACE] buildProviderConfig for provider["registry.terraform.io/opsgenie/opsgenie"]: no configuration at all
2020/09/01 10:21:55 [TRACE] GRPCProvider: GetSchema
2020/09/01 10:21:55 [TRACE] No provider meta schema returned
2020/09/01 10:21:55 [WARN] eval: *terraform.EvalValidateProvider, non-fatal err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalSequence, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalOpFilter, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [ERROR] eval: *terraform.EvalSequence, err: Missing required argument: The argument "api_key" is required, but was not set.
2020/09/01 10:21:55 [TRACE] [walkValidate] Exiting eval tree: provider["registry.terraform.io/opsgenie/opsgenie"]
2020/09/01 10:21:55 [TRACE] vertex "provider[\"registry.terraform.io/opsgenie/opsgenie\"]": visit complete
Eric Berg avatar
Eric Berg

it must be opsgenie, because supplying the OPSGENIE_API_KEY env var stops that error, even though the one provider I do declare has app_key defined.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Meyers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(our tests are using OPSGENIE_API_KEY)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and I think we removed api_key from the modules because providers in 0.13 should be passed by reference rather than invoked inside the module
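For anyone following along, a minimal sketch of the 0.13 pattern (the module source path and variable name here are hypothetical): configure the provider once in the root module, and either let child modules inherit it or pass it explicitly via `providers`:

```hcl
# Root module: the only place the provider is configured
provider "opsgenie" {
  api_key = var.opsgenie_api_key # or rely on the OPSGENIE_API_KEY env var
}

module "team" {
  # hypothetical module path, for illustration only
  source = "git::https://github.com/cloudposse/terraform-opsgenie-incident-management.git//modules/team?ref=tags/x.y.z"

  # pass the configured provider by reference instead of an api_key variable
  providers = {
    opsgenie = opsgenie
  }
}
```

With this shape the child module declares no provider of its own, which is why the old `opsgenie_provider_api_key` input went away.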

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, if you haven’t yet had a chance, check out our new config submodule of the opsgenie stuff - it supports YAML configuration for your desired opsgenie state

Dan Meyers avatar
Dan Meyers

just a point of clarification, the note above says app_key is defined but the error references api_key – just want to make sure that’s a typo

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, here’s a clip from #office-hours where we talk about the new config module: https://www.youtube.com/watch?v=fXNajuC4L1o

Eric Berg avatar
Eric Berg

Thanks, @Erik Osterman (Cloud Posse). I was on the office hours call, when this was discussed. Very exciting and a really cool abstraction. I had already set up our basic config and just wanted to throw a few minutes at it to bring it up to 0.13 and see some of those changes reflected in Opsgenie (once our trial has been extended) so i’ve just been working on my original implementation. Once I get a clean plan from that and a few spare moments, i’ll probably convert to the config module.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

let me know if you don’t get it working

1
Eric Berg avatar
Eric Berg

it’s a module that only uses the opsgenie provider, so it’s not anything else.

sahil kamboj avatar
sahil kamboj

Hey guys, facing a weird problem with the terraform RDS module. I made a read replica with terraform and that was successful. After that I do terraform apply, and it says it has to change the name and wants to recreate the replica (I also let it do this), but again it shows it wants to change the name. I checked the name in the tfstate and it’s what it should be. (It wants to change the name from {masterdb name} to {replica name}.)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) could this be related to null label changes?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@sahil kamboj Which modules of ours are you calling? What versions of our modules are you using? As we say in all our documentation, you should be using a specific version, not pinning to master.

@Erik Osterman (Cloud Posse) Unlikely to be due to label change as neither terraform-aws-rds nor terraform-aws-rds-replica use [context.tf](http://context.tf) or the new label version.

sahil kamboj avatar
sahil kamboj

@Jeremy G (Cloud Posse) sry, was disconnected for a month due to COVID. It was a silly mistake in the name parameter: it’s the db name, not the RDS instance name, and it should be the same as the master’s.

sahil kamboj avatar
sahil kamboj

~ name                         = "frappedb" -> "frappedb-replica" # forces replacement
  option_group_name            = "frappe-read-db-20200831120637864700000001"
  parameter_group_name         = "frappe-read-db-20200831120637864800000002"
  password                     = (sensitive value)
  performance_insights_enabled = false
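For reference, a hedged sketch of the relevant aws_db_instance arguments (the surrounding resources and names here are hypothetical): on a read replica, `name` is the database name inherited from the source, so setting it to a different value forces replacement, while `identifier` is the instance identifier and is free to differ:

```hcl
resource "aws_db_instance" "replica" {
  identifier          = "frappedb-replica"        # instance identifier, free to differ
  replicate_source_db = aws_db_instance.master.id # hypothetical source reference
  name                = "frappedb"                # db name must match the source,
                                                  # otherwise the plan forces replacement
  instance_class      = "db.t3.medium"
}
```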

natalie avatar
natalie

Evaluating terraform Cloud for our team. Wondering if anyone here is using it currently? And maybe can share some pros, cons, regrets, tips, etc? How was the migration from open source Terraform to the cloud, etc? thank you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one con: you’re limited to running their version of terraform and cannot BYOC (bring your own container). the good thing is then you’re limited to running vanilla terraform; the bad thing is you cannot use any wrappers or run alpha versions of terraform.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fortunately, they’ve just released runners for TFC. this was a huge con before that, since it wasn’t possible to use things like the postgres provider to manage a database in a VPC.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

related to this, providers are now easily downloaded at runtime. also was a limitation, but no longer is.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the biggest complaint I hear is the cost of TFC enterprise & business.

Chris Fowles avatar
Chris Fowles

edit: Disregard - see below

another con is that you can’t run your own workers (in your own accounts) without shelling out for a $$$ enterprise contract

so any moderately regulated workload becomes difficult to deal with from a compliance standpoint, because you’re basically giving a 3rd party highly privileged access to your accounts

Chris Fowles avatar
Chris Fowles

you’re probably only going to get Cons in this thread which doesn’t really reflect on the product itself at all. it’s a pretty great solution for anyone who’s tried to automate terraform on their own and felt the pain. for most of us who have kicked the tires it’s frustration that we can’t use it because of tick boxes rather than technical deficiencies.

1
this1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Chris Fowles can you clarify? I thought runners are now supported with the business account.

natalie avatar
natalie

Thank you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
11:10:40 PM
Chris Fowles avatar
Chris Fowles
What's the difference between Terraform Cloud and Terraform Enterprise?
Terraform Enterprise is offered as a private installation. It is designed to suit the needs of organizations with specific requirements for security, compliance and custom operations.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, so agree with @Chris Fowles - I would pick terraform cloud over all the alternatives (e.g. Atlantis, Jenkins, or custom workflow in some other CI/CD platform). What it does, it does very well and better than the alternatives.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, but there’s now a “hybrid” mode where the runners can be a “private installation” but the dashboard is SaaS.

Chris Fowles avatar
Chris Fowles

any doco on that? I’ve not seen that yet

Chris Fowles avatar
Chris Fowles

ahhh ok found it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Announcing HashiCorp Terraform Cloud Business Tier

Today we’re announcing availability of the new Business tier offering for Terraform Cloud which includes enterprise features for advanced security, compliance and governance, the ability to execute multiple runs concurrently, and flexible support options.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

pro: also supports SSO now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

con: (okta only)

Chris Fowles avatar
Chris Fowles

con: Business pricing is “Contact us because we don’t know how to bill this yet”

2
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, hate that.

2020-09-02

Shankar Kumar Chaudhary avatar
Shankar Kumar Chaudhary

when i am doing terragrunt apply in tfstate-backend it is going to create the table and s3 bucket again, and throwing an error. why so?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sounds like after first creating the bucket and table with the module, the step of reimporting the local state was not performed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
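The flow the module’s README describes, roughly (bucket and table names below are hypothetical): first apply with local state to create the S3 bucket and DynamoDB table, then add the backend block and re-run terraform init so the local state gets copied into the bucket. Skipping that init step leaves Terraform unaware the bucket and table already exist, which matches the error above.

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket" # hypothetical bucket name
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tfstate-lock"   # hypothetical table name
    encrypt        = true
  }
}
```

Running `terraform init -force-copy` after adding this block migrates the local state without prompting.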

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
02:34:24 PM
Release notes from terraform avatar
Release notes from terraform
03:04:26 PM

v0.13.2 0.13.2 (September 02, 2020) NEW FEATURES: Network-based Mirrors for Provider Installation: As an addition to the existing capability of “mirroring” providers into the local filesystem, a network mirror allows publishing copies of providers on an HTTP server and using that as an alternative source for provider packages, for situations where directly accessing the origin registries is…

Release v0.13.2 · hashicorp/terraform

0.13.2 (September 02, 2020) NEW FEATURES: Network-based Mirrors for Provider Installation: As an addition to the existing capability of “mirroring” providers into the local filesystem, a network m…

CLI Configuration - Terraform by HashiCorp

The general behavior of the Terraform CLI can be customized using the CLI configuration file.

stefan avatar

Hi. Is there a possibility to use a CMK instead of the KMS default key for encryption at terraform-aws-dynamodb? Thanks.

cloudposse/terraform-aws-dynamodb

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb

Alan Kis avatar
Alan Kis

I see a PR here ^^

cloudposse/terraform-aws-dynamodb

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb

stefan avatar

^^

stefan avatar

Hi. Another question: How can I deactivate the ttl_attribute at terraform-aws-dynamodb? If I set it to null or “” I get an error (because it must have a value). If I avoid the argument it will be enabled with the name “EXPIRES”. I have checked the code in the module. I see no way to disable ttl. Can anyone explain to me how this works?

cloudposse/terraform-aws-dynamodb

Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb

loren avatar

fyi, submitted an issue with tf 0.13 that cloudposse modules may run into, since it impacts conditional resource values (e.g. join("", resource.name.*.attr)) that are later referenced in other data sources. this includes module outputs that are passed to data sources later in a config… https://github.com/hashicorp/terraform/issues/26100

3
RB avatar

oh jeez. ya, all of that should be changed to use the try() function instead. more reason to do that now.

loren avatar

@antonbabenko thought you might also want to be aware, in case you get reports on your modules (i hit it on your vpc module)

loren avatar

@RB unfortunately, try() doesn’t fix it… in the repro case in the issue, this still generates a persistent diff: empty = try(random_pet.empty[0].id, "")

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @loren

RB avatar

interesting so it affects both cases.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should affect our modules when enabled=false I suppose

this1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

both cases, since both cases return “” (empty string)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and TF 0.13 just can’t compare empty strings correctly

loren avatar

pretty much, yeah. one workaround i’ve found is to use a ternary with the same condition that you use for the resource, so this does work: empty = false ? random_pet.empty[0].id : ""

1
loren avatar

if TF 0.13 can evaluate the expression all up front, then it works
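To make the thread’s two patterns concrete, here is a minimal sketch (with a hypothetical `enabled` flag, using the `random_pet` resource from the linked repro):

```hcl
variable "enabled" {
  type    = bool
  default = false
}

resource "random_pet" "example" {
  count = var.enabled ? 1 : 0
}

# join() over a conditional resource: hits the 0.13 persistent-diff issue
# (and wrapping it in try() does not help, per the repro)
output "name_join" {
  value = join("", random_pet.example.*.id)
}

# workaround: ternary on the same condition used for count,
# which Terraform can evaluate up front
output "name_ternary" {
  value = var.enabled ? random_pet.example[0].id : ""
}
```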

jose.amengual avatar
jose.amengual

isn’t it better to wait for them to fix it?

jose.amengual avatar
jose.amengual

before changing every module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t need to change every module since it could affect it only when enabled=false

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but yes, it’s better for them to fix it

jose.amengual avatar
jose.amengual

they get paid for it

loren avatar

they responded and explained why it is happening. it makes sense, though i don’t know what edge cases led them to make the change so that resources with 0 instances are not stored in the state. i’d expect this issue will not be solved quickly, if at all

loren avatar

personally, i’ll be switching to that workaround wherever i can, which i think is a more stable solution anyway

RB avatar

maybe if we all upvote the issue, they will resolve it sooner

loren avatar

please do!

antonbabenko avatar
antonbabenko

Thanks a lot, @loren!

1
Peter Huynh avatar
Peter Huynh

hi all, sometimes, I need to do things outside of terraform, for example provisioning the infra vs updating content (eg putting things into a bucket).

This introduces duplicate declarations of variables, one set for shell and another for terraform.

Has anyone run into something similar? Do you have any advice on how to DRY the config?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what about using https://github.com/scottwinkler/terraform-provider-shell (and have all of that in TF state)

scottwinkler/terraform-provider-shell

Terraform provider for executing shell commands and saving output to state file - scottwinkler/terraform-provider-shell

Peter Huynh avatar
Peter Huynh

thanks, I’ll have a look into that.

tajmahpaul avatar
tajmahpaul

Hey guys, I’m using the git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.10.0 module. Just wondering if changing backup_window on a deployed RDS instance will create a new RDS instance or update the existing one. Also still new to terraform, so maybe there is an easy way to find out? thanks guys

sheldonh avatar
sheldonh

Shouldn’t. Terraform plan can be run to preview before you do anything.

sheldonh avatar
sheldonh

Terraform apply also asks for confirm.

sheldonh avatar
sheldonh

Always test/learn in safe environment :-)

Lastly: RESOURCES = STUFF YOU CREATE; data sources read. If you didn’t create it with terraform, use data and it won’t try to create it.

I.e. vpc, subnets, etc.: use data unless you are creating them.

tajmahpaul avatar
tajmahpaul

Thanks for the reply. Appreciate the information

Matt Gowie avatar
Matt Gowie

Does anyone know a good tool for pulling values from Terraform state outside of terraform itself?

As in, I have a CD process that is running simple bash commands to build and deploy a static site. I’d like to get my CloudFront CDN Distribution ID and the bucket that the static site’s assets should be shipped to from my Terraform state file in S3. I could pull the state file, parse out the outputs I need, and then go about it that way but I am figuring that there must be a tool written around this.

Peter Huynh avatar
Peter Huynh

I was looking for something similar as well. The suggested solution was https://sweetops.slack.com/archives/CB6GHNLG0/p1599083030235300?thread_ts=1599082729.235200&cid=CB6GHNLG0

what about using https://github.com/scottwinkler/terraform-provider-shell (and have all of that in TF state)

Matt Gowie avatar
Matt Gowie

I don’t think that’s what I’m looking for, since I’m talking about totally outside of the context of a Terraform project.

Drew Davies avatar
Drew Davies

We’re in the process of doing the same thing, and we’ve settled on AWS’ SSM Parameter Store, to store key/values as opposed to Terraform outputs, as a source of truth for both Terraform and other tooling (eg. Ansible, GitHub Actions, etc.)

Drew Davies avatar
Drew Davies

aws_ssm_parameter resources to write data to SSM, and aws_ssm_parameter data sources to read data from SSM.

3
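A minimal sketch of that pattern (the parameter name and the CloudFront resource reference here are hypothetical):

```hcl
# Producer side: publish a Terraform-managed value to SSM Parameter Store
resource "aws_ssm_parameter" "cdn_distribution_id" {
  name  = "/static-site/cdn_distribution_id"  # hypothetical parameter name
  type  = "String"
  value = aws_cloudfront_distribution.site.id # hypothetical resource reference
}

# Consumer side: a different root module (or any AWS SDK/CLI caller)
data "aws_ssm_parameter" "cdn_distribution_id" {
  name = "/static-site/cdn_distribution_id"
}
```

Outside Terraform, the same value is readable with `aws ssm get-parameter --name /static-site/cdn_distribution_id`, which is what makes it work as a shared source of truth for CD scripts.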
Peter Huynh avatar
Peter Huynh

that’s a nice idea.

Peter Huynh avatar
Peter Huynh

It does mean the scripts will need to reach into the parameter store for values tho.

Drew Davies avatar
Drew Davies

For sure, but there are lots of SDK’s available for AWS API endpoints.

Peter Huynh avatar
Peter Huynh

One thing I am considering is that the dotenv file and the tfvars file are the same format

corcoran avatar
corcoran

Annoyingly Param Store still isn’t supported by RAM, so you’ll need well defined roles, prefixes and encryption keys tho’ https://docs.aws.amazon.com/ram/latest/userguide/shareable.html

Shareable Resources - AWS Resource Access Manager

AWS RAM lets you share resources that are provisioned and managed in other AWS services. AWS RAM does not let you manage resources, but it does provide the features that let you make resources available across AWS accounts.

corcoran avatar
corcoran
NUM

Namespace Utility Modules (NUM)

Eric Berg avatar
Eric Berg

I had a little bit of a hard time convincing my architect to throw stuff into SSM, but it’s gone very well and he’s really embraced the idea.

Matt Gowie avatar
Matt Gowie

Ah yeah — I was not thinking yesterday. This is a perfect usecase for SSM PStore + Chamber. Appreciate the reminder @Drew Davies!

1
loren avatar

oh. sweet! the terraform registry has versioned docs for providers! looks like the versioned docs go back about a year or so. here’s the earliest versioned docs for the aws and azure providers:
* https://registry.terraform.io/providers/hashicorp/aws/2.33.0/docs
* https://registry.terraform.io/providers/hashicorp/azurerm/1.35.0/docs

1

2020-09-03

sheldonh avatar
sheldonh

Can you use a for iterator with a data source? Just thought about this while looking up a list of github users, for example. Would like to know if that’s possible; didn’t see anything in the docs

loren avatar

iterator, as in count or for_each? if so, sure, certainly

sheldonh avatar
sheldonh

cool. never had the need so just making sure before I wasted more time
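For the github-users case, a sketch using the github provider’s `github_user` data source with `for_each` (the usernames are hypothetical):

```hcl
variable "usernames" {
  type    = set(string)
  default = ["alice", "bob"] # hypothetical users
}

# one data source instance per username
data "github_user" "members" {
  for_each = var.usernames
  username = each.value
}

output "member_ids" {
  value = { for k, u in data.github_user.members : k => u.id }
}
```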

Mike Schueler avatar
Mike Schueler

when running terraform in AWS, with the s3 backend for the statefile, is there any way to create the bucket when running for the first time? in the docs, it just says

This assumes we have a bucket created called mybucket
loren avatar

there is a bit of a chicken/egg thing going on. cloudposse has a pretty good writeup of how to keep it all in terraform… https://github.com/cloudposse/terraform-aws-tfstate-backend#create

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

1
sheldonh avatar
sheldonh

Also fyi terraform cloud can be used for free for state file management and has versioning, locking and more. Might be worth considering. I have been using it exclusively for almost the past year instead of any s3 buckets.

sheldonh avatar
sheldonh

I found a github action that creates the backend in terraform cloud fyi

Cody Moore avatar
Cody Moore

I recently updated to the newest vpc module version https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.17.0 but then got this error for the null resource:

Error: Provider configuration not present

To work with
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3], after
which you can remove the provider configuration again.

Would anyone know how to solve it? I noticed that the 0.13 upgrade docs (https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations) mention this blurb:

In this specific upgrade situation the problem is actually the missing resource block rather than the missing provider block: Terraform would normally refer to the configuration to see if this resource has an explicit provider argument that would override the default strategy for selecting a provider. If you see the above after upgrading, re-add the resource mentioned in the error message until you’ve completed the upgrade.

But I wasn’t sure how to interpret that, or whether it’s something that might have happened with the vpc module upstream of my usage?

pjaudiomv avatar
pjaudiomv

make sure you have the aws provider version set in your provider resource and try running terraform 0.13upgrade

pjaudiomv avatar
pjaudiomv
provider "aws" {
  region  = "us-east-1"
  version = "~> 3.3"
}
pjaudiomv avatar
pjaudiomv

or similar, I think it just can’t be null anymore

Cody Moore avatar
Cody Moore

Is it an aws provider issue? It looks like it’s a null provider issue

Cody Moore avatar
Cody Moore

I am generating this provider block now:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
provider "aws" {
    region = "us-east-1"
}
pjaudiomv avatar
pjaudiomv

oh I see, yea i just skimmed over. I hit this exact issue with aws

pjaudiomv avatar
pjaudiomv

I would try to explicitly set null provider too maybe

Cody Moore avatar
Cody Moore

hmm ok, I’ll give that a shot, thanks

pjaudiomv avatar
pjaudiomv

or try terraform init -reconfigure

Cody Moore avatar
Cody Moore

No luck, it looks like I might need to remove it from state manually?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Cody Moore avatar
Cody Moore

It looks like I was able to replace the state references manually, and that worked

Cody Moore avatar
Cody Moore

For example, every line had:

"provider": "provider[\"registry.terraform.io/-/null\"]",
Cody Moore avatar
Cody Moore

just replaced it with

"provider": "provider[\"registry.terraform.io/hashicorp/null\"]",
Cody Moore avatar
Cody Moore

(then state pushed)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

interesting.. good to know

Alex Jurkiewicz avatar
Alex Jurkiewicz

There is a terraform command to perform the find-replace automatically:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

I’ve seen this a few times with 0.12 -> 0.13 conversions that don’t use terraform 0.13upgrade and get messed up state files. Bit of a pothole IMO.

4
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good tip @Alex Jurkiewicz

RogierD avatar
RogierD

Just ran into this issue too. Perhaps an idea to place this info somewhere?

Error: Provider configuration not present

To work with
module.subnets.module.nat_label.data.null_data_source.tags_as_list_of_maps[3]
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.subnets.module.nat_label.data.null_data_source.tags_as_list_of_maps[3],
after which you can remove the provider configuration again.
Alex Jurkiewicz avatar
Alex Jurkiewicz

You should submit a PR to add this to the 0.13 migration page in the official Terraform docs

Eric Berg avatar
Eric Berg

Regarding config module in terraform-opsgenie-incident-management, what’s the significance of including “repo David…” in the descriptions?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm… is that convention enforced or just an example?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The reason we did this is so we could correlate teams with repositories with stakeholders (e.g. it’s 3am, you get an alert for some service, but the error is not obvious and requires some domain expertise, you don’t know what to do, so who should you escalate to?)

Eric Berg avatar
Eric Berg

Right. Figured, but that’s just text as far as this exercise is concerned, right? That info is consumed by (tired) humans or …something else?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, just consumed by humans. Not used programmatically.

Eric Berg avatar
Eric Berg

Got it. Thanks.

2020-09-04

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Does anybody know a nice way to attach and then detach a security group from an ENI during a single run of a tf module? I’m trying to allow the host running TF temporary access to the box while it deploys, then revoke that later

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)
Terraform thinks user-data is changing when it isn't, resulting in unnecessary resource replacement · Issue #5011 · terraform-providers/terraform-provider-aws

This issue was originally opened by @Lsquared13 as hashicorp/terraform#18343. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version Terr…

RB avatar

Found out recently that a tf plan will update the state file, notably the version

Any good tricks to doing a plan without the state updating ?

RB avatar

I understand you can turn off refresh but that would also inhibit the plan

RB avatar

I’m thinking perhaps we can output the current state to a file, do a terraform plan, and push the outputted state back up ?

what are your thoughts?

loren avatar

i do a terraform plan with a new version all the time. it’s never affected the tfstate

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Keep in mind a plan does a state refresh, so it must update the state file.

RB avatar

here’s an old issue recommending the use of -refresh=false

https://github.com/hashicorp/terraform/issues/3631

loren avatar

perhaps it is updating the local copy. that makes sense. it is certainly not updating the remote state, or at least, not in a way that impacts the ability to run applies with the older version

RB avatar

interesting. i’ll try this out locally and see. this was a concern with my uppers regarding atlantis doing plans across the org so i thought id ask

loren avatar

we had the problem a couple times where someone “accidentally” updated the remote state on a dev environment by using a newer version of terraform than the rest of the team. we implemented strict pins of the required_version in the terraform block, and it’s never been a problem since. upgrades are now very deliberate.

terraform {
  required_version = "0.13.1"
}
thumbsup_all2
RB avatar

yep, i think this is what i’ll have to do as well across a repo before i can add atlantis bot to access the repo

pib avatar


Refreshing Terraform state in-memory prior to plan…
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
I would assume that means it isn’t updating the state, but perhaps it behaves differently if you save the plan to a file?

pib avatar

I guess it does say “the refreshed state” but that doesn’t necessarily mean it isn’t updating the state file with the latest version…

Eric Berg avatar
Eric Berg

Is there a doc on contributing to Cloudposse TF mods?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s some stuff to get you started

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-example-module

Example Terraform Module Scaffolding. Contribute to cloudposse/terraform-example-module development by creating an account on GitHub.

Eric Berg avatar
Eric Berg

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you want to participate in review Pull Requests, send me a DM. We get dozens of PRs and can use all the help we can get.

Eric Berg avatar
Eric Berg

So far so good with the config mod. Our rotations are pretty simple for starters, but we do need things like ignore_members and delete_default_resources in teams, so i’ll submit a PR for those changes in a little while.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ping @Dan Meyers when you need a review since he’s currently the “owner” of that.

1
Eric Berg avatar
Eric Berg

Will do. Thanks, Erik!

2020-09-06

Abel Luck avatar
Abel Luck

anyone know how to do a “find by” type of operation on a list of maps in terraform?

my_objs = [{"foo": "a"}, {"foo": "b"}]
# find_by("foo", "a", my_objs) --> {"foo": "a"}
Abel Luck avatar
Abel Luck

Here’s what I’ve come up with :/

[for o in local.my_objs : o if o.foo == "a"][0]
Chris Fowles avatar
Chris Fowles

yeah, for/if is going to be the way that you’ll need to filter a collection by a property

2020-09-08

Abel Luck avatar
Abel Luck

I’m investigating moving our setup to terragrunt to simplify modules. Looking at the official examples (https://github.com/gruntwork-io/terragrunt-infrastructure-live-example/tree/master/non-prod), I see that they place the region higher in the hierarchy than the stage. Is this common?

Peter Huynh avatar
Peter Huynh

IMO, if it’s the same region between your environments, then that’s fine.

Abel Luck avatar
Abel Luck

I was thinking that if you had a service that was fundamentally cross-region, for example a Jitsi deployment where you want RTC video bridges in regions close to the users, then you would want to place the stage at a higher level than the region

Peter Huynh avatar
Peter Huynh

yeah, you can have it inside each stage.

There are also other scenarios, like AWS ACM or Lambda@Edge, where you need to have them in us-east-1 regardless of where your main region is supposed to be.

Peter Huynh avatar
Peter Huynh

personally, I’ve recently moved away from terragrunt, due to the additional cognitive overhead it presented with regard to the additional wiring.

joshmyers avatar
joshmyers

I have this swapped around

joshmyers avatar
joshmyers
terraform
├── coreeng
│   ├── account.hcl
│   └── global
│       ├── env.hcl
│       └── us-east-1
│           ├── atlantis
│           │   └── terragrunt.hcl
│           ├── ecr
│           │   └── terragrunt.hcl
│           └── region.hcl
├── globals.hcl
├── prod
│   ├── account.hcl
│   └── prod
│       ├── account-service.hcl
│       ├── compliance-service.hcl
│       ├── device-service.hcl
│       ├── env.hcl
│       ├── eu-central-1
│       │   ├── account-service
│       │   │   └── terragrunt.hcl
│       │   ├── device-service
│       │   │   └── terragrunt.hcl
│       │   ├── graph-service
│       │   │   └── terragrunt.hcl
│       │   ├── idp-service
│       │   │   └── terragrunt.hcl
│       │   ├── platform-dependencies
│       │   │   └── terragrunt.hcl
│       │   ├── profile-service
│       │   │   └── terragrunt.hcl
│       │   ├── region.hcl
│       │   ├── resource-service
│       │   │   └── terragrunt.hcl
│       │   └── tpa-service
│       │       └── terragrunt.hcl
│       ├── eu-west-1
│       │   ├── account-service
│       │   │   └── terragrunt.hcl
│       │   ├── buckets
│       │   │   ├── main.tf
│       │   │   └── terragrunt.hcl
│       │   ├── compliance-service
│       │   │   └── terragrunt.hcl
│       │   ├── device-service
│       │   │   └── terragrunt.hcl
│       │   ├── graph-service
│       │   │   └── terragrunt.hcl
│       │   ├── idp-service
│       │   │   └── terragrunt.hcl
│       │   ├── platform-dependencies
│       │   │   └── terragrunt.hcl
│       │   ├── profile-service
│       │   │   └── terragrunt.hcl
│       │   ├── region.hcl
│       │   ├── resource-service
│       │   │   └── terragrunt.hcl
│       │   └── tpa-service
│       │       └── terragrunt.hcl
│       ├── graph-service.hcl
│       ├── idp-service.hcl
│       ├── platform-dependencies.hcl
│       ├── profile-service.hcl
│       ├── resource-service.hcl
│       ├── tpa-service.hcl
│       ├── us-east-1
│       │   ├── account-service
│       │   │   └── terragrunt.hcl
│       │   ├── buckets
│       │   │   ├── main.tf
│       │   │   ├── provider.tf
│       │   │   ├── terragrunt.hcl
│       │   │   └── tfplan
│       │   ├── compliance-service
│       │   │   └── terragrunt.hcl
│       │   ├── device-service
│       │   │   └── terragrunt.hcl
│       │   ├── graph-service
│       │   │   └── terragrunt.hcl
│       │   ├── idp-service
│       │   │   └── terragrunt.hcl
│       │   ├── platform-dependencies
│       │   │   └── terragrunt.hcl
│       │   ├── profile-service
│       │   │   └── terragrunt.hcl
│       │   ├── region.hcl
│       │   ├── resource-service
│       │   │   └── terragrunt.hcl
│       │   └── tpa-service
│       │       └── terragrunt.hcl
│       └── us-west-2
│           ├── account-service
│           │   └── terragrunt.hcl
│           ├── device-service
│           │   └── terragrunt.hcl
│           ├── graph-service
│           │   └── terragrunt.hcl
│           ├── idp-service
│           │   └── terragrunt.hcl
│           ├── platform-dependencies
│           │   └── terragrunt.hcl
│           ├── profile-service
│           │   └── terragrunt.hcl
│           ├── region.hcl
│           ├── resource-service
│           │   └── terragrunt.hcl
│           └── tpa-service
│               └── terragrunt.hcl
├── terragrunt.hcl
└── test
    ├── account.hcl
    ├── dev
    │   ├── account-service.hcl
    │   ├── compliance-service.hcl
    │   ├── device-service.hcl
    │   ├── env.hcl
    │   ├── eu-central-1
    │   │   ├── account-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── compliance-service
    │   │   ├── device-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── graph-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── idp-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── platform-dependencies
    │   │   │   └── terragrunt.hcl
    │   │   ├── profile-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── region.hcl
    │   │   ├── resource-service
    │   │   │   └── terragrunt.hcl
    │   │   └── tpa-service
    │   │       └── terragrunt.hcl
    │   ├── eu-west-1
    │   │   ├── account-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── compliance-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── device-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── graph-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── idp-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── platform-dependencies
    │   │   │   └── terragrunt.hcl
    │   │   ├── profile-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── region.hcl
    │   │   ├── resource-service
    │   │   │   └── terragrunt.hcl
    │   │   └── tpa-service
    │   │       └── terragrunt.hcl
    │   ├── graph-service.hcl
    │   ├── idp-service.hcl
    │   ├── platform-dependencies.hcl
    │   ├── profile-service.hcl
    │   ├── resource-service.hcl
    │   ├── tpa-service.hcl
    │   ├── us-east-1
    │   │   ├── account-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── compliance-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── device-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── graph-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── idp-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── platform-dependencies
    │   │   │   └── terragrunt.hcl
    │   │   ├── profile-service
    │   │   │   └── terragrunt.hcl
    │   │   ├── region.hcl
    │   │   ├── resource-service
    │   │   │   └── terragrunt.hcl
    │   │   └── tpa-service
    │   │       └── terragrunt.hcl
    │   └── us-west-2
    │       ├── account-service
    │       │   └── terragrunt.hcl
    │       ├── device-service
    │       │   └── terragrunt.hcl
    │       ├── graph-service
    │       │   └── terragrunt.hcl
    │       ├── idp-service
    │       │   └── terragrunt.hcl
    │       ├── platform-dependencies
    │       │   └── terragrunt.hcl
    │       ├── profile-service
    │       │   └── terragrunt.hcl
    │       ├── region.hcl
    │       ├── resource-service
    │       │   └── terragrunt.hcl
    │       └── tpa-service
    │           └── terragrunt.hcl
    └── qa
        ├── account-service.hcl
        ├── compliance-service.hcl
        ├── device-service.hcl
        ├── env.hcl
        ├── eu-central-1
        │   ├── account-service
        │   │   └── terragrunt.hcl
        │   ├── device-service
        │   │   └── terragrunt.hcl
        │   ├── graph-service
        │   │   └── terragrunt.hcl
        │   ├── idp-service
        │   │   └── terragrunt.hcl
        │   ├── platform-dependencies
        │   │   └── terragrunt.hcl
        │   ├── profile-service
        │   │   └── terragrunt.hcl
        │   ├── region.hcl
        │   ├── resource-service
        │   │   └── terragrunt.hcl
        │   └── tpa-service
        │       └── terragrunt.hcl
        ├── eu-west-1
        │   ├── account-service
        │   │   └── terragrunt.hcl
        │   ├── compliance-service
        │   │   └── terragrunt.hcl
        │   ├── device-service
        │   │   └── terragrunt.hcl
        │   ├── graph-service
        │   │   └── terragrunt.hcl
        │   ├── idp-service
        │   │   └── terragrunt.hcl
        │   ├── platform-dependencies
        │   │   └── terragrunt.hcl
        │   ├── profile-service
        │   │   └── terragrunt.hcl
        │   ├── region.hcl
        │   ├── resource-service
        │   │   └── terragrunt.hcl
        │   └── tpa-service
        │       └── terragrunt.hcl
        ├── graph-service.hcl
        ├── idp-service.hcl
        ├── platform-dependencies.hcl
        ├── profile-service.hcl
        ├── resource-service.hcl
        ├── tpa-service.hcl
        ├── us-east-1
        │   ├── account-service
        │   │   └── terragrunt.hcl
        │   ├── compliance-service
        │   │   └── terragrunt.hcl
        │   ├── device-service
        │   │   └── terragrunt.hcl
        │   ├── graph-service
        │   │   └── terragrunt.hcl
        │   ├── idp-service
        │   │   └── terragrunt.hcl
        │   ├── platform-dependencies
        │   │   └── terragrunt.hcl
        │   ├── profile-service
        │   │   └── terragrunt.hcl
        │   ├── region.hcl
        │   ├── resource-service
        │   │   └── terragrunt.hcl
        │   └── tpa-service
        │       └── terragrunt.hcl
        └── us-west-2
            ├── account-service
            │   └── terragrunt.hcl
            ├── device-service
            │   └── terragrunt.hcl
            ├── graph-service
            │   └── terragrunt.hcl
            ├── idp-service
            │   └── terragrunt.hcl
            ├── platform-dependencies
            │   └── terragrunt.hcl
            ├── profile-service
            │   └── terragrunt.hcl
            ├── region.hcl
            ├── resource-service
            │   └── terragrunt.hcl
            └── tpa-service
                └── terragrunt.hcl

127 directories, 159 files
joshmyers avatar
joshmyers

terraform/$ACCOUNT/$ENVIRONMENT/$REGION/$THING

joshmyers avatar
joshmyers

and using Atlantis/OPA integration

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also consider cross posting in #terragrunt for more feedback

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

personally, I don’t like imputing the state by filesystem organization (the “terragrunt” way).

This year, we’ve moved to defining the entire state of an environment (e.g. prod us-west-2) in a single YAML file used by all terraform workspaces for that environment. That way our project folder hierarchy is flat (e.g. projects/eks, or projects/vpc), and each project folder holds the business logic in terraform. The relationships between eks, vpc, region, environment, etc. are all defined in a single configuration file called uw2-prod.yaml (for example). We make heavy use of terraform remote state to pass state information between project workspaces. Best of all, the strategy works natively with terraform cloud, but terragrunt tooling does not. Using #terragrunt with terraform cloud means terragrunt needs to be triggered by some other CI/CD system.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
projects:
  globals:
    stage: prod
    account_number: "xxxxxx"

  terraform:
    dns-delegated:
      vars:
        zone_config:
        - subdomain: prod
          zone_name: uw2.ourcompany.net

    eks:
      command: "/usr/bin/terraform-0.13"
      vars:
        node_groups:
          main: &standard_node_group
            availability_zones: null
            attributes: null
            desired_size: null
            disk_size: null
            enable_cluster_autoscaler: null
            instance_types: null
            kubernetes_labels: null
            kubernetes_version: null
            max_size: 2
            min_size: null
            tags: null
        gpu:
          <<: *standard_node_group
          instance_types: ["g4dn.xlarge"]
          kubernetes_labels:
            ourcompany.net/instance-class: GPU

    eks-iam:
      command: "/usr/bin/terraform-0.13"
      vars: {}

    vpc:
      vars:
        cidr_block: "10.101.0.0/18"

  helmfile:
    autoscaler:
      vars:
        installed: true

    aws-node-termination-handler:
      vars:
        installed: true

    cert-manager:
      vars:
        installed: true
        ingress_shim_default_issuer_name: "letsencrypt-prod"

    echo-server:
      vars:
        installed: false

    external-dns:
      vars:
        installed: true

    idp-roles:
      vars:
        installed: true

    ingress-nginx:
      vars:
        installed: true

    reloader:
      vars:
        installed: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here’s a sneak peek of what that configuration looks like. No copying dozens of files around to create a new environment.

joshmyers avatar
joshmyers

Do you have some type of hierarchy of variables that get loaded in? And if so, how does something like Atlantis make sure it triggers for the right thing, e.g. if eu-west-1 vars change and it is a single file, is that for prod or dev? Personally I don’t mind the opinionated hierarchy. It isn’t too much boilerplate code so it’s relatively DRY, it’s pretty obvious what is going on, and you can’t shoot yourself in the foot easily. Folks who are new to Terraform get it

joshmyers avatar
joshmyers

Are y’all using Terraform Cloud now rather than e.g. Atlantis?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so basically managing terraform cloud is done using terraform cloud. So the workspaces are defined by the configurations. When the config/uw2-prod.yaml file changes, terraform cloud picks up those changes and redeploys the configuration (e.g. tfvars) for all workspaces. And using triggers, we can depend on the workspace configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem with atlantis is it cannot be triggered from other pipelines (not elegantly at least. e.g. I don’t consider a bot running /atlantis plan a solution). But with terraform cloud, it can be triggered from other pipelines. This is the main reason we’ve reduced usage of atlantis in new engagements.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


hierarchy of variables that get loaded
yes, so each project can still define defaults.auto.tfvars for settings shared across all projects.

Then there’s the concept of globals for an environment:

projects:
  globals:
    stage: prod
    account_number: "xxxxxx"
  terraform:
    dns-delegated:
      vars:
        zone_config:
        - subdomain: prod
          zone_name: uw2.ourcompany.net

Those globals get merged with vars

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So changes to projects/ causes a plan for all environments (since it affects all environments)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Changes to a .yaml file cause a plan for the environment whose yaml was modified. Once it is modified, it cascades and triggers all dependent workspaces to plan.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The important aspects to note:

  1. state stored in git via yaml configurations for each environment
  2. terraform cloud manages terraform cloud workspaces defined in the yaml
  3. terraform projects use settings defined in workspace (that were set by the terraform cloud configuration). this is why it’s all “native” terraform.
  4. triggers are used to auto plan/apply dependencies.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you know, in our original approach we started with our little project called tfenv to terraform init -from-module=... everywhere. We outgrew that because there were just too many envs. We were fighting terraform. Terraform 0.12 came out and changed the way init -from-module worked. The lesson taught us to stick to vanilla terraform and find an elegant way to work with it. The problem with terragrunt, in my book, is that it is diverging from what terraform does natively. It provided a lot more value before 0.12 and 0.13, but that value is diminishing.

https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

The Wrong Abstraction — Sandi Metz

I’ve been thinking about the consequences of the “wrong abstraction.” My RailsConf 2014 “all the little things” talk included a section where I asserted: > duplication is far cheaper than the wrong abstraction And in the summary, I went on to advise: >

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the other thing we outgrew was the one-repo-per-account approach. it was impossible (very tedious) for aws account automation and general git automation.

Abel Luck avatar
Abel Luck

Woah, ok, many questions here.

Abel Luck avatar
Abel Luck

We’re having trouble managing our terraform structure. We aren’t a typical SaaS that has one product with dev/qa/prod stages; rather, we operate kind of like a consultancy (though we’re not) and deploy the same solution multiple times for different clients. So we have heavy, heavy re-use of all our infra code.

We are using vanilla terraform now with a structure like so:

Abel Luck avatar
Abel Luck
example
├── modules/
│   ├── other-stuff/
│   ├── ssm/
│   └── vpc/
└── ops/
    ├── dev/
    │   ├── ansible/
    │   ├── packer/
    │   └── terraform/
    │       ├── other-stuff/
    │       │   └── main.tf
    │       ├── ssm/
    │       │   └── main.tf
    │       └── vpc/
    │           └── main.tf
    ├── prod-client1/
    │   ├── ansible/
    │   ├── packer/
    │   └── terraform/
    │       ├── other-stuff/
    │       │   └── main.tf
    │       ├── ssm/
    │       │   └── main.tf
    │       └── vpc/
    │           └── main.tf
    └── test-client1/
        ├── ansible/
        ├── packer/
        └── terraform/
            ├── other-stuff/
            │   └── main.tf
            ├── ssm/
            │   └── main.tf
            └── vpc/
                └── main.tf
Abel Luck avatar
Abel Luck

but with more prod, more test, more clients, more modules, more everything. it is gnarly.

Also.. every one of those client roots is its own AWS account.

Abel Luck avatar
Abel Luck

each of the ops/*-client/terraform/$module/ root modules uses relative paths to ../../../../modules/$module

Abel Luck avatar
Abel Luck

not shown of course are the vars and other data specific to each deployment

Abel Luck avatar
Abel Luck

there is so much duplication… and things start to diverge over time.. one client doesn’t use Cloudflare so we wire up a special dns provider in one of their root modules

Abel Luck avatar
Abel Luck

it’s a massive headache.. hence why we are looking at terragrunt

Abel Luck avatar
Abel Luck

We seem to be a bit behind the status quo, as I’ve noticed more people being vocal about moving away from terragrunt as terraform improves

Abel Luck avatar
Abel Luck

The single project yaml definition looks great, but is it only available on terraform cloud? What other options do we have?

Abel Luck avatar
Abel Luck

Creating a new deployment when using Terragrunt will still require copying a bunch of files and a directory tree, but at least that content is very thin.. it’s all just vars, not actual terraform code.

1
Abel Luck avatar
Abel Luck

Now I’m unsure.. Is that project config (config/uw2-prod.yaml) a terraform cloud feature or tooling from cloudposse?

Abel Luck avatar
Abel Luck

We also make heavy use of remote state, and any attempt to generalize the config is intractable as you cannot use vars in tf backend configs.
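On the backend limitation: backends can’t interpolate variables, but terraform init does accept -backend-config files, so the per-client values can at least live outside the .tf code (a sketch; the file names and values are hypothetical):

```hcl
# backend.tf - declare only the static skeleton
terraform {
  backend "s3" {}
}

# env/prod-client1.backend.hcl - the varying values, e.g.:
#   bucket = "prod-client1-tfstate"
#   key    = "vpc/terraform.tfstate"
#   region = "eu-west-1"

# supplied at init time:
#   terraform init -backend-config=env/prod-client1.backend.hcl
```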

Abel Luck avatar
Abel Luck


Yes, so basically managing terraform cloud is done using terraform cloud. So the workspaces are defined by the configurations. When the config/uw2-prod.yaml file changes, terraform cloud picks up those changes and redeploys the configuration (e.g. tfvars) for all workspaces. And using triggers, we can depend on the workspace configuration.

Abel Luck avatar
Abel Luck

Lot to unpack there. Sounds like cloudposse tooling?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s our own strategy, but terraform cloud doesn’t allow you to bring your own tools. Therefore the solution is technically vanilla terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For the coldstart, though, we use a tool written in variant2 to bring up the environment, but for day-2 operations everything is done with terraform cloud

gugaiz avatar

Hi, I am trying to read the output security_group_id from the elastic-beanstalk-environment module, but I am getting This value does not have any attributes when calling it. Any ideas?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Do you have a count set for the module?

gugaiz avatar

@Alex Jurkiewicz yes..

Alex Jurkiewicz avatar
Alex Jurkiewicz

Your reference is wrong

Alex Jurkiewicz avatar
Alex Jurkiewicz

it needs a [0] probably

Alex Jurkiewicz avatar
Alex Jurkiewicz

The error is saying “the thing you are trying to read the attribute security_group_id on is not a map/object”

gugaiz avatar

thanks.. I guess it has to be module.app_beanstalk_environment.0.security_group_id

gugaiz avatar

I wish that would have been the error, it would be easier to find the issue

Alex Jurkiewicz avatar
Alex Jurkiewicz

no, it needs to be module.app_beanstalk_environment[0].security_group_id

gugaiz avatar

I think both work.. thanks!!

1

2020-09-09

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Is it possible to just simply ‘set’ a block? I have a basic object containing several volume definitions:

  container_volumes          = {
    files_staging = {
      name = var.file_staging_bucket
      docker_volume_configuration = [{
        scope         = "shared"
        autoprovision = false
        driver        = "rexray/s3fs"
      }]
    }
    files_store = {
      name = var.file_store_bucket
      docker_volume_configuration = [{
        scope         = "shared"
        autoprovision = false
        driver        = "rexray/s3fs"
      }]
    }
  }

And would rather like to use it like so:

 volume = toset([
   local.container_volumes["files_staging"],
   local.container_volumes["files_store"]
 ])

Unfortunately, TF whines that it wants it to be a block rather than a simple assignment (even though it’s literally the same thing in the underlying JSON…). Please tell me this isn’t how you’re meant to work with these stupid blocks:

  dynamic "volume" {
    for_each = ["files_staging", "files_store",]
    content { 
      name = volume.value
      dynamic "docker_volume_configuration" {
        for_each = local.container_volumes[volume.value].docker_volume_configuration
        content {
          scope = docker_volume_configuration.value.scope
          autoprovision = docker_volume_configuration.value.autoprovision
          driver = docker_volume_configuration.value.driver
        }
      }
    }
  }
Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I just want to be able to set it with an =

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I really, really hate block syntax. I don’t think I’ve seen a single case where I prefer it over just a list and a for loop.

Jaeson avatar

I’ve just hit a wall similar to this. I’m trying to define a number of S3 buckets. Some of them have lifecycle_rules, one uses encryption, and one is publicly exposed. This translates to about 2-3 different blocks for each use-case. It would be great if I could either assign a block directly, pass a block, or have it work with empty blocks pre-defined. I’m wondering if there’s something fundamental that I missed with TF 0.12 syntax.

Jaeson avatar
Feature Request: Set block arguments using a map · Issue #21458 · hashicorp/terraform

Current Terraform Version Terraform v0.12.0 Use-cases I would like to be able to set block arguments from a map of key-values. For example, suppose I have a map containing four argument values for …
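Until something like that issue lands, the usual workaround for the per-bucket case is a dynamic block driven by per-bucket data, where an empty list simply emits no blocks (a sketch; the variable shape is hypothetical):

```hcl
variable "buckets" {
  type = map(object({
    lifecycle_rules = list(object({
      prefix          = string
      expiration_days = number
    }))
  }))
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = each.key

  # Buckets with an empty lifecycle_rules list get no blocks at all.
  dynamic "lifecycle_rule" {
    for_each = each.value.lifecycle_rules
    content {
      enabled = true
      prefix  = lifecycle_rule.value.prefix
      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}
```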

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

That is absolutely and utterly infuriating.

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

prioritizing the reader over the writer grumble

1
Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I would like to start a petition to fork Terraform for the express purpose of removing attribute blocks because they’re dumb.

10002
Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, they are a poor hack for the lack of more expressive variable typing

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

What really annoys me about them is it just renders down to a json array anyway, yet they stop you from just specifying an array to replace them for “readability” (see: verbosity)

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

Then they had to add a whole bunch of extra syntactic rubbish to make up for it, like the ‘dynamic’ keyword

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah. The purpose of HCL is to be more declarative than full code. dynamic is extra unnecessary code

Jaeson avatar

I’m so glad to hear others express this. I’d been thinking I was alone, and now I feel validated.

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I’m also glad I’m not alone. Has this been brought up on github anywhere? I’d even really like to see just an optional flag that says “–allow-assigning-to-blocks” or something, so the user has to acknowledge it’s not best practice but let us do it anyway

Alex Jurkiewicz avatar
Alex Jurkiewicz

You used to be able to do that, there was a hacky way to do so. They patched it out several versions ago (around 0.9/0.10 IIRC).

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

I recall, it was “accidentally allowed” from what I read

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, unintentionally

Alex Jurkiewicz avatar
Alex Jurkiewicz

Hashicorp aren’t interested in giving you multiple ways to declare the same graph. Which IMO is a good thing. I don’t want two ways to define these sub-blocks. That’s a price too high

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

That’s fair, but I think the way that they decided on is far too limited and doesn’t mesh with the rest of the functionality they’ve given the language. Unfortunately I can’t think of a reasonable suggestion to fix the current system other than “just let us do it the natural way”

Alex Jurkiewicz avatar
Alex Jurkiewicz

I agree. I think they’ve painted themselves into a corner but I hope not

David J. M. Karlsen avatar
David J. M. Karlsen

hm, something broke badly with cloudposse/ecr/aws v0.27.0

David J. M. Karlsen avatar
David J. M. Karlsen

even if I give a name as name = format("%s/%s/%s", var.orgPrefix, var.system, each.value)

Makeshift (Connor Bell) avatar
Makeshift (Connor Bell)

var.name is passed into terraform-null-label for processing into the prefix. As far as I can tell it shouldn’t replace slashes, though. It might be easier to pass the parts through separately and let null-label do the work of creating the prefix for you

name = each.value
namespace = var.orgPrefix
stage = var.system
David J. M. Karlsen avatar
David J. M. Karlsen

yes, then it works, but still - why mangle the name

David J. M. Karlsen avatar
David J. M. Karlsen

I just blew away all my repos, which is of course a rookie mistake since I did not check the plan properly first

David J. M. Karlsen avatar
David J. M. Karlsen

it will strip the /

Chris Warren avatar
Chris Warren

Hi everyone :wave: I’ve got a question about the new module features (count, for_each).. I want to access an output from a module with a count (it’s a true/false flag to create only if the flag is true) and use that output conditionally… but I get an error that it is an empty tuple. Anyone have experience with this?

Chris Warren avatar
Chris Warren

I promise I spent a few hours on this :smile: just came across try(...) which seems to solve my problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in modules with count, the output is a list, and you need to use something like join("", xxxxx.*.name) to get a single item

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in modules with for_each, the output is a map
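The difference in output shapes can be sketched like this (module paths and outputs are hypothetical):

```hcl
# count: the reference is a tuple, empty when count == 0
module "thing" {
  source = "./modules/thing"   # assumed to expose a string output "name"
  count  = var.enabled ? 1 : 0
}

output "thing_name" {
  # join() flattens the 0- or 1-element list of strings safely;
  # note this only works when the output itself is a string.
  value = join("", module.thing[*].name)
}

# for_each: the reference is a map keyed by each.key
module "things" {
  source   = "./modules/thing"
  for_each = toset(["a", "b"])
}

output "thing_names" {
  value = { for k, m in module.things : k => m.name }
}
```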

Chris Warren avatar
Chris Warren

this is a map that I’m working on

Chris Warren avatar
Chris Warren

I get an error about it being an empty tuple

Chris Warren avatar
Chris Warren
output "alb_data" {
  value = {
    "public" = {
      "http_listener_arn"  = coalescelist(aws_alb_listener.http_public_redirect[*].arn, ["none"])[0]
      "https_listener_arn" = aws_alb_listener.https_public.arn
      "dns"                = aws_lb.public.dns_name
    }
    "private" = {
      "http_listener_arn"  = coalescelist(aws_alb_listener.http_private_redirect[*].arn, ["none"])[0]
      "https_listener_arn" = aws_alb_listener.https_private.arn
      "dns"                = aws_lb.private.dns_name
    }
  }
}
Chris Warren avatar
Chris Warren
alb                          = var.create_alb ? module.alb[0].alb_data : var.params.alb
Chris Warren avatar
Chris Warren
Error: Invalid index

  on ../../../modules/services/app/main.tf line 8, in locals:
   8:   alb                          = var.create_alb ? module.alb[0].alb_data : var.params.alb
    |----------------
    | module.alb is empty tuple

The given key does not identify an element in this collection value.
Chris Warren avatar
Chris Warren

changing to this seems promising

Chris Warren avatar
Chris Warren
alb                          = try(module.alb[0].alb_data, var.params.alb)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s empty, try will not help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it just hides issues

Chris Warren avatar
Chris Warren

hmm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what does module.alb look like?

Chris Warren avatar
Chris Warren

pretty standard

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this var.create_alb ? module.alb[0].alb_data : var.params.alb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

is the same as var.create_alb ? join("", module.alb.*.alb_data) : var.params.alb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and same as try(module.alb[0].alb_data, var.params.alb)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all should work, but they have slightly different behaviors

Chris Warren avatar
Chris Warren

I see.. is the empty tuple not considered an error?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you show how you invoke the module?

Chris Warren avatar
Chris Warren
module "alb" {
  source = "./modules/ALB"

  count = var.create_alb ? 1 : 0

  alb_listener_certificate_arn = local.alb_listener_certificate_arn
  cluster_name                 = local.cluster_name
  enable_alb_logs              = var.enable_alb_logs
  env                          = local.env
  private_subnet_ids           = local.private_subnet_ids
  public_subnet_ids            = local.public_subnet_ids
  service_name                 = var.service_name
  vpc_global_cidr              = local.vpc_global_cidr
  vpc_id                       = local.vpc_id
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you correctly set the variable var.create_alb in both the module and the invocation of the module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it sounds like it’s false here

module "alb" {
  source = "./modules/ALB"
  count = var.create_alb ? 1 : 0
Chris Warren avatar
Chris Warren
variable "create_alb" {
  description = "Set to true to create ALB for this service, leave false to remain on shared cluster ALB"
  default     = false
}

is in my variables.tf file on the module… most of the invocations here will have this set to false

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and true here

alb                          = var.create_alb ? module.alb[0].alb_data : var.params.alb
Chris Warren avatar
Chris Warren

this is in the same module though, how can they have different values?

Chris Warren avatar
Chris Warren
alb                          = var.create_alb ? module.alb[0].alb_data : var.params.alb

and

module "alb" {
  source = "./modules/ALB"
  count = var.create_alb ? 1 : 0

are in the same module

Chris Warren avatar
Chris Warren

the first is a local

Chris Warren avatar
Chris Warren

that i use for setting listener rules and such

Chris Warren avatar
Chris Warren

if I try the join function w/ empty string I get this:

Error: Invalid function argument

  on ../../../modules/services/app/main.tf line 8, in locals:
   8:   alb                          = var.create_alb == true ? join("",module.alb[0].alb_data) : var.params.alb
    |----------------
    | module.alb[0].alb_data is object with 2 attributes

Invalid value for "lists" parameter: list of string required.
Chris Warren avatar
Chris Warren

I feel like perhaps I’m making a silly mistake somewhere..

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this module.alb[0] gets the first item from the list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should be join("",module.alb.*.alb_data)

Chris Warren avatar
Chris Warren

ah I see

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where module.alb.*. is list

Chris Warren avatar
Chris Warren

you know the try(...) actually is working very well here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes try does the same

Chris Warren avatar
Chris Warren

hmmm

Chris Warren avatar
Chris Warren
Error: Invalid function argument

  on ../../../modules/services/app/main.tf line 8, in locals:
   8:   alb                          = var.create_alb ? join("",module.alb.*.alb_data) : var.params.alb
    |----------------
    | module.alb is tuple with 1 element

Invalid value for "lists" parameter: element 0: string required.
Chris Warren avatar
Chris Warren

@Andriy Knysh (Cloud Posse) - thank you so much for helping me talk through this issue, I really appreciate it and everything else CP has provided the community! I am happy w/ current solution but still interested in talking this through… but if you are busy no need to continue here!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you share the entire code, we can run a plan and see what happens (it’s difficult to understand anything looking at snippets of code)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module.alb.*.alb_data

and

var.params.alb
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should be the same type

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and since alb_data is not a string, join(""…) will not work here (sorry, did not notice that)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so try is the best in this case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(but still does not explain the original issue you were seeing)

Chris Warren avatar
Chris Warren

hmm this confuses me more or maybe sheds some light…

Error: Inconsistent conditional result types

  on ../../../modules/services/app/main.tf line 8, in locals:
   8:   alb                          = var.create_alb ? module.alb.*.alb_data : var.params.alb
    |----------------
    | module.alb is tuple with 1 element
    | var.create_alb is true
    | var.params.alb is object with 2 attributes
Chris Warren avatar
Chris Warren

just playing around w/ it trying to understand

Chris Warren avatar
Chris Warren
Error: Inconsistent conditional result types

  on ../../../modules/services/app/main.tf line 8, in locals:
   8:   alb                          = var.create_alb ? module.alb.*.alb_data : var.params.alb
    |----------------
    | module.alb is empty tuple
    | var.create_alb is false
    | var.params.alb is object with 2 attributes

The true and false result expressions must have consistent types. The given
expressions are tuple and object, respectively.
Chris Warren avatar
Chris Warren

it says module.alb is a tuple with 1 element but I’m trying to refer to the alb_data output.. not module.alb

Chris Warren avatar
Chris Warren

the 2nd output I understand… module.alb is empty because we don’t create the module since it has count set to 0

Chris Warren avatar
Chris Warren

I try to avoid using * since 0.12 came out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try(module.alb[0].alb_data, var.params.alb) should work for you
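
Summing up the thread, the working pattern (using the names from the snippets above) looks like this:

```hcl
module "alb" {
  source = "./modules/ALB"
  count  = var.create_alb ? 1 : 0
  # ... inputs as above ...
}

locals {
  # try() returns module.alb[0].alb_data when count = 1, and falls back to
  # var.params.alb when module.alb is an empty tuple (count = 0).
  # join("", ...) cannot be used here: alb_data is an object, and join
  # only accepts lists of strings.
  alb = try(module.alb[0].alb_data, var.params.alb)
}
```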

2020-09-10

Abel Luck avatar
Abel Luck

Being able to forward submodule outputs with output "foo" { value = module.foo } is very handy. Is there a way to do an export like that, but remove the layer of indirection? So that all of the outputs of foo are available as top level outputs of the exporting module?

Alex Jurkiewicz avatar
Alex Jurkiewicz

no, outputs can only be created explicitly

RB avatar

Maybe if a tool was written to parse the tf (HCL), read all the modules and outputs, then check whether every module was outputted. If not, flag it

RB avatar

Or better yet, make the tool dump the module outputs if they are missing

sheldonh avatar
sheldonh

I thought you could export the entire module output as a full object. Is that only for resources?

RB avatar

you can export the entire module reference as an output of the current module

1
RB avatar
module "bananas" {
  source = "./bananas"
}

output "module_bananas" {
  value = module.bananas
}
Abel Luck avatar
Abel Luck

We need a way to splat an output onto the module

output * {
  value = module.bananas
}
Luke Maslany avatar
Luke Maslany

Hello all. I’m looking for a pointer to some guidance/best practice for cycling IAM access keys as part of a terraformed deployment pipeline. Any recommendations?

pjaudiomv avatar
pjaudiomv

I version the resource which forces a recreation
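
One way to sketch the "version the resource" idea (the variable and resource names here are hypothetical, and it assumes an aws_iam_user.deploy defined elsewhere):

```hcl
variable "key_version" {
  type        = string
  description = "Bump this value to force the access key to be recreated"
  default     = "v1"
}

# Keying the resource on the version means changing key_version destroys the
# old key instance and creates a new one on the next apply.
resource "aws_iam_access_key" "deploy" {
  for_each = toset([var.key_version])
  user     = aws_iam_user.deploy.name
}
```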

sheldonh avatar
sheldonh

I think tainting the resource also does this, but it’s something you’d do on demand. Anyone?

1
pjaudiomv avatar
pjaudiomv

Yea, tainting will totally work too. I have a pipeline that PGP-encrypts the secret, then decrypts it and stores the creds in Vault. Any consumer of the creds uses Vault to retrieve them; this allows for seamless rotation.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Luke Maslany can you add some more details about what technology you’re using for your pipeline? Perhaps there are alternatives to using/rotating IAM access keys and instead using runners with service accounts.

Luke Maslany avatar
Luke Maslany

Thanks both.

I might be overthinking this then. We do have pipelines that store IAM keys in the AWS SSM parameter store.

In my mind I was looking to identify a way to have terraform toggle between the two access keys associated with an account.

I was thinking to have terraform recreate the key that wasn’t in active use, then use the new key to complete the terraform execution.

If the new deployment failed, the old key would be unaffected and would continue to work.

If the deployment succeeded, the production instances would now be running with the new key, and the next time a deployment was run it’d replace the old key as part of the new execution.

Luke Maslany avatar
Luke Maslany

I’m currently looking at a deployment pipeline that uses a Jenkins job to execute terraform, which creates a new ASG using the most recent AMI of an immutable image, plus user data created through the interpolation of a file template. The interpolated file template contains the access key. The user data then performs a transform on the application’s config file on the instances at runtime.

I am not thrilled about the current solution as the keys are visible in the user data for the instance.

However I am not sure how quickly I will be able to unpick the current implementation as it uses multiple nested modules, running some pretty old versions of terraform.

With that in mind, I’ve been looking to see if there was a quick win that would allow me to cycle the keys on each deployment, by adding some logic into the module that currently provisions the aws_iam_access_key to swap between key1 and key2 depending on which key was already in active use.

I have a feeling though it is going to be quicker to just fork the current module and update the user data script to pull the key direct from the SSM Parameter Store at runtime using an IAM role on the instance.

Kelvin Tan avatar
Kelvin Tan

hi folks! had a question (similar to this old issue) on passing GCP service account credentials through to the atlantis environment. We currently use terraform to host atlantis on an AWS ECS cluster, and would prefer not to keep the credentials file in GitHub or manually bake it wholesale into the task definition. Was wondering if there was an easy way to reference the required credentials.

Right now thinking of either placing it on AWS secrets manager or SSM parameter store, then querying it through the provider module and passing through to the google provider. Open to any other ideas on this

how to pass in google provider credentials file to each run · Issue #223 · runatlantis/atlantis

Firstly, we're using the google provider https://www.terraform.io/docs/providers/google/index.html which makes use of a local service account credentials file to execute terraform. Second, we…

jose.amengual avatar
jose.amengual

this is better in the #atlantis channel

jose.amengual avatar
jose.amengual

look at the Cloudposse atlantis module and see how they use Parameter Store

1
jose.amengual avatar
jose.amengual

you can do somemething similar

Jaeson avatar

I’m having issues with TF being too pedantic about how I set up my IaC. I’m not really sure what the name for it is, but it’s the way that it expects block syntax, doesn’t accept the same settings as arguments, and doesn’t allow for automation in the block syntax, at least as I see it. In the example below, I want to define a number of S3 buckets as part of the IaC for a microservice-based application. If I had started with boto3, I’d be done by now, and have all the flexibility I needed. I’m pretty frustrated with this. Anyway, *is there any way to do what I have done below in a more DRY / maintainable way*? I just discovered another variation of the bucket which also appears to be defined in block syntax – one of the buckets is encrypted. So that means another 2-3 blocks for just that bucket.

pib avatar

You could probably use dynamic blocks to do what you want https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

Jaeson avatar

I did make that attempt. I wasn’t able to get it to work.

pib avatar

The way that S3 is configured in terraform doesn’t really lend itself to making a generic module that covers all possibilities since there are so many options that aren’t separate resources.

Jaeson avatar

.. and since those options are implemented as blocks. ( with all the restrictions of block syntax )

pib avatar

Sometimes blocks in blocks

Jaeson avatar

yes. blocks in blocks. sometimes nesting to a stupid degree.

pib avatar

Cloudwatch is similarly annoying

Jaeson avatar

I think that feeling comes from the overhead necessary to implement a declarative language. At least with CF, I never find myself poring over blog posts trying to figure out how for / for_each / objects work for a new use-case, only to find out that the way I wanted to do something was decided by the TF team to be not-a-best-practice. With CF, I more often find myself just realizing that the thing I want to do is missing.

Jaeson avatar

With that said, though, the TF team does seem to be pretty responsive, always asking about the exact use-case the user is trying to accomplish, and often offering a work-around.

Jaeson avatar

… I just feel like I rarely have the time to pull away from getting something working to create such posts.

Jaeson avatar
locals {
  private_buckets = [
    "company-${var.environment}-secrets",

    "company-${var.environment}-service-auth",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-db-connections",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-service-file-upload",
    # object level logging = company-prod-cloudtrail-logs
    "company-${var.environment}-service-feedback"
    # object level logging = company-prod-cloudtrail-logs
  ]

  default_lifecycle = {
    id = "DeleteAfterOneMonth"
    expiration_days = 31
    abort_incomplete_multipart_upload_days = 7
    enabled = false
  }

  private_buckets_w_lifecycles = {
    "company-service-imports" = {
      "name" = "company-${var.environment}-service-imports"
      "lifecycle_rl" = local.default_lifecycle 
    }
  }

  public_object_buckets = [
    # "company-${var.environment}-service-transmit"
  ]

  public_buckets_w_lifecycles = {
    "company-service-transmit" = {
      "name" = "company-${var.environment}-service-transmit"
      "lifecycle_rl" = local.default_lifecycle 
    }
  }

}


resource "aws_s3_bucket" "adv2_priv_bucket" {
    for_each = toset(local.private_buckets)
    bucket = each.value

    tags = local.tags
}


resource "aws_s3_bucket" "adv2_priv_bucket_w_lc" {
    for_each = local.private_buckets_w_lifecycles
    bucket = each.value.name

    lifecycle_rule {
        id = each.value.lifecycle_rl.id
        expiration {
          days = each.value.lifecycle_rl.expiration_days
        }
        enabled = each.value.lifecycle_rl.enabled
        abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
    }

    tags = local.tags
}


resource "aws_s3_bucket" "adv2_pubobj_bucket_w_lc" {
    for_each = local.public_buckets_w_lifecycles
    bucket = each.value.name

    # log to cloudtrail bucket (this is for server logging, not object level logging)
    # logging {
    #   target_bucket = aws_s3_bucket.adv2_cloudtrail_log_bucket.id
    # }

    lifecycle_rule {
        id = each.value.lifecycle_rl.id
        expiration {
          days = each.value.lifecycle_rl.expiration_days
        }
        enabled = each.value.lifecycle_rl.enabled
        abort_incomplete_multipart_upload_days = each.value.lifecycle_rl.abort_incomplete_multipart_upload_days
    }

    tags = local.tags
}


resource "aws_s3_bucket" "adv2_pubobj_bucket" {
    for_each = toset(local.public_object_buckets)
    bucket = each.value

    tags = local.tags
}


resource "aws_s3_bucket_public_access_block" "adv2_priv_s3" {
  for_each = aws_s3_bucket.adv2_priv_bucket

  bucket = each.value.id

  # AWS console language in comments
  # Block public access to buckets and objects granted through new access control lists (ACLs)
  block_public_acls   = true

  # Block public access to buckets and objects granted through any access control lists (ACLs)
  ignore_public_acls = true

  # Block public access to buckets and objects granted through new public bucket or access point policies
  block_public_policy = true

  # Block public and cross-account access to buckets and objects through any public bucket or access point policies
  restrict_public_buckets = true
}


resource "aws_s3_bucket_public_access_block" "adv2_priv_s3_w_lc" {
  for_each = aws_s3_bucket.adv2_priv_bucket_w_lc

  bucket = each.value.id

  # AWS console language in comments
  # Block public access to buckets and objects granted through new access control lists (ACLs)
  block_public_acls   = true

  # Block public access to buckets and objects granted through any access control lists (ACLs)
  ignore_public_acls = true

  # Block public access to buckets and objects granted through new public bucket or access point policies
  block_public_policy = true

  # Block public and cross-account access to buckets and objects through any public bucket or access point policies
  restrict_public_buckets = true
}
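
For the archive: a dynamic-block sketch (untested, and assuming the aws_s3_bucket schema of the time) of how the bucket variants above might collapse into one resource, with buckets that need no lifecycle rule setting lifecycle_rl = null:

```hcl
locals {
  buckets = {
    "company-${var.environment}-secrets" = { lifecycle_rl = null }
    "company-${var.environment}-service-imports" = {
      lifecycle_rl = local.default_lifecycle
    }
  }
}

resource "aws_s3_bucket" "adv2" {
  for_each = local.buckets
  bucket   = each.key
  tags     = local.tags

  dynamic "lifecycle_rule" {
    # Zero iterations when lifecycle_rl is null, one otherwise
    for_each = each.value.lifecycle_rl == null ? [] : [each.value.lifecycle_rl]
    content {
      id                                     = lifecycle_rule.value.id
      enabled                                = lifecycle_rule.value.enabled
      abort_incomplete_multipart_upload_days = lifecycle_rule.value.abort_incomplete_multipart_upload_days
      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}
```
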
Release notes from terraform avatar
Release notes from terraform
03:54:21 PM

v0.14.0-alpha20200910 0.14.0 (Unreleased) ENHANCEMENTS: cli: A new global command line option -chdir=…, placed before the selected subcommand, instructs Terraform to switch to a different working directory before executing the subcommand. This is similar to switching to a new directory with cd before running Terraform, but it avoids changing the state of the calling shell. (https://github.com/hashicorp/terraform/issues/26087)…

main: new global option -chdir by apparentlymart · Pull Request #26087 · hashicorp/terraform

This new option is intended to address the previous inconsistencies where some older subcommands supported partially changing the target directory (where Terraform would use the new directory incon…

Matt Gowie avatar
Matt Gowie

Terraform 0.13.3 will start warning of an upcoming deprecation to the ansible, chef, and puppet provisioners — https://www.reddit.com/r/Terraform/comments/iq2z11/terraform_0133_will_include_a_deprecation_notice/

Terraform 0.13.3 will include a deprecation notice about vendor (tool-specific) provisioners

NB: I’m cross-posting from the HashiCorp community forum for visibility and feedback. Terraform is beginning a process to deprecate the built-in…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lol!! alright everyone, let’s gear up for terraform 0.14! =P

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, totally agree

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(unfortunately, we still have some places where we added >= 0.12, < 0.14) #FML

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyways, we have some better tooling to handle this and hopefully will be less painful with every iteration

Matt Gowie avatar
Matt Gowie

@Erik Osterman (Cloud Posse) Is the new standard to only pin >= 0.12?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, >= 0.x where x is the minimum supported version

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so maybe >= 0.13 if it uses for_each on modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but never < y

1
jose.amengual avatar
jose.amengual

Let’s DO IT!!!!!!

jose.amengual avatar
jose.amengual

lol

loren avatar

fyi, just saw that alpha releases are the new norm… i don’t think they are about to drop 0.14 so soon after 0.13 though… https://discuss.hashicorp.com/t/terraform-0-14-0-alpha-releases/14003

Terraform 0.14.0-alpha releases

The Terraform core team is excited to announce that we will be releasing early pre-release builds throughout the development of 0.14.0. Our hope is that this will encourage the community to try out in-progress features, and offer feedback to help guide the rest of our development. These builds will be released as the 0.14.0-alpha series, with the pre-release version including the date of release. For example, today’s release is 0.14.0-alpha20200910. Each release will include one or more change…

1
loren avatar

and here are details on the feature in this alpha release… https://discuss.hashicorp.com/t/terraform-0-14-concise-plan-diff-feedback-thread/14004

Terraform 0.14: concise plan diff feedback thread

We have a new concise diff renderer, released today in Terraform 0.14.0-alpha20200910. This post explains why we’ve taken this approach, how the rendering algorithm works, and asks for your feedback. You can try out this feature today: Download Terraform 0.14.0-alpha20200910 Review the changelog More on Terraform 0.14-alpha release Background Terraform 0.12 introduced a new plan file format and structural diff renderer, which was a significant change from 0.11. Most notably, for updated reso…

Igor avatar

@Erik Osterman (Cloud Posse) Removing that upper bound on TF version is going to pay dividends

1
Adam Blackwell avatar
Adam Blackwell

Apologies if this is a question that Google can answer: Are there any examples of using a kms secret with the terraform-aws-rds-cluster? We currently run an Ansible job to create databases, but want to enable all developers to be able to create their own databases and are not willing to put credentials in a statefile.

Adam Blackwell avatar
Adam Blackwell

More Info: 1: I found https://github.com/cloudposse/terraform-aws-rds-cluster/issues/26 from 2018, but thought there might be new information elsewhere. 2: We use Jenkins and Atlantis, so we could have Jenkins call the Ansible playbook and put our root passwords in Vault, but I’d like to make things simpler.

jose.amengual avatar
jose.amengual

I use Parameter store

jose.amengual avatar
jose.amengual

in Jenkins there is a Parameter Store plugin

jose.amengual avatar
jose.amengual

you can create the PS with terraform with a lifecycle rule to ignore value changes

jose.amengual avatar
jose.amengual
resource "aws_ssm_parameter" "dd_api_key" {
  name        = "/pepe-service/datadog/api_key"
  description = "API key for datadog"
  type        = "SecureString"
  value       = "APIKEY"
  tags        = var.tags
  lifecycle {
    ignore_changes = [
      value,
    ]
  }
}
jose.amengual avatar
jose.amengual

that is what Terraform knows

jose.amengual avatar
jose.amengual

but the value is injected by hand, jenkins, another tool

jose.amengual avatar
jose.amengual

chamber etc

2020-09-11

Sai Krishna avatar
Sai Krishna

I am using TF 0.13; terraform init works fine for me, but when I do a terraform plan it throws an error saying it cannot initialize the plugin. Anyone else seen this?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you paste what you’re seeing here?

Sai Krishna avatar
Sai Krishna

Error: Could not load plugin

Plugin reinitialization required. Please run “terraform init”.

Plugins are external binaries that Terraform uses to access and manipulate resources. The configuration provided requires plugins which can’t be located, don’t satisfy the version constraints, or are otherwise incompatible.

Terraform automatically discovers provider requirements from your configuration, including providers used in child modules. To see the requirements and constraints, run “terraform providers”.

Failed to instantiate provider “registry.terraform.io/-/aws” to obtain schema: unknown provider “registry.terraform.io/-/aws

msharma24 avatar
msharma24

Try TF_LOG=DEBUG terraform plan to see verbose output

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Wait, what’s that “/-/aws” there? How are you declaring the plugin?

Sai Krishna avatar
Sai Krishna

I am not declaring the plugin; I just have my provider as aws

Sai Krishna avatar
Sai Krishna
provider "aws" {
  region = "us-west-2"
  profile = var.awsProfile
}
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you look for any other references to aws in your code? Obviously exclude things like ‘resource “aws..“’

Sai Krishna avatar
Sai Krishna
2020/09/11 12:28:45 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
2020/09/11 12:28:45 [INFO] backend/local: starting Plan operation
2020-09-11T12:28:45.887-0400 [INFO]  plugin: configuring client automatic mTLS
2020-09-11T12:28:45.910-0400 [DEBUG] plugin: starting plugin: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 args=[.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5]
2020-09-11T12:28:45.922-0400 [DEBUG] plugin: plugin started: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 pid=30307
2020-09-11T12:28:45.922-0400 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5
2020-09-11T12:28:45.957-0400 [INFO]  plugin.terraform-provider-aws_v3.6.0_x5: configuring server automatic mTLS: timestamp=2020-09-11T12:28:45.957-0400
2020-09-11T12:28:45.988-0400 [DEBUG] plugin: using plugin: version=5
2020-09-11T12:28:45.989-0400 [DEBUG] plugin.terraform-provider-aws_v3.6.0_x5: plugin address: address=/var/folders/cs/fpp3k3zj61q0hd1pn41nmf9xl5ttrf/T/plugin474919102 network=unix timestamp=2020-09-11T12:28:45.988-0400
2020-09-11T12:28:46.212-0400 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2020-09-11T12:28:46.215-0400 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.6.0/darwin_amd64/terraform-provider-aws_v3.6.0_x5 pid=30307
2020-09-11T12:28:46.215-0400 [DEBUG] plugin: plugin exited

Sai Krishna avatar
Sai Krishna

I couldn’t find any… this worked like 10 mins back. I first had a conflict of TF versions, so I upgraded to TF 0.13.2 and ran the plan and it worked… made some changes and ran consecutive plans, which led to this issue

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you try this in a clean environment without state, without .terraform folder, etc.?

Sai Krishna avatar
Sai Krishna

Yea that helped… my state file was corrupted

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Makes sense

Justin Lai avatar
Justin Lai

Hi I’m working with the VPC Module but getting

Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
	status code: 400, request id: 8332210c-dcbe-4b6d-bde4-c8d37ce655c0

  on .terraform/modules/aws_infrastructure.vpc.eks_subnets/nat-gateway.tf line 67, in resource "aws_route" "default":
  67: resource "aws_route" "default" {

Wondering if there’s any debug help, or what I can do to get around this. I’ve already done terraform apply and this is a 2nd run

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you share your code for using the module? Feel free to replace any IDs etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is there a chance that the terraform state didn’t get persisted? … so it’s trying to recreate it

1
Chien Huey avatar
Chien Huey

Is there a way to pull the k8s auth token from the CloudPosse terraform-aws-eks-cluster module? If not, is there any objection to a PR to expose the token via the module’s outputs?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Chien Huey avatar
Chien Huey

specifically I’m looking to access the token attribute from the aws_eks_cluster_auth data resource here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/df8b991bef53fcab8f01c542cd1c3ccc6242b61c/auth.tf#L72

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think we should not do it; it’s not a concern of the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can get EKS kubeconfig from the cluster anytime

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Chien Huey what is your use-case?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah sorry,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we have the token already

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can add it to the outputs

Chien Huey avatar
Chien Huey

I am trying to use the helm provider functionality along with the CloudPosse EKS modules to bootstrap a cluster. As part of that bootstrap, I want to use the helm provider to install fluxcd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I thought about kubeconfig, which we can get from the cluster anytime w/o including it into the module

Eric Berg avatar
Eric Berg

@Erik Osterman (Cloud Posse), this touches on your recent statement – or maybe it’s in your module dev docs – that you do not expose secrets via TF module outputs. I get that, but there are plenty of use cases where it is really helpful.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if we expose that, we can add a variable to show it or not

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

default to not

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What if the module has a feature flag for the output? If the flag is set to true, the output contains that setting. If it’s set to false, then it’s null.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes that’s what I wanted to do

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the flag should default to false to not show the token in the output

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Chien Huey if you want to open a PR and add a feature flag called aws_eks_cluster_auth_token_output_enabled

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ll approve that

Chien Huey avatar
Chien Huey

ok great, I’ll do that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) any opinion on the variable name?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

aws_eks_cluster_auth_token_output_enabled or auth_token_output_enabled

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe the second

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, let’s do auth_token_output_enabled

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heads up @Chien Huey

Chien Huey avatar
Chien Huey

got it

1
Chien Huey avatar
Chien Huey

thanks everyone

1
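
The flag agreed on above could look roughly like this (a sketch only; the output name and the data source reference are assumptions, not the eventual PR):

```hcl
variable "auth_token_output_enabled" {
  type        = bool
  description = "Set to true to expose the EKS cluster auth token as an output"
  default     = false
}

output "eks_cluster_auth_token" {
  # Emits the token only when explicitly enabled; otherwise null.
  value     = var.auth_token_output_enabled ? data.aws_eks_cluster_auth.eks_cluster.token : null
  sensitive = true
}
```

Note that sensitive = true only masks the value in CLI output; the token still lands in the state of any root module that reads it.
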
Jon avatar

Hello, I’m trying to create a custom root module that is comprised of a bunch of public modules. In this case, a bunch of CloudPosse modules but am having some questions regarding the layout..

I’m trying to follow this https://github.com/cloudposse/terraform-root-modules but then stumbled on this https://github.com/cloudposse/reference-architectures/tree/0.13.0/templates.

So, is the repo layout supposed to look like what is shown in link #2, using the example Makefile from link #1?

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would not use our terraform-root-modules or the current state of reference-architectures. The root modules are on life-support for existing customers. Most of them are tf 0.11. Our reference architectures for 0.12+ are not yet public, but should be by the end of the year.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

They’re entirely redesigned from the ground up to work with terraform cloud

1
Jon avatar

no, I want to create “my own” root module that is ultimately just using a bunch of public modules

Jon avatar

but was using those links as a reference point on how to start

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, yes, that may be helpful then as an idea for how to organize things.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, happy to jump on a call with you anytime and get you unblocked. calendly.com/cloudposse

2020-09-14

Patrick Sodré avatar
Patrick Sodré

Hi Folks, I am trying to reference a module in a private repo. Is there a standard way to tell git+ssh to use the $USER variable for login instead of using “root” when using the geodesic shell?

pjaudiomv avatar
pjaudiomv

I do this on pipelines by adding a private key to the pipeline container and then adding the repo url to known hosts. then i can reference the module using ssh

1
Patrick Sodré avatar
Patrick Sodré

do your modules start with git://<fixed-pipeline-user>@<your private git>/…?

If so, in my case I’d still like users to be able to run geodesic locally

Patrick Sodré avatar
Patrick Sodré

never mind… I was thrown off by an error with the module url: all I needed was to reference the module correctly:

export TF_CLI_INIT_FROM_MODULE="git::<ssh://git@<private> git repo>/..."
1
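For reference, a module source using the SSH scheme generally follows this shape (the host, org, repo path, and ref below are all placeholders, not values from this thread):

```hcl
module "example" {
  # git::ssh:// sources authenticate with your SSH key rather than a
  # username/password, so no credentials end up in the code.
  # Host, path, subdirectory, and ref here are illustrative placeholders.
  source = "git::ssh://git@git.example.com/my-org/terraform-modules.git//networking/vpc?ref=v1.0.0"
}
```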
pjaudiomv avatar
pjaudiomv

yea I use mostly gitlab ci and my before_script on yamls end up looking like this

before_script:
  - mkdir ~/.ssh
  - chmod 700 ~/.ssh
  - cat "${GIT_PRIVATE_KEY}" >> ~/.ssh/id_rsa
  - ssh-keyscan git.domain.com >> ~/.ssh/known_hosts
  - chmod 600 ~/.ssh/*
  - terraform init
Patrick Sodré avatar
Patrick Sodré

Great! That works!

1
Yen Kuo avatar
Yen Kuo

Hi guys, is it possible to attach an existing security group to the EC2 instances instead of creating a new one in the terraform-aws-elastic-beanstalk-environment module?

pjaudiomv avatar
pjaudiomv

Yes, check out the module inputs. You add them as a list using allowed_security_groups
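As a minimal sketch (other required inputs and the pinned ref are omitted, and the security group resource name is hypothetical), passing existing security group IDs might look like:

```hcl
module "elastic_beanstalk_environment" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/<version>"
  # ... name, namespace, stage, VPC/subnet inputs, etc. ...

  # Existing security group IDs, as a list; input name taken from this thread.
  # Note (per the discussion below): the module still creates its own default
  # security group, with ingress from these groups.
  allowed_security_groups = [aws_security_group.existing.id]
}
```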

Yen Kuo avatar
Yen Kuo

@pjaudiomv thanks for your reply, but it seems like that just allows the security groups you provide to access the created default security group…?

pjaudiomv avatar
pjaudiomv

ah ok, so it creates a default security group but sets the ingress to the provided security groups. There is no configurable option to skip creating that group

Yen Kuo avatar
Yen Kuo

cool thanks

Matt Gowie avatar
Matt Gowie

Anyone have a recommendation for a blog post / video / example repo that shows how to do Multiple Accounts in AWS well using Terraform? IMHO it’s a complicated topic and I’ve blundered through it lightly before, so I’m looking to avoid that going forward.

1
pjaudiomv avatar
pjaudiomv

my gitlab yaml ends up looking like this

---
image:
  name: hashicorp/terraform:0.13.1
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'


before_script:
  - mkdir ~/.ssh
  - chmod 700 ~/.ssh
  - cat "${GITLAB_PRIVATE_KEY}" >> ~/.ssh/id_rsa
  - ssh-keyscan git.domain.com >> ~/.ssh/known_hosts
  - chmod 600 ~/.ssh/*

stages:
  - plan
  - apply

terraform_plan_only:
  stage: plan
  resource_group: terraform-lock
  script:
    - |
      for ACCT in $AWS_ACCT_LIST
      do
        rm -rf .terraform
        echo $ACCT
        eval "ACCT_FILE=\$$ACCT"
        cat "${ACCT_FILE}" > "$(pwd)/.env"
        source "$(pwd)/.env"
        export $(cat $(pwd)/.env | xargs)

        terraform init \
          -backend-config=bucket="$S3_BUCKET" \
          -backend-config=dynamodb_table="$LOCK_TABLE" \
          -backend-config=region="$AWS_REGION"

        terraform plan
      done

  only:
    - merge_requests

terraform_apply:
  stage: apply
  retry: 1
  resource_group: terraform-lock
  script:
    - |
      for ACCT in $AWS_ACCT_LIST
      do
        rm -rf .terraform
        echo $ACCT
        eval "ACCT_FILE=\$$ACCT"
        cat "${ACCT_FILE}" > "$(pwd)/.env"
        source "$(pwd)/.env"
        export $(cat $(pwd)/.env | xargs)

        terraform init \
          -backend-config=bucket="$S3_BUCKET" \
          -backend-config=dynamodb_table="$LOCK_TABLE" \
          -backend-config=region="$AWS_REGION"

        terraform plan -out=plan.plan
        terraform apply plan.plan
      done
  only:
    - master
pjaudiomv avatar
pjaudiomv

I can paste what those env vars look like in a min

pjaudiomv avatar
pjaudiomv

AWS_ACCT_LIST is a space-separated list of account aliases: staging-2343525 prod-25345634 dev-234346

each one of those has a corresponding env var that’s a file

ex staging-2343525

  AWS_ACCESS_KEY_ID=AKGFHGCHGKAIZEXBLEG5
  AWS_SECRET_ACCESS_KEY=SECRET
  AWS_REGION=us-east-1
  LOCK_TABLE=tfstate-lock-2343525
  S3_BUCKET=tfstate-2343525
pjaudiomv avatar
pjaudiomv

this could be a horrible example, but it’s my experience and works for my use case

Matt Gowie avatar
Matt Gowie

I get what you’re trying to show me; that stuff I fully understand. The more interesting part I’m looking to understand from folks is the IAM assumable roles / user delegation setup through Terraform. So maybe my question is not clear enough

pjaudiomv avatar
pjaudiomv

pjaudiomv avatar
pjaudiomv

ah ok yes that is a much larger and different discussion

Matt Gowie avatar
Matt Gowie

Thank you though dude — sorry, I didn’t express that I do appreciate the help!

pjaudiomv avatar
pjaudiomv

yea I use a bunch of cross-account roles and a separate identity provider

1
pjaudiomv avatar
pjaudiomv

np

zeid.derhally avatar
zeid.derhally

Are you using AWS organization?

zeid.derhally avatar
zeid.derhally

It’s a lot easier to create accounts under AWS Organization because it will automatically create the role OrganizationAccountAccessRole. You can assume that role and finish configuring the account
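A minimal sketch of what assuming that role looks like in a provider block (the account ID and region are placeholders, not values from this thread):

```hcl
provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    # OrganizationAccountAccessRole is created automatically in accounts
    # provisioned through AWS Organizations; the account ID is a placeholder.
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}
```

Resources and modules that should run in the member account can then be pointed at this provider with `providers = { aws = aws.member }`.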

Matt Gowie avatar
Matt Gowie

@zeid.derhally I am using Organizations and I do create the account through that process. Good to know that is the smartest path. I will add that to my list of “Many steps to accomplish adding a new account / environment”.

2020-09-15

Martin Canovas avatar
Martin Canovas

Hi guys, I’m getting the error below when using the terraform-aws-ec2-instance module:

Error: Your query returned no results. Please change your search criteria and try again.

  on .terraform/modules/replicaSet40/main.tf line 64, in data "aws_ami" "info":
  64: data "aws_ami" "info" {
gusse avatar

you should probably post the messages in the thread to keep the chat a bit cleaner..

Looks to me like you are looking up an AMI ID from var.ami_rhel77 but have defined an AMI ID for var.ami_omserver40

Martin Canovas avatar
Martin Canovas

sorry, I pasted the wrong variable but I do have

variable "ami_rhel77" {
  default = "ami-0170fc126935d44c3"
}
Martin Canovas avatar
Martin Canovas

and it’s in the correct AWS region

gusse avatar

assuming the AMI ID and the owner account’s ID are correct, I would double-check that you have shared the AMI with the account where you execute the terraform code

Martin Canovas avatar
Martin Canovas

the “ami-0170fc126935d44c3” is a RHEL 7.7 public image available to all and shared by Red Hat

gusse avatar

oh, in that case I’m not sure what might be wrong.. Maybe the owner account ID is not their account’s ID? I don’t use RHEL, but a quick google suggests that they share AMIs from 309956199498

Martin Canovas avatar
Martin Canovas

That was it! I had the wrong owner_id . Thanks a lot!

1
Martin Canovas avatar
Martin Canovas

I still have the same error but for the eks module, which has this local variable:

eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
Martin Canovas avatar
Martin Canovas

and I have this

variable "kubernetes_version" {
  default = "1.18"
}
gusse avatar

I think the latest EKS version is 1.17

Martin Canovas avatar
Martin Canovas

let me try it

Martin Canovas avatar
Martin Canovas

awesome! that worked. Thanks again! I didn’t know EKS is a little behind the versions on the kubernetes.io site.

Martin Canovas avatar
Martin Canovas

and here is my Terraform code:

module "replicaSet40" {
  source                        = "git::https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=tags/0.24.0"
  ssh_key_pair                  = var.ssh_key_pair
  instance_type                 = var.ec2_replicaSet
  ami                           = var.ami_rhel77
  ami_owner                     = var.ami_owner
  vpc_id                        = module.vpc.vpc_id
  root_volume_size              = 10
  assign_eip_address            = false
  associate_public_ip_address   = false
  security_groups               = [aws_security_group.sg_replicaSet.id]
  subnet                        = module.subnets.private_subnet_ids[0]
  name                          = "om-replicaSet40"
  namespace                     = var.namespace
  stage                         = var.stage
  tags                          = var.instance_tags
}
Martin Canovas avatar
Martin Canovas
variable "ami_owner" {
  default = "655848829540"
}

variable "ami_omserver40" {
  default = "ami-00916221e415292ed"
}
Martin Canovas avatar
Martin Canovas

I do find my ami when running aws ec2 describe-images --owners self

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

AMis are region specific. Are you using the same region?

Martin Canovas avatar
Martin Canovas

Thanks Andriy for replying. Yes, I’m using the AMI for the same region. This already has been resolved. The owner_id was incorrect.

2

2020-09-16

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
09:45:13 AM

HCS Azure Marketplace Integration Affected Sep 16, 09:32 UTC Identified - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.

If you have questions or are experiencing difficulties with this service please reach out to your customer support team.

IMPACT: Creating HashiCorp Consul Service on Azure clusters may fail.

We apologize for this…

HCS Azure Marketplace Integration Affected

HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
02:15:19 PM

HCS Azure Marketplace Integration Affected Sep 16, 14:06 UTC Update - We are confident we have identified an issue externally with Azure Marketplace Application offerings and are working with Microsoft support to escalate and resolve the issue.

We apologize for this disruption in service and appreciate your patience.Sep 16, 09:32 UTC Identified - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with…

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
04:25:22 PM

HCS Azure Marketplace Integration Affected Sep 16, 16:16 UTC Monitoring - A fix has been implemented by our Azure partners and we are seeing recovery from our tests. We will continue to monitor the environment until we are sure the incident is resolved.

HashiCorp Cloud TeamSep 16, 14:06 UTC Update - We are confident we have identified an issue externally with Azure Marketplace Application offerings and are working with Microsoft support to escalate and resolve the issue.

We apologize for this disruption in service and appreciate…

HCS Azure Marketplace Integration Affected

HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
04:45:26 PM

HCS Azure Marketplace Integration Affected Sep 16, 16:35 UTC Resolved - We are considering this incident resolved. If you see further issues please contact HashiCorp Support.

We apologize for this disruption in service and appreciate your patience.

Hashicorp Cloud TeamSep 16, 16:16 UTC Monitoring - A fix has been implemented by our Azure partners and we are seeing recovery from our tests. We will continue to monitor the environment until we are sure the incident is resolved.

HashiCorp Cloud TeamSep 16, 14:06 UTC Update - We are…

MrAtheist avatar
MrAtheist

Can someone chime in on the pros and cons of using terraform “workspace”? I’m trying to see how to structure TF for multiple environments and most of the “advanced” gurus prefer to avoid it. This is the one im following and I’m so confused as a beginner newb 

https://www.oreilly.com/library/view/terraform-up-and/9781491977071/ch04.html

Terraform: Up and Running

Chapter 4. How to Create Reusable Infrastructure with Terraform Modules At the end of Chapter 3, you had deployed the architecture shown in Figure 4-1. Figure 4-1. A … - Selection from Terraform: Up and Running [Book]

loren avatar

i started out using workspaces, but felt they were too implicit/invisible. the explicitness of a directory hierarchy made more sense for our team/usage

Terraform: Up and Running

Chapter 4. How to Create Reusable Infrastructure with Terraform Modules At the end of Chapter 3, you had deployed the architecture shown in Figure 4-1. Figure 4-1. A … - Selection from Terraform: Up and Running [Book]

MrAtheist avatar
MrAtheist

sure, but isn’t that duplicating a whole bunch of IaC just for the sake of it? or do u have the “core” modules under ./module and reference them under production/staging/test/whatever?

loren avatar

exactly, we use a core module over and over across multiple accounts and envs

Zach avatar

modules in a different repo and pinned to your compositions works great

loren avatar

technically, we use terragrunt to manage the workflow, but that’s not strictly necessary

Zach avatar

we just use 1 mono-repo for the modules rather than 1 repo per module (small team, too much to deal with if we broke it further)

MrAtheist avatar
MrAtheist

so if i understand the modules correctly, the referencing side still has the vars.tf duplicated for every module u want to utilize?

staging/main.tf -- reference ../modules/whatever.tf
staging/vars.tf -- vars for ../modules/whatever.tf
test/main.tf ... same as above
test/vars.tf
production/...
loren avatar

you have options… you can expose them on the cli, or code the values in main.tf, or use a wrapper like terragrunt to use the core module directly (something like terraform’s -from-module)
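The directory-per-environment pattern being discussed can be sketched like this (module path, variable names, and values are illustrative, not from this thread):

```hcl
# staging/main.tf: each environment directory is a thin wrapper that
# passes its env-specific values into one shared "core" module, so the
# real infrastructure code lives in a single place.
module "core" {
  source = "../modules/core"

  environment    = "staging"  # prod/main.tf would pass "prod" here
  instance_type  = "t3.small"
  instance_count = 1
}
```

Each environment also carries its own backend config, so state stays isolated per directory.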

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Will answer this in next week’s #office-hours

aaratn avatar

I am not a big fan of terraform workspaces because you can’t keep the state files in different s3 buckets. I prefer terragrunt over terraform workspaces

Jurgen avatar

I think I was lucky and only started with TF 0.12+; workspaces are the best, both in s3 and TF Cloud. I just went for parameterised stacks and, depending on the workspace, select a key in a map (using the built-in terraform.workspace variable), and that is it. Later on I extended this so that every env has its own file. I saw a lot of examples like this:

https://take.ms/M7qwC

and just didn’t get it, this is not DRY. I don’t care about ‘stage’ or ‘prod’. I just have environments with different settings.
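The workspace-keyed map being described might look something like this (variable names, settings, and values are illustrative):

```hcl
# Per-environment settings keyed by the active workspace, so one set of
# parameterised .tf files serves every environment.
locals {
  settings = {
    staging = { instance_type = "t3.small", instance_count = 1 }
    prod    = { instance_type = "m5.large", instance_count = 3 }
  }

  # terraform.workspace is the built-in name of the current workspace.
  env = local.settings[terraform.workspace]
}

resource "aws_instance" "app" {
  count         = local.env.instance_count
  instance_type = local.env.instance_type
  ami           = var.ami_id # assumed variable for illustration
}
```

Running `terraform workspace select prod` before plan/apply then picks up the prod settings with no duplicated code.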

1
Release notes from terraform avatar
Release notes from terraform
08:14:19 PM

v0.13.3 0.13.3 (September 16, 2020) BUG FIXES: build: fix crash with terraform binary on openBSD (#26250) core: prevent create_before_destroy cycles by not connecting module close nodes to resource instance destroy nodes (#26186)…

vendor: upgrade go-userdirs dependency to fix crash [v0.13 backport] by mildwonkey · Pull Request #26250 · hashicorp/terraform

There are two commits here, since go mod tidy had a few things to clean up before I upgraded the dependency, and I thought it might be easier to review as separate commits.

don't connect module closers to destroy nodes by jbardin · Pull Request #26186 · hashicorp/terraform

One of the tenets of the graph transformations is that resource destroy nodes can only be ordered relative to other resources, and can't be referenced directly. This was broken by the module cl…

Matt Gowie avatar
Matt Gowie
10:21:00 PM

Hey folks — would love some input into a module dependency issue I’m having using the CP terraform-aws-elasticsearch module.

I have my root project which consumes the above module. That module takes in an enabled flag var and a dns_zone_id var. They are used together in the below expression to determine if the ES module should create hostnames for the ES cluster:

module "domain_hostname" {
  source  = "git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.7.0"
  enabled   = var.enabled && var.dns_zone_id != "" ? 1 : 0
  ...
}

This is invoked twice for two different hostnames (kibana endpoint and normal ES endpoint).

Now my consumption of the ES module doesn’t do anything special AFAICT. I do pass in dns_zone_id as a reference to another module’s output: dns_zone_id = module.subdomain.zone_id

I previously thought the module in module usage pattern was causing the below issue (screenshotted) because that was just too deep of a dependency tree for Terraform to walk (or something along those lines), but I’ve just now upgraded to Terraform 0.13 for this project and I’m using the new depends_on = [module.subdomain]. Yet, I’m still getting this same error as I was on 0.12:

Matt Gowie avatar
Matt Gowie

Similar issue from the project issues itself, but back in 2018: https://github.com/cloudposse/terraform-aws-elasticsearch/issues/13

I was previously solving this via a two phase apply, but I was really hoping the upgrade to 0.13 would allow me to get around that hack.

module requires two phase apply due to `value of count cannot be computed` · Issue #13 · cloudposse/terraform-aws-elasticsearch

what While trying to test a new module, which depends on this one, I added example usage: module "vpc" { source = "git://github.com/cloudposse/terraform-aws-vpc.git?ref=master"…

loren avatar

if the zone specified to var.dns_zone_id is being created in the same apply, then this will happen

loren avatar

there is no way around that limitation in terraform. just have to remove that condition from the expression

Matt Gowie avatar
Matt Gowie

Yeah, that sounds like my sticking point… now is there no way to get around that dependency snag even with the new module depends_on?

Matt Gowie avatar
Matt Gowie


depends_on - Creates explicit dependencies between the entire module and the listed targets. This will delay the final evaluation of the module, and any sub-modules, until after the dependencies have been applied. Modules have the same dependency resolution behavior as defined for managed resources.

Resources - Configuration Language - Terraform by HashiCorp

Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.

Matt Gowie avatar
Matt Gowie

Like why does that not solve the problem. If I’m getting the dns_zone_id value from module.subdomain and I specify “Hey wait for subdomain to be applied” via depends_on… I was assuming that was the whole point of depends_on. But maybe I’m misunderstanding that.

loren avatar

it shifts the problem to the user calling the module, but no it does not remove the limitation

loren avatar

oh, depends_on is not count

Matt Gowie avatar
Matt Gowie

Argh. This is frustrating.

roth.andy avatar
roth.andy

@Matt Gowie the depends_on in 0.13 lets you have modules depend on other things, but it doesn’t fix the count issue. Count needs to be calculated before terraform starts applying anything, which is why the issue is still appearing

loren avatar

i haven’t tried using the two together, but i don’t think it will matter… terraform understands the directed graph, so because of the var reference, it already knows that it needs to resolve one before the other

Matt Gowie avatar
Matt Gowie

I wonder: if I instead update the module to accept dns_zone_name and then use data.aws_route53_zone to look up the zone to find the zone_id, would that do it? I can statically pass the dns_zone_name.

loren avatar

the count and for_each expressions all must resolve right at the beginning of the plan. they can depend on a data source, but only as long as that data source does not depend on a resource

Matt Gowie avatar
Matt Gowie

Okay — Takeaway for myself: count is always calculated during the plan and needs to resolve, regardless of depends_on. TIL.

2
loren avatar

yeah, i’d still recommend removing that condition from the expression and rethinking the approach

Matt Gowie avatar
Matt Gowie

Maybe I’ll update the module to add an explicit flag for the hostname resources instead of having it rely on a calculated enabled + dns_zone_id != "" condition
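A sketch of that workaround: the caller sets a plain boolean that is a literal known at plan time, instead of the module deriving it from another module's (computed) output. Variable and default names here are illustrative:

```hcl
# The caller passes a literal true/false, which Terraform can resolve
# when evaluating count — unlike `var.dns_zone_id != ""`, whose value
# may come from a resource that doesn't exist yet.
variable "hostname_enabled" {
  type    = bool
  default = false
}

module "domain_hostname" {
  source  = "git::https://github.com/cloudposse/terraform-aws-route53-cluster-hostname.git?ref=tags/0.7.0"
  enabled = var.hostname_enabled
  # dns_zone_id can still be a computed value; only count/for_each
  # inputs must be known at plan time.
  # ...
}
```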

loren avatar

that’s been my workaround also

Matt Gowie avatar
Matt Gowie

I’ll shoot for that for now. Thanks for the insight gents!

2020-09-17

Martin Canovas avatar
Martin Canovas

Hey folks, after upgrading my Terraform from 0.12 to 0.13, I’m unable to run terraform init due to terraform failing to find provider packages.

Martin Canovas avatar
Martin Canovas
terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/template v2.1.2
- Using previously-installed hashicorp/kubernetes v1.13.2
- Using previously-installed hashicorp/random v2.3.0
- Using previously-installed mongodb/mongodbatlas v0.6.4
- Using previously-installed hashicorp/null v2.1.2
- Using previously-installed hashicorp/local v1.4.0
- Finding hashicorp/aws versions matching ">= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints >= 3.0.*, >= 2.0.*, >=
2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0,
~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*
Martin Canovas avatar
Martin Canovas
terraform providers --version
Terraform v0.13.3
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.1.2
+ provider registry.terraform.io/mongodb/mongodbatlas v0.6.4
Martin Canovas avatar
Martin Canovas
cat versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = ">= 0.6.4"
    }
  }
  required_version = ">= 0.13"
}
Martin Canovas avatar
Martin Canovas
cat provider.tf
# Configure the AWS Provider
provider "aws" {
  region  = "us-east-2"
  profile = "default"
}

# Configure the MongoDB Atlas Provider
provider "mongodbatlas" {
}
pjaudiomv avatar
pjaudiomv

add a version constraint to your aws provider block

pjaudiomv avatar
pjaudiomv
provider aws {
  version = "~> 3.0"
  region  = "us-east-2"
  profile = "default"
}
Martin Canovas avatar
Martin Canovas

I added the version constraint to my aws provider block, deleted the .terraform directory, and ran terraform init again. I still get the same errors.

Martin Canovas avatar
Martin Canovas

Is it because the vpc module has this versions.tf file:

terraform {
  required_version = ">= 0.12.0, < 0.14.0"

  required_providers {
    aws      = ">= 2.0, < 4.0"
    template = "~> 2.0"
    local    = "~> 1.2"
    null     = "~> 2.0"
  }
}
loren avatar

in particular, the commands using state replace-provider

Martin Canovas avatar
Martin Canovas

so, I had already read that documentation and also ran state replace-provider. See the output of my terraform providers :

Martin Canovas avatar
Martin Canovas
➜ terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] >= 3.0.*, ~> 3.0
├── provider[registry.terraform.io/mongodb/mongodbatlas] >= 0.6.4
├── module.subnets
│   ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│   ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│   ├── module.this
│   ├── module.utils
│   │   ├── provider[registry.terraform.io/hashicorp/local] >= 1.2.*
│   │   └── module.this
│   ├── module.nat_instance_label
│   ├── module.nat_label
│   ├── module.private_label
│   └── module.public_label
├── module.eks_cluster
│   ├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
│   ├── provider[registry.terraform.io/hashicorp/kubernetes] ~> 1.11
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*, < 4.0.*
│   ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│   ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│   ├── module.label
│   └── module.this
├── module.omserver40
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   └── module.this
├── module.omserver42
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   └── module.this
├── module.cm-ubuntu16
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   └── module.this
├── module.eks_node_group
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.0.*
│   ├── provider[registry.terraform.io/hashicorp/template] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/local] >= 1.3.*
│   ├── provider[registry.terraform.io/hashicorp/random] >= 2.0.*
│   ├── module.label
│   └── module.this
├── module.replicaSet40
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   └── module.this
├── module.replicaSet42
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   └── module.this
├── module.vpc
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*, < 4.0.*
│   ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│   ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│   ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│   ├── module.label
│   └── module.this
├── module.alb_om40
│   ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│   ├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
│   ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│   ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
│   ├── module.access_logs
│       ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│       ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│       ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│       ├── module.label
│       └── module.s3_bucket
│           ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
│           ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
│           ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
│           └── module.default_label
│   ├── module.default_label
│   └── module.default_target_group_label
├── module.bastion
│   ├── provider[registry.terraform.io/hashicorp/null] >= 2.0.*
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.0.*
│   └── module.this
└── module.alb_om42
    ├── provider[registry.terraform.io/hashicorp/local] ~> 1.3
    ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
    ├── provider[registry.terraform.io/hashicorp/template] ~> 2.0
    ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
    ├── module.access_logs
        ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
        ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
        ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
        ├── module.s3_bucket
            ├── provider[registry.terraform.io/hashicorp/null] ~> 2.0
            ├── provider[registry.terraform.io/hashicorp/aws] ~> 2.0
            ├── provider[registry.terraform.io/hashicorp/local] ~> 1.2
            └── module.default_label
        └── module.label
    ├── module.default_label
    └── module.default_target_group_label

Providers required by state:

    provider[registry.terraform.io/hashicorp/kubernetes]

    provider[registry.terraform.io/hashicorp/aws]

    provider[registry.terraform.io/hashicorp/null]

    provider[registry.terraform.io/hashicorp/template]

    provider[registry.terraform.io/mongodb/mongodbatlas]
Martin Canovas avatar
Martin Canovas
Martin Canovas avatar
Martin Canovas

I also tried to rule out the state files by copying all my terraform code to a new directory and running terraform init. Still got the same error.

loren avatar

yeah, i think you have some kind of irreconcilable version constraint?

- Finding hashicorp/aws versions matching ">= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0, >= 2.0.*, < 4.0.*, >= 3.0.*, >= 2.0.*, >= 2.0.*, >= 2.0.*"...
loren avatar

in particular:

~> 2.0
1
loren avatar

in ├── module.alb_om40

loren avatar

and module.access_logs

loren avatar

and module.s3_bucket

Martin Canovas avatar
Martin Canovas

those are all Cloud Posse modules

pjaudiomv avatar
pjaudiomv

What version of them are you targeting? It may be that you want to target a newer version

Martin Canovas avatar
Martin Canovas

I point the modules source to the Github repo of Cloud Posse using the latest tag

1
pjaudiomv avatar
pjaudiomv

Got it

loren avatar

i believe they’ve become amenable to reducing the restriction to >= ; open a pr

Martin Canovas avatar
Martin Canovas

let me show you one example from the ALB module tag 0.17.0

Martin Canovas avatar
Martin Canovas
cat versions.tf
terraform {
  required_version = ">= 0.12.0, < 0.14.0"

  required_providers {
    aws      = "~> 2.0"
    template = "~> 2.0"
    null     = "~> 2.0"
    local    = "~> 1.3"
  }
}
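The PR being suggested would loosen that pin to an open-ended lower bound, along these lines (a sketch; the exact constraints a maintainer accepts may differ):

```hcl
terraform {
  required_version = ">= 0.12.0"

  required_providers {
    # ">= 2.0" allows the 3.x AWS provider, whereas "~> 2.0"
    # caps the provider at < 3.0 and forces the conflict above.
    aws      = ">= 2.0"
    template = ">= 2.0"
    null     = ">= 2.0"
    local    = ">= 1.3"
  }
}
```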
pjaudiomv avatar
pjaudiomv

Ah it may be that you need to pin your provider to 2 then

loren avatar

or fork the module and change the version constraint

1
loren avatar

(and open a pr )

Matt Gowie avatar
Matt Gowie

Yeah, if you’re trying to upgrade to AWS provider 3 then you’ll need to deal with those ~> 2.0 blocks.

If you submit PRs changing those to >= 2.0 and post them in #pr-reviews then we can check em out and try to get them merged if they pass tests. The new provider version did introduce a bunch of small changes that have broken tests though, so it’s not a totally pain-free module upgrade all of the time.

Martin Canovas avatar
Martin Canovas

sure, give me some time as I’m pretty busy with work but will try to create this PR today. Thanks for all the help

1
Martin Canovas avatar
Martin Canovas

Well, I finished work and started looking into this PR. I don’t know how I would go about it, because the terraform-aws-alb module calls the terraform-aws-lb-s3-bucket module, which calls the terraform-aws-s3-log-storage module. They all use ~> 2.0

loren avatar

update the deepest module first, publish new version, then move up the stack

Martin Canovas avatar
Martin Canovas

Done. PRs created.

1
loren avatar
minamijoyo/tfmigrate

A Terraform state migration tool for GitOps. Contribute to minamijoyo/tfmigrate development by creating an account on GitHub.

6
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is it the part about using terraform against localstack that stands out? (which is pretty interesting)

loren avatar

for me, the ability to define state moves in code, and plan/apply them is really slick. i’m always renaming and rethinking things, and that often translates to a lot of state manipulation

srimarakani avatar
srimarakani

Hello All, quick question regarding terraform… how can I use terraform to clone an existing EMR cluster?

jose.amengual avatar
jose.amengual

you can not do it in one go

jose.amengual avatar
jose.amengual

you will have to create a TF project, import the resource with terraform import, and represent that resource in code by looking at the state file

jose.amengual avatar
jose.amengual

BUT

jose.amengual avatar
jose.amengual

you can use this

jose.amengual avatar
jose.amengual
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer

jose.amengual avatar
jose.amengual

which will do it for you

jose.amengual avatar
jose.amengual

it is a pretty good tool

jose.amengual avatar
jose.amengual

and you need to RTFM a bit too

1

2020-09-18

Nitin Prabhu avatar
Nitin Prabhu

:wave: Hi guys, this is Nitin here and I have just come across this slack channel. If this is not the right channel then please do let me know. As part of provisioning an EKS cluster on AWS we are exploring terraform-aws-eks-cluster https://github.com/cloudposse/terraform-aws-eks-cluster What is the advantage of using the Cloud Posse terraform module over the community-published terraform module to provision an EKS cluster on AWS? Thanks a lot

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Peter Huynh avatar
Peter Huynh

Personally, I find terraform is a fast-moving river that changes rapidly. This is one of the reasons why I am staying away from 3rd-party modules for now. (Again, just reiterating that this is my personal opinion only.)

But if there is a 3rd-party module that does what I need, then I’d prefer to adopt that module from a well-established source like CloudPosse over a random community module in the registry.

The guys here are very knowledgeable in terraform, and so I’d trust them to keep things up-to-date, well tested and compatible with the recent releases of terraform.

1
Nitin Prabhu avatar
Nitin Prabhu

thanks Peter for your inputs

1
roth.andy avatar
roth.andy

Speaking from personal experience, the other terraform module (terraform-aws-modules/terraform-aws-eks) is really unstable. They commit breaking changes all the time. It was a nightmare to use. The CloudPosse one has been WAY more stable

1
Nitin Prabhu avatar
Nitin Prabhu

thanks Andrew

roth.andy avatar
roth.andy
Is the complexity of this module getting too high? · Issue #635 · terraform-aws-modules/terraform-aws-eks

A general question for users and contributors of this module My feeling is that complexity getting too high and quality is suffering somewhat. We are squeezing a lot of features in a single modul…

1
Nitin Prabhu avatar
Nitin Prabhu

@roth.andy thanks for the pointers. I think if you refer to a tag rather than master you will always get a stable module

Nitin Prabhu avatar
Nitin Prabhu

even in cloud posse tf module recommendation is to use a tag rather than master

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi guys

wave1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so CloudPosse modules are also community-driven and open-sourced, and we accept/review PRs all the time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and for the EKS modules, we support all of them and update all the time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and they are used in production for many clients

Nitin Prabhu avatar
Nitin Prabhu

thanks @Andriy Knysh (Cloud Posse) for the inputs. Does cloudposse module give us the ability to use the tf deployment mechanism ?

Nitin Prabhu avatar
Nitin Prabhu

deployment mechanism = kubernetes_deployment

Nitin Prabhu avatar
Nitin Prabhu

so basically if I use cloud posse eks module to provision eks cluster can we then somehow deploy helm and flux on the eks cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we deploy EKS clusters with terraform, and all Kubernetes releases (system and apps) using helmfiles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

works great

Nitin Prabhu avatar
Nitin Prabhu

thanks Andriy I will look into helmfile.

Nitin Prabhu avatar
Nitin Prabhu

can it also be used to deploy flux ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

flux has helm charts (and operator)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can always create a helmfile to use the chart
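
As an illustration of that, a minimal helmfile for installing Flux from its chart might look like the sketch below. The repository URL, chart name, and git value are assumptions for illustration, not taken from the thread; check the Flux docs for current values.

```yaml
repositories:
  - name: fluxcd
    url: https://charts.fluxcd.io   # assumed chart repo

releases:
  - name: flux
    namespace: flux-system
    chart: fluxcd/flux
    values:
      - git:
          # hypothetical manifests repo that flux will sync from
          url: git@github.com:example-org/k8s-manifests
```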

Nitin Prabhu avatar
Nitin Prabhu

ok so I just need flux then and no helmfile as flux can do what helmfile will do

Nitin Prabhu avatar
Nitin Prabhu

but how do I install flux with cloud posse?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

helmfile is to provision flux itself into a k8s cluster

Nitin Prabhu avatar
Nitin Prabhu

ahh cool got you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use helm for that

Nitin Prabhu avatar
Nitin Prabhu

thanks a lot Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but helmfile adds many useful features so it’s easier to provision than using just helm

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(one other HUGE differentiator is 100% of our terraform 0.12+ modules have integration tests with terratest, which means we don’t merge a PR until it passes at least some minimal smoke tests)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes. and the EKS modules have not just a smoke test; in the tests we actually wait for the worker nodes to join the cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and for the Fargate Profile module, we even deploy a Kubernetes release so EKS would actually create a Fargate profile for it https://github.com/cloudposse/terraform-aws-eks-fargate-profile/blob/master/test/src/examples_complete_test.go#L173

cloudposse/terraform-aws-eks-fargate-profile

Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile

Nitin Prabhu avatar
Nitin Prabhu

thanks for your inputs guys

Nitin Prabhu avatar
Nitin Prabhu

I had one question regarding terraform helmfile

Nitin Prabhu avatar
Nitin Prabhu

can this provider be used in production ?

Nitin Prabhu avatar
Nitin Prabhu

or is there any other way to deploy charts using terraform + helmfile ?

Nitin Prabhu avatar
Nitin Prabhu

Appreciate all your inputs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re not yet using it in production, but have plans to do that probably in the next quarter.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@roth.andy has been using it lately, not sure how far it went

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mumoshu any one you know using it in production?

roth.andy avatar
roth.andy

not great

roth.andy avatar
roth.andy

I’m currently using local-exec

roth.andy avatar
roth.andy

I’m going to take another stab at it later though, I just couldn’t spend any more time on it

1
mumoshu avatar
mumoshu

@Erik Osterman (Cloud Posse) they’re not in the sweetops slack, but a company partnering with me is testing it ahead of going to production.

2
mumoshu avatar
mumoshu

their goal is to complete a “cluster canary deployment” in a single terraform apply run. they seem to have managed it :)

mumoshu avatar
mumoshu

andrew, @Andrew Nazarov, and many others have contributed to test the provider (thanks!). and i believe all the fundamental issues are already fixed.

i have only two TODOs at this point. it requires helmfile v0.128.1 or greater, so i would like to add a version check so that the user is notified to upgrade the helmfile binary if necessary.

also importing existing helmfile-managed releases is not straightforward. i’m going to implement terraform import and add some guidance for that. https://github.com/mumoshu/terraform-provider-helmfile/issues/33

Document any information related to importing an existing environment · Issue #33 · mumoshu/terraform-provider-helmfile

If I’ve been using helmfile as a standalone tool, is there a way to smoothly transition ownership of those charts while using this plugin?

roth.andy avatar
roth.andy

Sweeet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


also importing existing helmfile-managed releases is not straightforward. i’m going to implement terraform import and add some guidance for that.
Ahh good to know! that will be important when we get to implementing it.

Nitin Prabhu avatar
Nitin Prabhu

thanks guys that really helps us. Will let you know once I present the findings to our team

mumoshu avatar
mumoshu

fyi the eksctl provider has support for terraform import now

Jimmie Butler avatar
Jimmie Butler

Anyone been able to get https://github.com/cloudposse/terraform-aws-ecs-web-app to work without codepipeline/git enabled?

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

Jimmie Butler avatar
Jimmie Butler
"Error: If `individual` is false, `organization` is required." unless known repo_owner org is supplied · Issue #63 · cloudposse/terraform-aws-ecs-web-app

Found a bug? Maybe our Slack Community can help. Describe the Bug Here is my terraform code module "ecs_web_app" { source = "git://github.com/cloudposse/terraform-aws-ecs-web-…

Jimmie Butler avatar
Jimmie Butler

And after specifying repo_owner I run into

Error: If `anonymous` is false, `token` is required.

  on .terraform/modules/athys.ecs_codepipeline.github_webhooks/main.tf line 1, in provider "github":
   1: provider "github" {
Jimmie Butler avatar
Jimmie Butler
codepipeline_enabled = false
webhook_enabled      = false
repo_owner           = "xxx" 
RB avatar

I switched to using alb service task and the alb ingress modules instead of that one

Jimmie Butler avatar
Jimmie Butler

Will give that a shot, thanks

Jimmie Butler avatar
Jimmie Butler

@RB Any idea what would cause

Error: InvalidParameterException: The new ARN and resource ID format must be enabled to add tags to the service. Opt in to the new format and try again. "athys"

  on .terraform/modules/ecs_alb_service_task/main.tf line 355, in resource "aws_ecs_service" "default":
 355: resource "aws_ecs_service" "default" {
Jimmie Butler avatar
Jimmie Butler

Actually think I found a var to fix that issue. Still don’t really understand the cause, as all resources are brand new (so why use old arns?)

RB avatar

did you tick the boxes in your aws account to opt into the long arn format ?

RB avatar

if so, did you rebuild your ecs cluster ?

Jimmie Butler avatar
Jimmie Butler

Rebuilt ecs cluster, but didn’t realize there’s a setting on the AWS account. That’s probably it

Jimmie Butler avatar
Jimmie Butler

Thanks

RB avatar

i added the use old arn variable to make sure to not tag the task/service (i forget which one) to allow using that module for the not-long arn formats

RB avatar

nice! np

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


The new ARN and resource ID format must be enabled to add tags to the service

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this must be done manually in the AWS console

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for each region separately
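
If clicking through the console per region is tedious, the same opt-in can be scripted with the AWS CLI. A sketch (run as an admin, once per region you use; the region shown is just an example):

```shell
# opt the whole account into the new (long) ARN format,
# which is what allows tagging services and tasks
aws ecs put-account-setting-default --name serviceLongArnFormat --value enabled --region us-east-1
aws ecs put-account-setting-default --name taskLongArnFormat --value enabled --region us-east-1
```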

RB avatar

ah it’s per region. interesting.

RB avatar

mine are set to “undefined” which means

RB avatar


An undefined setting means your IAM user or role uses the account’s default setting.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to be an admin to do so

Aumkar Prajapati avatar
Aumkar Prajapati

Hey guys, had a quick question, is there any reason adding or removing a rule from a wafv2 acl in the terraform itself forces a destroy/recreate of the entire acl? Currently trying to look for ways to get around this as I need the ACL modified in place rather than destroyed every time a dev goes to modify the wafv2 acl rules.

pjaudiomv avatar
pjaudiomv

no idea, but great question. following

1
loren avatar

i also don’t know, but sometimes you can decode why terraform wants to do something from the config and the plan output. if you can share those, maybe someone will be able to help

Aumkar Prajapati avatar
Aumkar Prajapati

It’s pretty much just a rule that Terraform mentions is forcing a replacement.

1
loren avatar

which often flows from something in the config. but can’t help if we can’t see it

maarten avatar
maarten

I’ve co-authored https://github.com/Flaconi/terraform-aws-waf-acl-rules/ and have not seen any of those issues. Maybe you can share your code ?

Flaconi/terraform-aws-waf-acl-rules

Module for simple management of WAF Rules and the ACL - Flaconi/terraform-aws-waf-acl-rules

Aumkar Prajapati avatar
Aumkar Prajapati

That’s wafv1 ^ my Terraform is for wafv2

1
sweetops avatar
sweetops

Anyone in here using Firelens to ship logs to multiple destinations? I’m using cloudposse/ecs-container-definition/aws and trying to come up with a log_configuration that will ship to both cloudwatch and logstash.

Matt Gowie avatar
Matt Gowie

@sweetops This might help you:

<source>
  @type forward
  bind 0.0.0.0
  port 24224
</source>

<filter *firelens*>
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
  </parse>
</filter>

<match *firelens*>
  @type datadog
  api_key "#{ENV['DD_API_KEY']}"
  service "#{ENV['DEPLOYMENT']}"
  dd_source "ruby"
</match>
Matt Gowie avatar
Matt Gowie

I believe to ship to two locations you would just need two <match> sections that ship to your respective destinations.
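
One caveat worth checking: fluentd routes an event to the first <match> whose pattern fits, so two sibling <match *firelens*> sections would not both receive the logs. The usual fan-out mechanism is the built-in copy output with one <store> per destination. A rough sketch under that assumption (the cloudwatch_logs output and the logstash endpoint are hypothetical and would need their plugin gems installed, like the datadog plugin above):

```
<match *firelens*>
  @type copy
  # each <store> receives its own copy of every event
  <store>
    @type cloudwatch_logs           # from fluent-plugin-cloudwatch-logs (assumed)
    log_group_name my-app-logs      # hypothetical
    auto_create_stream true
  </store>
  <store>
    # second destination; use whichever output plugin reaches your
    # logstash endpoint (tcp, http, beats, ...) - shown here with out_http
    @type http                      # hypothetical choice
    endpoint http://logstash.internal:8080
  </store>
</match>
```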

Matt Gowie avatar
Matt Gowie

Haha btw — That is a fluentd configuration. We have the AWS fluentbit / firelens configuration as a sidecar on each of our ECS containers and then host a single fluentd container in our cluster for shipping externally.

Matt Gowie avatar
Matt Gowie

Here is the Dockerfile for that container:

FROM fluent/fluentd:v1.7.4-debian-1.0

USER root


RUN buildDeps="sudo make gcc" \
 && apt-get update \
 && apt-get install -y --no-install-recommends $buildDeps \
 && sudo gem install fluent-plugin-datadog \
 && sudo gem sources --clear-all \
 && SUDO_FORCE_REMOVE=yes \
    apt-get purge -y --auto-remove \
                  -o APT::AutoRemove::RecommendsImportant=false \
                  $buildDeps \
 && rm -rf /var/lib/apt/lists/* \
	&& rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem

COPY fluent.conf /fluentd/etc
simplepoll avatar
simplepoll

Do you pin the version of TF and/or your providers/plugins?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform now has an official stance on how to pin as pointed out to me by @Jeff Wozniak

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this bit us hard so we no longer use it since all of our modules are intended to be composed in other modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
loren avatar

we use a strict pin only in root modules, and any composable modules use only a min version

loren avatar

and the min version is usually optional. we only add the restriction if we know we’re using a feature that depends on a min version of something (e.g. module-level count requires tf 0.13)
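
Sketched as code, that convention looks something like this (the version numbers are placeholders, not recommendations):

```hcl
# root module: exact pin, so every apply uses the same tooling
terraform {
  required_version = "0.13.3" # placeholder

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.7.0" # placeholder
    }
  }
}

# composable module: minimum only, and only when a feature requires it
terraform {
  required_version = ">= 0.13" # e.g. because the module uses module-level count
}
```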

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Hmmm interesting. So at a minimum, a min version. Not leaving it empty.

1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Would appreciate your guys’ feedback on the above. Trying to determine best practices for our environment.

Jaeson avatar

Hi everyone. I’m creating public subnets like this:

resource "aws_subnet" "adv2_public_subnets" {
  for_each = var.adv2_public_subnet_map[var.environment]

  vpc_id = var.vpc_map[var.environment]
  cidr_block = each.value
  availability_zone = each.key
  tags = merge(local.tags, { "Name" = "adv2-${var.environment}-pub-net-${each.key}" } )
}

and I’d like to be able to refer to them similar to this:

resource "aws_lb" "aws_adv2_public_gateway_alb" {
  name               = "services-${var.environment}-public"
  internal           = false
  load_balancer_type = "application"
  
  subnets            = aws_subnet.adv2_public_subnets

  idle_timeout       = 120
  tags = local.tags
}

This also failed to work:

  subnets            = [aws_subnet.adv2_public_subnets[0].id, aws_subnet.adv2_public_subnets[1].id]

… I’ve since been unable to figure out how to refer to the subnets created as a list of strings

I think the issue is that subnets is a collection of objects, not a list of strings, but I’m not sure how to say give me back a list of strings of attribute x for each object in the collection.

I’m also really not sure how to easily figure out what exactly aws_subnet.adv2_public_subnets is returning without breaking apart the project and creating something brand new just to figure out what that would be. … is there a way to see this?

Jaeson avatar

Also tried:

subnets            = [aws_subnet.adv2_public_subnets.*.id]

where I tried to implement what I saw in this example: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#application-load-balancer

loren avatar

the object is a map, so you need to use for to access the attribute you want:

[ for subnet in aws_subnet.adv2_public_subnets : subnet.id ]
Jaeson avatar

ah, that’s how you do that. It wasn’t clear to me how to apply that to a value.

loren avatar

it’s not intuitive. would love if we could index into the map for an attribute, the way we can with a list/tuple, a la aws_subnet.adv2_public_subnets[*].id

Jaeson avatar

I had no idea that could be done (for list/tuple)

Jaeson avatar

Sweet. That does exactly what I want. Thanks! For reference,

resource "aws_lb" "aws_adv2_public_gateway_alb" {
  name               = "services-${var.environment}-public"
  internal           = false
  load_balancer_type = "application"
  
  subnets            = [ for subnet in aws_subnet.adv2_public_subnets : subnet.id ]
}
1

2020-09-19

organicnz avatar
organicnz

Hi guys, I’ve got an error when I tried terraform apply to spin up a few nodes on Google, just preparing infra for the GitLab CI/CD pipelines on Kubernetes. We had an outage at our ISP for a few days, but not sure that it can be related to this issue.

Code: https://gitlab.com/organicnz/gitops-experiment.git

terraform apply -auto-approve -lock=false       
google_container_cluster.default: Creating...
google_container_cluster.default: Still creating... [10s elapsed]
Failed to save state: HTTP error: 308


Error: Failed to persist state to backend.

The error shown above has prevented Terraform from writing the updated state
to the configured backend. To allow for recovery, the state has been written
to the file "errored.tfstate" in the current working directory.

Running "terraform apply" again at this point will create a forked state,
making it harder to recover.

To retry writing this state, use the following command:
    terraform state push errored.tfstate
organicnz avatar
organicnz
Review Suggested Edits
Stack Overflow: The World’s Largest Online Community for Developers

2020-09-20

Jurgen avatar

is there some way I can get tf to load a directory of variable files?

1
Matt Gowie avatar
Matt Gowie

Not that I know of. But if you want to roll that into your projects workflow then the common approach for this type of thing is to script around it using Make or bash.

loren avatar

Use the extension .auto.tfvars?

loren avatar


Terraform also automatically loads a number of variable definitions files if they are present:

Files named exactly terraform.tfvars or terraform.tfvars.json.

Any files with names ending in .auto.tfvars or .auto.tfvars.json.

loren avatar
Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
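
So instead of loading an arbitrary directory, the convention is to name the files so Terraform picks them up by itself, e.g.:

```
terraform.tfvars        # always loaded
network.auto.tfvars     # loaded automatically
tags.auto.tfvars        # loaded automatically
staging.tfvars          # only loaded with -var-file=staging.tfvars
```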

kskewes avatar
kskewes

We heavily use symlinks in Linux to hydrate global, environment, region variables and auto tfvars

Jurgen avatar

yeah, right… ok

Jurgen avatar

all very interesting ideas

RB avatar

Your variables could be outputs in module A, then module B could use a remote state data source to retrieve module A’s outputs, which can then be used

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(or use YAML configuration files)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

More and more we’re taking the approach that HCL is responsible for business logic and YAML is responsible for configuration.

1
Matt Gowie avatar
Matt Gowie

How do you load the YAML into TF vars?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So our modules always operate on inputs (variables), but at the root-level module, they operate on declarative configurations in YAML.

Matt Gowie avatar
Matt Gowie

Ah. And you load as locals?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-opsgenie-incident-management

Contribute to cloudposse/terraform-opsgenie-incident-management development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

Ah then you load into a module similar to the context.tf pattern and that can provide the variable interpolation bit. Cool, good stuff

Jurgen avatar

yeah, nice! https://github.com/cloudposse/terraform-opsgenie-incident-management/blob/master/examples/config/main.tf

This is totally what I was looking for.. amazing. I like it @Erik Osterman (Cloud Posse)
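
The heart of that pattern is small: at the root module, decode a YAML file and feed the result to modules as inputs. A minimal sketch (the file name and keys are made up for illustration):

```hcl
locals {
  # hypothetical config file checked in next to the root module
  config = yamldecode(file("${path.module}/config.yaml"))
}

module "service" {
  source = "./modules/service" # hypothetical module

  name        = local.config["name"]
  environment = local.config["environment"]
}
```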

2020-09-21

jose.amengual avatar
jose.amengual

I’m starting to upgrade TF projects to 0.13 and in one particular project I have 3 provider aliases and I’m getting Error: missing provider provider["registry.terraform.io/hashicorp/aws"].us_east_2 which is weird because it does not complain for any other and the upgrade command works just fine

jose.amengual avatar
jose.amengual

nevermind……

jose.amengual avatar
jose.amengual

being dyslexic is not cool sometimes

Laurynas avatar
Laurynas

I’m creating a cloudfront terraform module and want to optionally add geo restriction:

restrictions {
  dynamic "geo_restriction" {
    for_each = var.geo_restriction
    content {
      restriction_type = geo_restriction.restriction_type
      locations        = geo_restriction.locations
    }
  }
}

variables.tf:

variable "geo_restriction" {
  type = object({
    restriction_type = string
    locations        = list(string)
  })
  description = "(optional) geo restriction for cloudfront"
  default     = null
}

However this gives me an error when I pass the default null variable:

for_each = var.geo_restriction
    |----------------
    | var.geo_restriction is null

Cannot use a null value in for_each.

Is there a way to fix this? Or am I doing it wrong?

zeid.derhally avatar
zeid.derhally

Your default should be an empty list, [] . You can then use toset() for the for_each

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

or: var.geo_restriction == null ? [] : var.geo_restriction

1
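
Combining the two suggestions, one sketch of the optional block: wrap the single object in a list so null becomes an empty collection, and note that inside a dynamic block the iterator’s fields are reached through .value (the original snippet’s geo_restriction.restriction_type would fail once the block renders). For aws_cloudfront_distribution specifically, the geo_restriction block may be required, in which case defaulting restriction_type to "none" is the alternative.

```hcl
restrictions {
  dynamic "geo_restriction" {
    # [] when no restriction is requested, a one-element list otherwise
    for_each = var.geo_restriction == null ? [] : [var.geo_restriction]
    content {
      restriction_type = geo_restriction.value.restriction_type
      locations        = geo_restriction.value.locations
    }
  }
}
```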

2020-09-22

Jimmie Butler avatar
Jimmie Butler

Anyone have an example including EFS + Fargate, including permissions?

Jimmie Butler avatar
Jimmie Butler

I’ve been struggling to get past

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-5e587347.efs.us-west-1.amazonaws.com" - check that your file system ID is correct. See <https://docs.aws.amazon.com/console/efs/mount-d>...
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Oh I ran into this too. A few things to look into:

  1. Make sure you can actually get to the EFS from where FARGATE is running. Specifically, it uses the NFS protocol.
  2. Make sure you’ve got DNS resolution (using AWS’s DNS).
  3. Look at the policies on the EFS end, to make sure they’re allowing FARGATE to connect in.
Jimmie Butler avatar
Jimmie Butler

The DNS piece may be what I’m missing, will take a look at that thanks.

Jimmie Butler avatar
Jimmie Butler

Thank you so much @Yoni Leitersdorf (Indeni Cloudrail) I was missing a DNS flag on my vpc. Was fighting this for a while.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Happy to help. Wasted hours of my life on this.

Richard Quadling avatar
Richard Quadling

Hello. Just looking to use https://github.com/cloudposse/terraform-aws-elasticache-redis. Part of the task is to create users on the redis server that are essentially read-only users. Is this possible with this module, or terraform in general? We already have a bastion SSH tunnel in place that only allows tunnelling to specific destinations, so no issue with connecting to the Redis instances.

My guess is that unless there’s a specific resource to monitor, terraform isn’t going to be involved.

But any suggestions would be appreciated.

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

Joe Niland avatar
Joe Niland

You’re talking about Redis ACL right?

I don’t think the AWS API deals with this at all https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_Operations.html so it’s unlikely the Terraform resources would.

Interesting that no-one has done this (based on my 5 second Google search)

Actions - Amazon ElastiCache

The following actions are supported:

1
Richard Quadling avatar
Richard Quadling

Thank you for that. I’m not an expert in this area at all and so learning what’s what.

Would certainly be an interesting ask though.

MrAtheist avatar
MrAtheist

Does anyone know how to terraform apply just for additional outputs? (and ignore the rest of the changes as it’s being tampered with) edit: there doesn’t seem to be a way to pass in ignore_changes into a module? I’m using the vpc module and just want to append some outputs I’ve missed without messing with the diff.

Matt Gowie avatar
Matt Gowie

@MrAtheist I think you’re looking for terraform refresh

2
MrAtheist avatar
MrAtheist

ahh life saving!! i was googling like a fanatic and at one point I also stumbled upon terraform refresh… but nowhere in the doc does it mention that it can update the outputs. Thanks!

Matt Gowie avatar
Matt Gowie

np!

2020-09-23

Tomek avatar

:wave: I’m trying to use https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack and was wondering how the kms_key_arn is supposed to be used with the required slack_webhook_url string parameter. I was going to create a SecureString parameter in Parameter Store and am not sure if that’s the correct way to go about using kms_key_arn

cloudposse/terraform-aws-sns-lambda-notify-slack

Terraform module to provision a lambda function that subscribes to SNS and notifies to Slack. - cloudposse/terraform-aws-sns-lambda-notify-slack

Tomek avatar

ah i think I can just use the aws_ssm_parameter data source as the value for slack_webhook_url in my terraform and ignore the kms_key_arn attribute https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter
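
In sketch form (the parameter name is hypothetical, and the registry source is assumed from the GitHub repo linked above):

```hcl
data "aws_ssm_parameter" "slack_webhook" {
  # SecureString parameters are decrypted transparently by this data source
  name = "/alerts/slack_webhook_url" # hypothetical
}

module "notify_slack" {
  source = "cloudposse/sns-lambda-notify-slack/aws" # assumed registry source

  slack_webhook_url = data.aws_ssm_parameter.slack_webhook.value
  # ... remaining module inputs ...
}
```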

pjaudiomv avatar
pjaudiomv

is there any way to move a local terraform state to remote s3? I feel like I’m missing something super trivial

Matt Gowie avatar
Matt Gowie

Terraform should handle it for you if you add the remote S3 backend config. You add the config, terraform init, and then it will prompt you to transfer it.

pjaudiomv avatar
pjaudiomv

yea I saw that in the docs and tried it and it never asked me

pjaudiomv avatar
pjaudiomv

i’ma try it again, thanks.

Matt Gowie avatar
Matt Gowie

Huh. What tf version? Do you maybe have multiple terraform blocks / backend configs or something similar?

pjaudiomv avatar
pjaudiomv
0.12.28
pjaudiomv avatar
pjaudiomv
terraform {
  required_version = ">= 0.12.0"
}

provider "aws" {
  region = var.region
}
pjaudiomv avatar
pjaudiomv

and no backup config

pjaudiomv avatar
pjaudiomv

i mean backend

vFondevilla avatar
vFondevilla

without the backend configuration you can’t

Matt Gowie avatar
Matt Gowie

Hm. Yeah, I’ve definitely had this work across a number of 0.12.* versions.

Matt Gowie avatar
Matt Gowie

Yeah, you need that backend config.

vFondevilla avatar
vFondevilla

You need to include the backend config and issue an terraform init for the state migration to happen

Matt Gowie avatar
Matt Gowie

Or are you using the backend init flags?

pjaudiomv avatar
pjaudiomv

oh my bad, i mean i didn’t before… because it was local.

pjaudiomv avatar
pjaudiomv
terraform {
  backend "s3" {
    bucket         = "tfstate-account-number"
    region         = "us-east-1"
    dynamodb_table = "tfstate-lock-account-number"
    key            = "egress-proxy/terraform.tfstate"
  }
}
Matt Gowie avatar
Matt Gowie

Is that in conjunction with your other terraform block? I’m wondering if they don’t jive together if there are two. Not sure if I’ve done that myself.

pjaudiomv avatar
pjaudiomv

yes it is, i will join em

pjaudiomv avatar
pjaudiomv

sweet that worked

pjaudiomv avatar
pjaudiomv

thanks for the help

Matt Gowie avatar
Matt Gowie

cool-doge

1
DJ avatar
DJ

Anyone have any more detailed info on this new OSS project from Hashicorp?

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m curious too - I remember the announcement, but don’t know what was announced

Chris Fowles avatar
Chris Fowles

product hasn’t been announced yet other than the hype building

DJ avatar

We’re on 0.14 already?!

roth.andy avatar
roth.andy
Terraform 0.14.0-alpha releases

The Terraform core team is excited to announce that we will be releasing early pre-release builds throughout the development of 0.14.0. Our hope is that this will encourage the community to try out in-progress features, and offer feedback to help guide the rest of our development. These builds will be released as the 0.14.0-alpha series, with the pre-release version including the date of release. For example, today’s release is 0.14.0-alpha20200910. Each release will include one or more change…

DJ avatar

Awesome. Thanks

Release notes from terraform avatar
Release notes from terraform

v0.14.0-alpha20200923 0.14.0 (Unreleased) UPGRADE NOTES: configs: The version argument inside provider configuration blocks has been documented as deprecated since Terraform 0.12. As of 0.14 it will now also generate an explicit deprecation warning. To avoid the warning, use provider requirements declarations instead. (https://github.com/hashicorp/terraform/issues/26135…

configs: deprecate version argument inside provider configuration blocks by mildwonkey · Pull Request #26135 · hashicorp/terraform

The version argument is deprecated in Terraform v0.14 in favor of required_providers and will be removed in a future version of terraform (expected to be v0.15). The provider configuration document…

RB avatar

anyone able to do an s3_import using an rds cluster ?

RB avatar

discovered i needed to provide the master user and pass and now it fails with a weird error after trying to create it for 5 min

RB avatar

i keep getting this error S3_SNAPSHOT_INGESTION

RB avatar

which i imagine is because of some iam restriction

RB avatar

but it has s3 and rds full rights …

jose.amengual avatar
jose.amengual

we do this for snapshots and imports

RB avatar

what kind of iam role do you use ?

RB avatar

have you run into this S3_SNAPSHOT_INGESTION error ?

RB avatar

nvm, i got around it by doing this in the UI and then backporting it back into terraform

jose.amengual avatar
jose.amengual

I do not remember running into that error

jose.amengual avatar
jose.amengual

Glad you solved it

Jon avatar

can for_each be used for data source lookups? I want to do something like this:

## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {
  for_each = toset(var.ACCOUNT_ID)

  ACCOUNT_ID = each.key
}
Jon avatar

would my var.ACCOUNT_ID just be a list of strings? or have to be a more complex variable declaration? I’m having issues trying to get it to work at the moment.

Jon avatar
variable ACCOUNT_ID {
   default = [""]
   type = list(string)
}
Jon avatar

never mind.. I got it. Had to remove ACCOUNT_ID = each.key from the data lookup. Thanks

1
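For reference, the working lookup needs no for_each at all, since the aws_organizations_organization data source takes no arguments. A minimal sketch (the output name is hypothetical):

## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {}

output "account_ids" {
  value = data.aws_organizations_organization.my_org.accounts[*].id
}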
Jon avatar

I guess I’m trying to figure out how to actually “filter” and get the specific value that I need.. Does for_each use the lookups as a normal data lookup? or..

Jon avatar

What I am trying to figure out how to do is have a data source lookup on my AWS organization and somehow save all of the account numbers. Then, I’d like to pass that information as a variable into my AWS provider so I can loop through accounts and create common resources across multiple accounts but I don’t want to maintain a hardcoded list of account numbers. Oh, and I’m using tfvars and not workspaces.

Jon avatar
## retrieve all organizational account ID's
data "aws_organizations_organization" "my_org" {
  for_each = toset(var.ACCOUNT_ID)
}

provider "aws" {
  region = var.region
  assume_role {
    role_arn     = "arn:${var.partition}:iam::${each.value.accounts}:role/<ROLE_NAME>"
  }
}
1
Jon avatar

looks like this is a known limitation. Sigh.. https://github.com/hashicorp/terraform/issues/19932

Instantiating Multiple Providers with a loop · Issue #19932 · hashicorp/terraform

Current Terraform Version Terraform v0.11.11 Use-cases In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend …

OliverS avatar
OliverS

I like to use cloudposse/terraform-aws-tfstate-backend but it seems overkill to create one bucket per terraform state. The module doesn’t seem to allow me to re-use an existing bucket; is there another module that still does the dynamodb setup but uses an already existing bucket?

Almost seems like the s3 stuff should be in a separate module. That way one could create a common bucket to be used for all terraform stacks (in separate “folders” in that bucket, of course) and a dynamodb table + backend.tf file for each cluster. I could refactor that myself of course, but then I would lose the bug fixes & improvements you guys make.

Alex Jurkiewicz avatar
Alex Jurkiewicz

buckets are free, I wouldn’t worry about it

Alex Jurkiewicz avatar
Alex Jurkiewicz

the cost of accidentally overwriting one stack’s state with another stack’s is extremely high, and using different buckets is an effective way to reduce that risk

Peter Huynh avatar
Peter Huynh

agree. It’s also easier to manage permissions between buckets as opposed to objects inside a single bucket.

RB avatar

idk i kind of agree with op. we use a single versioned bucket for tfstates and it works. we have upwards of 1000 modules with multiple workspaces, so probably upwards of 3000 states. creating 3000 buckets seems ridiculous compared to 1 with versioning.

1
OliverS avatar
OliverS

yeah well that’s exactly it, as shown by @RB it doesn’t seem to scale well.

Another example: we have an EKS cluster setup with terraform, that’s one state, then we have AWS resources for each deployment of an app in that cluster, each deployment has its own terraform state that “extends” the cluster terraform state (uses remote state as input). It would make sense to have those deployment terraform states all in the same bucket as the cluster state.

So I could see one bucket per cluster, but then if you want to break down the state into smaller pieces for re-usability, separate buckets massively clutter the bucket namespace.

RB avatar

a single bucket has infinite prefixes. why not just use a prefix per cluster

RB avatar

so that we keep everything with high cardinality, this is the scheme we use:

s3://aws-account-id_tfstates/github_org/repo/module_path

Jon avatar

I used to have 1 statefile bucket that managed everything. Then I recently migrated to having 1 statefile bucket per AWS account with DynamoDB locking. Inside of those account-specific buckets there could be 50 folders all separating out different statefiles, but at least those statefiles live with that account.

RB avatar

yep that makes sense

RB avatar

fyi jon, those are not folders, they are prefixes. the aws console just shows them in a directory structure

1
RB avatar

you might think im nitpicking and the difference is subtle but i think important

RB avatar

and i completely agree. an s3 tfstate bucket and a dynamodb table per account also makes more sense than a bucket and dynamodb table per module.

RB avatar

we use a single tfstate bucket in our primary iam account and when we need to do things in a separate account, we use the same bucket and same dynamodb table in the primary account and just assume a role in the other account

Jon avatar

if only we could dynamically loop through Terraform AWS providers to keep code DRY. Maybe one day

loren avatar

You can generate your tf provider blocks with cdktf or terragrunt

1
Jon avatar

Yeah, I thought about switching to Terragrunt for this current work I’m trying to do.. Might try one or two more things before switching. Basically I want to create an IAM role with some permissions as well as my DynamoDB with S3 resources from a centrally managed account.

I don’t want to keep a list of account numbers in my organization. Although it’s kind of similar, I’m about to test out multiple tfvars and some template rendering for the policy. Then in my CICD pipeline it’ll just be completely different environments per tfvar I use. Not sure how well that’ll work but hopefully I find out soon enough.

Jon avatar

Ideally, I wanted to do a data source lookup on my organization, grab the account numbers and dynamically loop through my provider to create the resources I need.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@OliverS we used to create one bucket per AWS account. Now we only create one bucket period. We use path prefixes with the backend object.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    template = {
      source  = "hashicorp/template"
      version = "~> 2.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 1.3"
    }
  }

  backend "s3" {
    encrypt              = true
    bucket               = "eg-uw2-root-tfstate"
    key                  = "terraform.tfstate"
    dynamodb_table       = "eg-uw2-root-tfstate-lock"
    workspace_key_prefix = "eks"
    region               = "us-west-2"
    role_arn             = "arn:aws:iam::xxxxxxxxx:role/eg-gbl-root-terraform"
    acl                  = "bucket-owner-full-control"
  }
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note the workspace_key_prefix

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can also prefix the key
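With that backend, non-default workspaces store state under the workspace_key_prefix while the default workspace uses the bare key, so (assuming a workspace named prod) the state objects would land at roughly:

eg-uw2-root-tfstate/terraform.tfstate            # default workspace
eg-uw2-root-tfstate/eks/prod/terraform.tfstate   # workspace "prod"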

Jon avatar

Besides easier management and fewer things to deploy, are there any other benefits @Erik Osterman (Cloud Posse)? I’m asking because we originally had one bucket, period, but then started thinking “well maybe customerA wants their statefile inside their own account and apart from everyone else” so we just started down that path.

for example:

customerA
|_ s3-bucket
   |_ vpc_module
      |_ region-specific-statefile
   |_ app_server_module
      |_ region-specific-statefile
|_ dynamoDB
|_ CMK for encryption

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

more and more we’re working with 10-20 AWS accounts. The coldstart for managing state buckets makes turnkey coldstart provisioning a real hassle.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And their state is still in separate S3 “folders” (it can always be moved later if it’s an issue). But no need to prematurely optimize for the hardest case.

1
1
Jon avatar

I agree. I’m currently transitioning to all GovCloud accounts and have a very small set of accounts already deployed and it’s already been a hassle. I know I’ll end up having 60+ when it’s all said and done.

Jon avatar

thanks for the feedback

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is, just for the record, a huge mea culpa because we were the staunchest advocates of using a minimum of one bucket per account from the get-go. However, when we held that position we also didn’t have a strong gitops/continuous delivery story for terraform. Now we’ve relaxed it, as we do almost entirely continuous delivery of terraform, and that required some tradeoffs to simplify things.

1
RB avatar

awesome, so it seems completely possible to have multiple account tfstate files in a single s3 bucket

1
RB avatar

thanks for weighing in erik

OliverS avatar
OliverS

@Erik Osterman (Cloud Posse) the example you show assumes the bucket already exists, but I am looking for a way to create the state bucket the same way that terraform-state-backend does, without rolling my own (which I could do, but would rather use an already-made module if there is one). Please see https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/72 and let me know what you think.

Make s3 bucket and dynamodb table optional · Issue #72 · cloudposse/terraform-aws-tfstate-backend

Describe the Feature The module (“tatb”) should support 2 use cases: bucket already exists: do not create the bucket dynamodb table will be created separately: do not create the dynamodb …

Jurgen avatar
Data aws_iam_policy_document and for_each showing changes on every plan and nothing on apply

So, I have some IAM policies I am building with for_each which are then used as assume_role_policy and aws_iam_policy but on every plan: Plan: 0 to add, 20 to change, 0 to destroy. and then apply: Apply complete! Resources: 0 added, 0 changed, 0 destroyed. Some details: $ tf version Terraform v0.13.3 + provider instaclustr/instaclustr/instaclustr v1.4.1 + provider registry.terraform.io/hashicorp/aws v3.7.0 + provider registry.terraform.io/hashicorp/helm v1.3.0 + provider registry.terraform….

RB avatar

you forgot to put in effect and resources arguments and that’s probably why it shows a difference

RB avatar

im guessing. ^

Jurgen avatar

yeah, well they are optional with defaults right

Jurgen avatar

I can try it

Jurgen avatar

good idea

RB avatar

did it work ?

Jurgen avatar

sorry, I am in AU and you messaged… late at night. Trying to get around to it today.

Jurgen avatar

and I was so swamped I didn’t even get around to it.. Monday!

2020-09-24

Abel Luck avatar
Abel Luck

I’m looking for an easy pattern for deploying lambdas with terraform, when the lambda code lives in the terraform module repo. This is for small lambdas that provide maintenance or config services. The problem is always updating the lambda when the code changes: a combination of a null_resource to build the lambda and an archive_file to package it into a zip works, but we end up having a build_number as a trigger on the null_resources that we have to bump to get it to update the code.

Is there some other pattern to make this easier?

I’ve thought about packaging the lambda in gitlab/github CI, but terraform cannot fetch a URL to deploy the lambda source
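One way to drop the manual build_number trigger is to key the function update off the archive hash itself. A sketch, with paths and names hypothetical (a separate build step would still need a null_resource, but the hash replaces the counter):

## rebuild the zip whenever anything under the source dir changes
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "maintenance" {
  function_name    = "maintenance"
  filename         = data.archive_file.lambda.output_path
  ## a changed hash forces a code update, no counter to bump
  source_code_hash = data.archive_file.lambda.output_base64sha256
  handler          = "index.handler"
  runtime          = "nodejs12.x"
  role             = aws_iam_role.lambda.arn
}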

Abel Luck avatar
Abel Luck

Haven’t seen this! Looks promising, will give it a spin. Thanks!

np1
Abel Luck avatar
Abel Luck

Works great

RB avatar

w00t!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, @antonbabenko is working on https://serverless.tf

Doing serverless with Terraformattachment image

serverless.tf is an opinionated open-source framework for developing, building, deploying, and securing serverless applications and infrastructures on AWS using Terraform.

cool-doge1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for simple situations, our pattern is to build the zip in a GitHub action and upload it to an S3 bucket as an artifact.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ses-lambda-forwarder

This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then we use our terraform-external-module-artifact module to download and deploy the artifact.

1

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-external-module-artifact

Terraform module to fetch any kind of artifacts using curl (binary and text okay) - cloudposse/terraform-external-module-artifact

antonbabenko avatar
antonbabenko

Btw, https://github.com/terraform-aws-modules/terraform-aws-lambda - does the same as claranet/terraform-aws-lambda, but better. See the README and examples for more.

terraform-aws-modules/terraform-aws-lambda

Terraform module, which takes care of a lot of AWS Lambda/serverless tasks (build dependencies, packages, updates, deployments) in countless combinations - terraform-aws-modules/terraform-aws-lambda

claranet/terraform-aws-lambda

Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

heads up: if you’re using our terraform-aws-rds-cluster module, we fixed a “bad practice” related to using an inline security group rule, but upgrading is a breaking change. we’ve documented one way of migrating here: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/83

How to migrate from the inline Security Group rules to SG rules as separate resources · Issue #83 · cloudposse/terraform-aws-rds-cluster

This PR #80 changed the Security Group rules from inline to resource-based. This is a good move since inline rules have many issues (e.g. you can’t add new rules to the security group since it…

3
Yash avatar

What is the best way to pass the same local/variable to each module? I want the value to be available to all our modules. It would be great if there were a way to declare a global variable

MattyB avatar

We’re defining ours in a locals { name = "cool" } block and referencing it like so: local.name

Yash avatar

but then you have to pass that with every module:

module * {
  name = local.name
}
MattyB avatar

This keeps the number of variables to a minimum. I’m not sure there’s a better solution for TF 0.12.x. I don’t think TF 0.13 brings any improvements to this.

Yash avatar

Yeah, not sure if this is the kind of problem everyone has?

Yash avatar

Yeah I agree, we can define the map with all the constants

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Depends on your use-case, but have you seen the cloudposse/terraform-null-label ? we use this so we can just pass around a single variable called local.this.context

party_parrot2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The context has all the variables.
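Roughly, the pattern looks like this (assuming a recent version of terraform-null-label that exposes the context output; the app module name is hypothetical):

module "label" {
  source    = "cloudposse/label/null"
  namespace = "eg"
  stage     = "prod"
  name      = "app"
}

module "app" {
  source  = "./modules/app"
  ## a single input carries namespace/stage/name/tags consistently
  context = module.label.context
}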


Yash avatar

Stupid question: How is this beneficial compared to just defining the locals file with:

locals {
  this = {
    # ... My Context dict
  }
}

So basically how is passing module output different than locals variable?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, you could do that too. Take a look at the module to see why we do it. All of our 100+ terraform modules use this pattern. Using this module has enabled us to enforce consistency.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note the normalized outputs and tags)

Yash avatar

Yeah I was sensing the same thing, it is making sure every service is defining it consistently

1
Yash avatar

Thanks Erik, I love this channel! Learning a lot through it

OliverS avatar
OliverS

In the latest terraform-aws-tfstate-backend module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?

2020-09-25

Abel Luck avatar
Abel Luck

I’m working on a module that sets up an aws root account with an Org and children accounts. In this module I want to 1) create an audit logs account 2) create a bucket in this logs account. How would I go about doing this? How would terraform execute actions in a newly created account?

Abel Luck avatar
Abel Luck

Figured it out. It’s as easy as defining a new provider block with

  allowed_account_ids = [local.log_account_id]
  assume_role {
    role_arn = "arn:aws:iam::${local.log_account_id}:role/OrganizationAccountAccessRole"
  }

and then passing the provider to a module that does what you need
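Passing such a provider into a module goes through an alias plus a providers map. A sketch (the alias and module source are hypothetical):

provider "aws" {
  alias               = "logs"
  allowed_account_ids = [local.log_account_id]
  assume_role {
    role_arn = "arn:aws:iam::${local.log_account_id}:role/OrganizationAccountAccessRole"
  }
}

module "audit_log_bucket" {
  source = "./modules/log-bucket"
  providers = {
    aws = aws.logs
  }
}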

charlespogi avatar
charlespogi

can you please help point out which permissions i need to add to that user?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It tells you in the error message: iam:CreateRole

Also keep in mind bucket names need to be globally unique. Someone else has that bucket already.

1
OliverS avatar
OliverS

In the latest terraform-aws-tfstate-backend module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a terraform-aws-provider >= 3.0.0 requirement that the region cannot be specified with the bucket.

In the latest terraform-aws-tfstate-backend module, the region can no longer be specified as a module parameter; it is inferred from the provider region. If the region were a module parameter, I could loop over a set of regions. How can I do this now?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, passing providers with 0.13 in module for_each is currently a problem being tracked.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) have you seen any updates on that issue?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in a module with count or for_each, you can’t pass any providers at all: no single provider, map, or list of providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s not solved yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what @Abel Luck has shown is the only way to iterate regions by using multiple providers

OliverS avatar
OliverS

@Andriy Knysh (Cloud Posse) is there code that I could look at to do this

2020-09-26

charlespogi avatar
charlespogi

do we have something like aws ec2imagebuilder in terraform?

RB avatar

I believe you’re looking for hashicorps packer

vFondevilla avatar
vFondevilla

yup, packer + codebuild + codepipeline does the trick

1

2020-09-27

t.hiroya avatar
t.hiroya

I can’t disable CloudWatch alarms in https://github.com/cloudposse/terraform-aws-elasticache-redis

I don’t want to create them for the dev env, since they incur some cost per month.

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

t.hiroya avatar
t.hiroya

possibly solved by adding another module variable that disables alarms, or by disabling alarms when nothing is specified for alarm_actions and ok_actions. Which is better?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sure, go ahead and add a feature flag ...._enabled to toggle the creation. We’ll get that merged quickly if tests pass. Post PR in #pr-reviews

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Add `cloudwatch_metric_alarms_enabled` variable. Update Terratest. Update to `context.tf` by aknysh · Pull Request #84 · cloudposse/terraform-aws-elasticache-redis

what Add cloudwatch_metric_alarms_enabled variable Update Terratest Update to context.tf why Allow disabling CloudWatch metrics alarms Standardization and interoperability Keep the module up to …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@t.hiroya did you try the latest release?

t.hiroya avatar
t.hiroya

@Andriy Knysh (Cloud Posse) Thanks, I should have tried that.

2020-09-28

sahil kamboj avatar
sahil kamboj

Hey guys, facing this issue every time I do

terraform apply
(applied many times)

it always updates this alarm

~ dimensions                = {
          ~ "AutoScalingGroupName" = "terra-autoscaling-asg-garden" -> "terra-autoscaling-asg-prod"
        }
        evaluation_periods        = 2
        id                        = "cpu-low-alarm"

everything is fine in aws and the tfstate. I deleted the state locally, but it’s still there.

zidan avatar

#terraform Build once and run everywhere is a great concept behind docker and containers in general, but how do you deploy these containers? Here is how I used ECS to deploy my containers. Check it out and let me know how you deploy yours: https://www.dailytask.co/task/manage-your-containers-deployment-using-aws-ecs-with-terraform-ahmed-zidan

Manage your containers deployment using AWS ECS with Terraform

Manage your containers deployment using AWS ECS with Terraform written by Ahmed Zidan

jose.amengual avatar
jose.amengual

is there a WORKSPACE env variable in terraform?

Matt Gowie avatar
Matt Gowie

Are you talking about ${terraform.workspace}?

jose.amengual avatar
jose.amengual

no, just a plain WORKSPACE variable that terraform will read?

Matt Gowie avatar
Matt Gowie

Not that I know of. What’re you trying to do?

jose.amengual avatar
jose.amengual

I’m reading someone else’s code and I saw that and I was like, what is this?

Matt Gowie avatar
Matt Gowie

Oh maybe they’re using it in place of terraform.workspace?

jose.amengual avatar
jose.amengual

it’s in the atlantis source code

jose.amengual avatar
jose.amengual
Matt Gowie avatar
Matt Gowie

Aha in golang. Then that’s totally possible. I would assume it’d be TF_* though so maybe that is only used by Atlantis?

jose.amengual avatar
jose.amengual

there are mentions of it in the code

Yash avatar

I am converting this cloudformation to Terraform:

  AppUserCredentials:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: !Sub "${AWS::StackName}/app-user-credentials"
      GenerateSecretString:
        SecretStringTemplate: '{"username": "app_user"}'
        GenerateStringKey: 'password'
        PasswordLength: 16
        ExcludePunctuation: true

I am unable to find how I can use the concept of GenerateSecretString with Terraform.
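A common Terraform equivalent combines a random_password resource with a secret version (resource names and the secret name are hypothetical):

resource "random_password" "app_user" {
  length  = 16
  special = false ## mirrors ExcludePunctuation: true
}

resource "aws_secretsmanager_secret" "app_user" {
  name = "mystack/app-user-credentials"
}

resource "aws_secretsmanager_secret_version" "app_user" {
  secret_id = aws_secretsmanager_secret.app_user.id
  secret_string = jsonencode({
    username = "app_user"
    password = random_password.app_user.result
  })
}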

Yash avatar

Exactly what I was looking for

Yash avatar

Thanks Yoni!

1

2020-09-29

Igor Bronovskyi avatar
Igor Bronovskyi

Hello. I need to generate an .env file from a terraform resource. How can I do it?

Igor Bronovskyi avatar
Igor Bronovskyi

I need to load the content of my environment file, like this

APP_ENV=release
HOST=${host}
PORT=3306
DB_SERVER=${mysqlhost}

and change the values before storing it
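One approach is the templatefile function plus the local_file resource. A sketch, where env.tpl holds the template above and the referenced resources are hypothetical:

resource "local_file" "dotenv" {
  filename = "${path.module}/.env"
  ## render the template, substituting ${host} and ${mysqlhost}
  content = templatefile("${path.module}/env.tpl", {
    host      = aws_instance.app.public_ip
    mysqlhost = aws_db_instance.db.address
  })
}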

Solomon Tekle avatar
Solomon Tekle

Hello, has anyone configured a certificate-based site-to-site VPN in Terraform? I get the following error when I try: Error: Unsupported argument

on aws.tf line 69, in resource "aws_customer_gateway" "customer_gateway_1": 69: certificate-arn = "arnawsacm894867615160:certificate/e3fc78b9-b946-4b41-8494-b33510aea894"

An argument named “certificate-arn” is not expected here.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Often times Terraform has multiple resources for a single AWS CLI command. The certificate-arn is not a parameter for this resource:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/customer_gateway

RB avatar

triple backticks are your friend @Solomon Tekle

1
Solomon Tekle avatar
Solomon Tekle

thanks @Yoni Leitersdorf (Indeni Cloudrail). is this a Terraform issue or AWS? for some reason the CLI does not support it as well, it’s available only in the GUI. fairly new capability, they started supporting it in March 2020

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Good question. Generally it’s possible for TF to be missing support for new features. You can look in the aws provider repo’s issues for requests to support this. I think I found what you’re looking for: https://github.com/terraform-providers/terraform-provider-aws/issues/10548

If that’s what you need, you’ll need to watch that issue and hope they get around to implementing it.

`aws_customer_gateway` certificate authentication support · Issue #10548 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave “+1” or “me to…

1
1
Solomon Tekle avatar
Solomon Tekle

Thank you, Yoni! Appreciate the response. Just left a comment there on why this feature is so important. Cert-based VPNs are now a major requirement for LTE/4G/5G (dynamic tunnel IP address) and from a security perspective, and we can’t use Terraform if this feature is not supported …

Solomon Tekle avatar
Solomon Tekle

the parameter exists in the command line: create-customer-gateway --bgp-asn <value> [--public-ip <value>] [--certificate-arn <value>] --type <value> [--tag-specifications <value>] [--device-name <value>] [--dry-run | --no-dry-run] [--cli-input-json <value>] [--generate-cli-skeleton <value>]

Solomon Tekle avatar
Solomon Tekle

trying to make this work in a testbed BTW

2020-09-30

MrAtheist avatar
MrAtheist

Anyone know of a tool (that’s equivalent to the cloudformation console) to list all the resources for a terraform state? (depth = 1 is fine, and terraform show/graph IS NOT human readable… )

loren avatar

terraform state list? or do you want actual resource ids?

charlespogi avatar
charlespogi
Error: AccessDenied: User: arn:aws:iam::395290764396:user/sabsab is not authorized to access this resource
        status code: 403, request id: 918de6f4-347c-420f-b7af-6a19b9a029a3

  on .terraform\modules\elastic_beanstalk_environment.dns_hostname\main.tf line 1, in resource "aws_route53_record" "default":
   1: resource "aws_route53_record" "default" {
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please reformat your message to use code blocks…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like you don’t have access to the route53 zone

charlespogi avatar
charlespogi

Thanks for answering Erik. The user sabsab was already set to have route53fullaccess. is that not enough?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Any Boundaries affecting that user? How about SCP?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you set TF_LOG=DEBUG and rerun, you might get more helpful information on the exact operation that failed.

charlespogi avatar
charlespogi

what iam permission do i need to add?

loren avatar
loren
02:31:17 PM

live streaming all day, https://www.twitch.tv/cdkday

Tom Vaughan avatar
Tom Vaughan
02:55:52 PM

Having an issue with https://github.com/cloudposse/terraform-aws-ecs-container-definition and log configuration. What do I need to set so that Auto-configure Cloudwatch Logs is checked in the container definition in ECS?

log_configuration = {
  logDriver = "awslogs"
  options = {
    "awslogs-group" = "/ecs/ctportal"
    "awslogs-region" = var.vpc_region
    "awslogs-stream-prefix" = "ecs"
  }
  secretOptions = []
}

When task definition is created the log parameters are set as defined above but the box to Auto-configure Cloudwatch Logs is not checked in ECS.

Zach avatar

You don’t need that option, you’re providing all the info it would otherwise figure out for you
When registering a task definition in the Amazon ECS console, you have the option to allow Amazon ECS to auto-configure your CloudWatch logs. This option creates a log group on your behalf using the task definition family name with ecs as the prefix.
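If the aim is simply that the log group exists (rather than having the console checkbox ticked), it can also be managed explicitly in Terraform. A sketch; the retention value is an assumption:

resource "aws_cloudwatch_log_group" "ecs" {
  name              = "/ecs/ctportal"
  retention_in_days = 30
}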

Paula avatar

i comment this here because it is related: if someone has any problem with the creation of the log group, the answer -> https://aws.amazon.com/es/premiumsupport/knowledge-center/ecs-resource-initialization-error/

Release notes from terraform avatar
Release notes from terraform
06:34:18 PM

v0.13.4 0.13.4 (September 30, 2020) UPGRADE NOTES: The built-in vendor (third-party) provisioners, which include habitat, puppet, chef, and salt-masterless are now deprecated and will be removed in a future version of Terraform. More information on Discuss. Deprecated interpolation-only expressions are detected in more contexts in…

Notice: Terraform to begin deprecation of vendor, tool-specific, provisioners starting in Terraform 0.13.4

Terraform is beginning a process to deprecate the built-in vendor provisioners that ship as part of the Terraform binary. Users of the Chef, Habitat, Puppet and Salt-Masterless provisioners will need to migrate to the included file, local-exec and remote-exec provisioners which are vendor agnostic. Starting in Terraform 0.13.4, users of the built in vendor provisioners will see a deprecation warning. We expect to remove the four vendor provisioners in Terraform 0.15. Since the release of Terraf…

mrwacky avatar
mrwacky

So… terraform graph.. As useless as puppet’s? Yessir.

Alucas avatar

so did terraform module chaining get wrecked with 0.13 or am I missing something? We have a resource_group module that creates the group and then the cluster module references that data, but it fails in 0.13 now

Alucas avatar
module "resource_group" {
  source                  = "../modules/azure_resource_group"
  resource_group_name     = "test"
  resource_group_location = "westus"
}



module "kubernetes" {
  source                     = "../modules/azure_aks"
  cluster_name               = var.cluster_name
  kubernetes_version         = var.kubernetes_version
  resource_group_name        = module.resource_group.name

output:

Error: Error: Resource Group "test" was not found

  on ../modules/azure_aks/main.tf line 1, in data "azurerm_resource_group" "rg":
   1: data "azurerm_resource_group" "rg" {

seems to work in 0.12 without issue

loren avatar

doubtful. do this all the time and have converted numerous configs/tfstates to tf 0.13 with no problem…

Alucas avatar

Yeah just discovered the new depends_on for modules, working now.
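The module-level depends_on added in 0.13 looks roughly like this (arguments trimmed to the ones shown above):

module "kubernetes" {
  source              = "../modules/azure_aks"
  ## force the resource group module to apply first
  depends_on          = [module.resource_group]
  cluster_name        = var.cluster_name
  resource_group_name = module.resource_group.name
}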

Alucas avatar

I guess it doesn’t infer the ordering anymore

loren avatar

it should infer ordering same as always. if you can come up with a minimal repro config, open a bug report

2
Emmanuel Gelati avatar
Emmanuel Gelati

depends_on I think it should be the last resource

    keyboard_arrow_up