#terraform (2020-02)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-02-01

pianoriko2 avatar
pianoriko2

Thanks @Erik Osterman (Cloud Posse) this is helpful.

2020-02-02

2020-02-03

Prasanna Pawar avatar
Prasanna Pawar

@here how do I do VPC peering using multiple NAT gateways with terraform?

Igor Bronovskyi avatar
Igor Bronovskyi
resource "aws_internet_gateway" "main_gw_1" {
  vpc_id = aws_vpc.main.id
}

resource "aws_internet_gateway" "main_gw_2" {
  vpc_id = aws_vpc.main.id
}
Cloud Posse avatar
Cloud Posse
05:00:19 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Feb 12, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-02-05

Brij S avatar

can outputs not have conditional count like resources do?

on ../outputs.tf line 7, in output "distribution_cross_account_role_arn":
   7:   count       = var.aws_env == "prod" ? 1 : 0

An argument named "count" is not expected here.
Adrian avatar

what for? if you use count with “prod” in resource you will have output for prod

Brij S avatar

well a resource is only created if aws_env == prod, otherwise not

Brij S avatar

so in that output, it only needs to output if the aws_env is prod, otherwise the resource wouldn’t exist in the first place

Adrian avatar

so you will have output if aws_env == prod, otherwise it will be empty

Brij S avatar

exactly

Brij S avatar

since that resource wouldn’t exist if aws_env != prod

Adrian avatar

e. g.

output "slack_channel" {
  value = var.enabled ? var.slack_channel : "UNSET"
}
Adrian avatar

put some fancy text instead of “UNSET”, “No output for this env” :P

Adrian avatar

or “Valid only for prod”

Brij S avatar

so value = var.aws_env == "prod" ? aws_iam_role…… : "UNSET" ?

Adrian avatar
locals {
  aws_env = "prod"
}

output "test" {
   value = local.aws_env == "prod" ? "This is prod" : "UNSET"
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

test = This is prod
Adrian avatar

so yes

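Since the role itself is created with count, a minimal sketch of an output that also tolerates the count-0 case (the resource name cross_account is an assumption; only the output name comes from the error above):

output "distribution_cross_account_role_arn" {
  # join() over the splat collapses to "" when the resource has count = 0
  value = var.aws_env == "prod" ? join("", aws_iam_role.cross_account.*.arn) : "UNSET"
}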
Rhawnk avatar

Does anyone know if terraform 0.12.x allows for_each to loop through regions. I’m attempting to create global dynamo tables in aws and figured I could save keystrokes if I use a for_each and pass the value into provider.

resource "aws_dynamodb_table" "table" {
    
  for_each = toset(var.table_regions)
  
  provider = aws.each.key
Rhawnk avatar

i get “invalid attribute name” after plan

creature avatar
creature

try to use a dynamic block. This is ripped from the Terraform Up & Running book.

creature avatar
creature
resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = data.aws_subnet_ids.default.ids
  target_group_arns    = [aws_lb_target_group.asg.arn]
  health_check_type    = "ELB"

  min_size = var.min_size
  max_size = var.max_size

  tag {
    key                 = "Name"
    value               = var.cluster_name
    propagate_at_launch = true
  }

  dynamic "tag" {
    for_each = var.custom_tags

    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}

Brikman, Yevgeniy. Terraform: Up & Running (Kindle Locations 3300-3316). O'Reilly Media. Kindle Edition. 
creature avatar
creature

sorry that paste sucks coming from PDF. It’s chapter 6 tips and tricks

Rhawnk avatar

thanks, ill give it a try

Rhawnk avatar

actually I don’t think that will work, as that loops through an element “like tags” within the resource; I want it to loop over the entire resource and change the provider (i.e. region)

Rhawnk avatar

I heard there was work on getting for_each to work for modules, that is likely the limitation I’m hitting here as well

creature avatar
creature

I’m not an expert, but I suspect you might be right.

Rhawnk avatar

Ha, nor am I, but thanks for the input

creature avatar
creature

check this issue and see if any of the workarounds might help.

https://github.com/hashicorp/terraform/issues/17519

count and for_each for modules · Issue #17519 · hashicorp/terraform

Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf module "x1" { sour…

Rhawnk avatar
count and for_each for modules · Issue #17519 · hashicorp/terraform

Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf module "x1" { sour…

creature avatar
creature

awesome

Rhawnk avatar

guess I’ll be writing in triplicate for now, until the module support comes out

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also it depends on what your reasons are for going multi region, but from an HA perspective sharing the same state bucket across regions could be limiting true HA if the failed region happens to be where you store terraform state

creature avatar
creature

do you recommend splitting the state per region typically Erik?

Rhawnk avatar

I was looking to set up global tables; based on the TF documentation you create all 3 individual tables, then tie them together with the aws_dynamodb_global_table resource

Rhawnk avatar

but I do see your point; I am using workspaces for ECS clusters that would be reading from the same table. I suppose I would stick to my same process and just keep the global table state in a single bucket

Rhawnk avatar

this is my first jump into multi-region, so I’m used to all my statefile eggs in the same basket of us-east-1

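A sketch of the per-region tables plus aws_dynamodb_global_table approach described above, with the provider aliases written out by hand since for_each cannot vary a resource's provider (region list, table name and capacities are assumptions; the 2017-version global table requires identically named tables with NEW_AND_OLD_IMAGES streams in every region):

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

resource "aws_dynamodb_table" "use1" {
  provider         = aws.use1
  name             = "example-global"
  hash_key         = "id"
  read_capacity    = 1
  write_capacity   = 1
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_dynamodb_table" "usw2" {
  provider         = aws.usw2
  name             = "example-global"
  hash_key         = "id"
  read_capacity    = 1
  write_capacity   = 1
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }
}

# Ties the per-region tables together into one global table
resource "aws_dynamodb_global_table" "this" {
  provider   = aws.use1
  name       = "example-global"
  depends_on = [aws_dynamodb_table.use1, aws_dynamodb_table.usw2]

  replica {
    region_name = "us-east-1"
  }

  replica {
    region_name = "us-west-2"
  }
}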
marcinw avatar
marcinw


until the module support comes out
This will be a loooooong wait.

marcinw avatar
marcinw

But, you can generate Terraform programmatically in which case you get for-each in modules for free.

marcinw avatar
marcinw
mjuenema/python-terrascript

Create Terraform files using Python scripts. Contribute to mjuenema/python-terrascript development by creating an account on GitHub.

JSON Configuration Syntax - Configuration Language - Terraform by HashiCorp

In addition to the native syntax that is most commonly used with Terraform, the Terraform language can also be expressed in a JSON-compatible syntax.

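For reference, the JSON syntax mentioned above means a generator script only needs to emit a file such as the following (a hypothetical two-region sketch; names and settings are assumptions, and each additional region is just another entry):

{
  "provider": {
    "aws": [
      { "alias": "use1", "region": "us-east-1" },
      { "alias": "usw2", "region": "us-west-2" }
    ]
  },
  "resource": {
    "aws_dynamodb_table": {
      "table_use1": {
        "provider": "aws.use1",
        "name": "example-global",
        "hash_key": "id",
        "read_capacity": 1,
        "write_capacity": 1,
        "attribute": [{ "name": "id", "type": "S" }]
      },
      "table_usw2": {
        "provider": "aws.usw2",
        "name": "example-global",
        "hash_key": "id",
        "read_capacity": 1,
        "write_capacity": 1,
        "attribute": [{ "name": "id", "type": "S" }]
      }
    }
  }
}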
Rhawnk avatar

Thanks I’ll give it a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@ljmsc @pamasaur Soon, we still expect that in another 0.12.x.

nyan_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


do you recommend splitting the state per region typically Erik?
I do, if you can stomach the extra complexity of managing an additional state bucket. It also depends on how mission critical this stuff is and if your organization has the (human) resources to manage it. Also, realize these things trickle down to things like DNS zones and service discovery as well. If you’re managing DNS entries for resources in a specific region with a different state backend, then the zone should also be managed in that region.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so from strictly an architectural POV, I think it’s the right way to go. But when considered in light of the management trade offs, then maybe not worth it.

1
jose.amengual avatar
jose.amengual

I will strongly recommend not using a single state bucket for multi-region, and will strongly recommend running terraform once per region by passing the region in as a variable instead of going through a loop

jose.amengual avatar
jose.amengual

So you will end up with state buckets per region, which is resilient to a full region failure

creature avatar
creature

thank you for the answer

jose.amengual avatar
jose.amengual

Plus you need to keep in mind naming conventions for resources that are global, like IAM

1
jose.amengual avatar
jose.amengual

So add the region to the name of every resource

jose.amengual avatar
jose.amengual

We just went through all this and we are now multi region and we learned a few lessons

creature avatar
creature

just getting started here, so really appreciate all the knowledge to make my journey more pleasant

jose.amengual avatar
jose.amengual

It is painful, I can tell you that much

creature avatar
creature

I’ve been in the game over 20 years. Can’t be as painful as a bunch of engineers turning wrenches by hand.

jose.amengual avatar
jose.amengual

You will see….

1
jose.amengual avatar
jose.amengual

Soon enough

Richy de la cuadra avatar
Richy de la cuadra
identifier of the CA certificate for the DB instance was added by fedemzcor · Pull Request #54 · cloudposse/terraform-aws-rds

new variable ca_cert_identifier default value for ca_cert_identifier is rds-ca-2019 ca_cert_identifier setting on rds instances “make” commands were executed to generate readme.md

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Richy de la cuadra we’ll review it ASAP

identifier of the CA certificate for the DB instance was added by fedemzcor · Pull Request #54 · cloudposse/terraform-aws-rds

new variable ca_cert_identifier default value for ca_cert_identifier is rds-ca-2019 ca_cert_identifier setting on rds instances “make” commands were executed to generate readme.md

Richy de la cuadra avatar
Richy de la cuadra

I did it with lots of love

2020-02-06

Rich Allen avatar
Rich Allen

Hi, probably a dumb question but I would like to check what I’m trying to build/fix is possible. I’m using the following module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn

We have a root aws account which manages the hosted zone example.com. I am trying to create a static site in a child organization at mysite.example.com. I’ve gone ahead and created a certificate from certificate manager in the child account. The root account has validated the certificate via DNS and I have verified that the child account has the certificate validated.

I have also set a route53 CNAME entry in the root account mysite.example.com -> ourCFDISTROID.cloudfront.net

I’m currently receiving an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Is what I’m trying to do going to work in aws? I’ve hit a wall and am not sure how to proceed.

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Rich Allen please share your module invocation terraform code

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

1
Rich Allen avatar
Rich Allen
module "examplecom" {
  source                   = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=0.20.0"
  namespace                = var.namespace
  stage                    = var.stage
  name                     = var.name
  origin_force_destroy     = false
  default_root_object      = "index.html"
  acm_certificate_arn      = var.acm_certificate_arn
  parent_zone_id           = var.parent_zone_id // this references a zone id outside of the child organization. The root org controls example.com
  cors_allowed_origins     = ["mysite.example.com"]
  cors_allowed_headers     = ["GET", "HEAD"]
  cors_allowed_methods     = ["GET", "HEAD"]
}
Rich Allen avatar
Rich Allen

for what it is worth, I now do not think this is an ssl issue. If you turn off redirects and navigate to http, I receive an origin access error.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Rich Allen did you request the certificate just for the parent domain, or for subdomains as well (*.example.com)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one of the possible reasons for ERR_SSL_VERSION_OR_CIPHER_MISMATCH is cert name mismatch

Rich Allen avatar
Rich Allen

just the sub domain, not the bare domain

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
How to Fix ERR_SSL_VERSION_OR_CIPHER_MISMATCH (Quick Steps)attachment image

The ERR_SSL_VERSION_OR_CIPHER_MISMATCH error is typically caused by problems with your SSL certificate or web server. Check out how to fix it.

Rich Allen avatar
Rich Allen

I will reprovision the cert using the bare domain + san

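A minimal sketch of that certificate request with the bare domain plus the sub-domain as a SAN (domain names follow the example above; the DNS validation records still have to land in whichever account owns the zone):

resource "aws_acm_certificate" "site" {
  domain_name               = "example.com"
  subject_alternative_names = ["mysite.example.com"]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}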
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The domain name alias is for a website whose name is different, but the alias was not included in the certificate
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you’re using a CNAME, this ourCFDISTROID.cloudfront.net should be included in the SANs as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make sure the CNAME is included in the aliases for the distribution, like here for example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L106

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, did you provision DNS zone delegation in the child account?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since mysite.example.com is in diff account, you need to have a Route53 zone for it in the child account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and add an NS record for the sub-domain in the root DNS zone pointing to the child zone’s name servers

Rich Allen avatar
Rich Allen

I did not, I provisioned mysite.example.com -> cfid.cloudfront.net in the root account

Rich Allen avatar
Rich Allen

it was my understanding that a hosted zone, was unique to an account, so are you saying I should have a hosted zone in both the root and child account?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and zone delegation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
What is DNS Delegation?

In an answer to my previous question I noticed these lines: It’s normally this last stage of delegation that is broken with most home user setups. They have gone through the process of buying …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

otherwise DNS resolution will not work

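A sketch of that delegation, assuming a provider alias aws.root for the root account and the names used in the example above:

# In the child account: the zone for the sub-domain
resource "aws_route53_zone" "child" {
  name = "mysite.example.com"
}

# In the root account: delegate the sub-domain to the child zone's name servers
resource "aws_route53_record" "delegation" {
  provider = aws.root
  zone_id  = var.parent_zone_id
  name     = "mysite.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.child.name_servers
}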
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I mean, you can provision everything (master zone, sub-domain zone) in the root account and it will work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you are using child accounts, you prob want to provision everything related to the sub-account in it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

might not be your case, just throwing out ideas

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so I think you need to check the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you provision the site/CDN in the child account, you need to have the certificate provisioned in the same child account and assigned to the CloudFront distribution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you have two certificates, in root and child accounts?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then the CNAME must be added to aliases for the distribution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so here is the thing: if you created the SSL cert only in the root account and created the sub-domain DNS record in the root account, then the CloudFront distribution URL ourCFDISTROID.cloudfront.net must be added to the SANs of the certificate

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module will not work cross-account, it will not create alias in the parent zone which is in diff account https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L254

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you have to set var.parent_zone_id = ""

Rich Allen avatar
Rich Allen

no I must have misspoken, the ssl cert is only provisioned in the child account. The dns validation record (ACME) was set on the root account.

Rich Allen avatar
Rich Allen

FYI appreciate the help here, I’m working through a few of these just running a bit behind with your advice haha!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anyway, you need to add the distribution URL to the SANs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and CNAME must be added to aliases for the distribution

Rich Allen avatar
Rich Allen

Okay so for now, I don’t have multi account dns resolution set up, and I think I would have to authorize and test that change a bit more as it affects my scope here. Knowing that is staying the same right now, it seems like I need to do the following: add a SAN entry in our certificate for the cf distribution. I must manually validate the ACME challenge, and then I must manually create the mysite.example.com CNAME CFDistro.cloudfront.net record. I should ignore the alias key (as that will not work cross account and I’m manually setting it for now until I can research multi-account dns resolution).

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s different for multi-account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

btw, you can set the alias to the CNAME since the cert is in the same account. As long as CloudFront sees the cert for sub-domain, it will allow you to add CNAME aliases to the distribution

johncblandii avatar
johncblandii

Hey folks, I’m starting to push out some videos around different devops/engineering topics. I’d love some feedback and even suggestions/requests for topics.

I’ll add links to the first few in this thread.

Igor avatar

Is there any way to work around the following errors in TF:

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.

or

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.

If TF knows the resourceA count, why wouldn’t I be able to then use length(resourceA) on resourceB count…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you think terraform knows the count because in your head you know how many instances you want. But TF is not as smart as you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is not a good way of dealing with that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in many cases, we ended up adding a new var count_of_xxx and explicitly providing the count to TF

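A minimal sketch of that workaround (the variable and resource names are hypothetical):

variable "private_subnet_count" {
  type        = number
  description = "Passed in explicitly because length(aws_subnet.private.*.id) is only known after apply"
}

resource "aws_eip" "nat" {
  # explicit count supplied by the caller instead of derived from another resource
  count = var.private_subnet_count
  vpc   = true
}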
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s relatively old, was created for TF 0.11. TF 0.12 is much smarter, but still can’t do it in all cases)

Igor avatar

Makes sense, I was grasping at straws here, though I think I knew the answer all along

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, in those cases we were not able to “fix” it, we ended up adding explicit count OR splitting the code into two (or more) folders and using remote state

Igor avatar

The annoying part with it, is sometimes it works when you add a new resource to the existing statefile, but then when you run from scratch, you hit this.

Igor avatar

Makes sense why that is, but it’d be good if TF at least gave a warning in those cases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We also used one other way of fixing it. If, for example, the count depends on some resource IDs, the IDs are not known before those are created

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Try to use names for example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or any other attributes that you provide

Igor avatar

If I understand correctly, I think you are referring to a different issue; the depends_on one.

Igor avatar

Very similar though in level of frustration;)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

No, if in the count expression you use resources IDs, those are not known before the resources are created

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

But let’s say you provide resources names to terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Those are known before the resources are created

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If you use names in the count expression, it might work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

But not always

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

obviously it will work if the names are in an input variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what I’m referring to is that you can reference ResourceA.name in the count, and it could work in some cases even before ResourceA is created since terraform could figure it out

Igor avatar

Oh I see what you mean

Igor avatar

Good tip

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# aws_organizations_account.default["prod"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "prod"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }

  # aws_organizations_account.default["staging"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "staging"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all those (known after apply) you can’t use in counts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

name - you can, and TF would figure it out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. count = length(aws_organizations_account.default.*.name) might work in some cases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

count = length(aws_organizations_account.default.*.id) will never work

Igor avatar

Brilliant, thanks

Gabe avatar

Hey everyone, does anyone have advice on how to best manage terraform with microservices? Do you use a monorepo for all of the terraform? Put the terraform with the service? Why did you decide that and how has it worked out?

jose.amengual avatar
jose.amengual

we use a repo + remote state for each microservice

jose.amengual avatar
jose.amengual

so we can independently change that service config without having to commit to a big repo

jose.amengual avatar
jose.amengual

repos are environment agnostic

Gabe avatar

thanks @jose.amengual, how do you handle changes that apply to every microservice?

jose.amengual avatar
jose.amengual

PR to the repo, review, and once approved, terraform apply

jose.amengual avatar
jose.amengual

you can use different methods to run terraform

Gabe avatar

does that become cumbersome when you have a lot of micro services? right now we have a monorepo with ~40 microservices… anytime we need to make a change that impacts all of them it is a huge PITA to plan and apply terraform everywhere

Gabe avatar

we use atlantis… but it ends up being 3 (environments) × 40 plans and applies

Gabe avatar

trying to see if there is a better way

jose.amengual avatar
jose.amengual

well we have 4, so it is not much for us

jose.amengual avatar
jose.amengual

I guess if you have one project that calls all the other microservices’ TF as modules you will end up having VERY LONG plan runs

jose.amengual avatar
jose.amengual

now I will argue that not every software deployment requires an infrastructure change

jose.amengual avatar
jose.amengual

but I do not know your needs

marcinw avatar
marcinw

I think with Terraform Cloud/Enterprise you can point workspaces to track individual folders, so if you have one workspace per microservice, a monorepo could work.

marcinw avatar
marcinw

It will soon be possible with Spacelift, though using a policy-based approach.

marcinw avatar
marcinw

BTW I’d probably rather avoid having a separate project for each microservice, and would try to group them by product area - ie. responsible org/team/tribe.

2020-02-07

Dhrumil Patel avatar
Dhrumil Patel

Hello guys, I am new to terraform and stuck on a problem creating an elastic beanstalk application using terraform, can you help me here? Here is my code:

Dhrumil Patel avatar
Dhrumil Patel

resource "aws_elastic_beanstalk_application" "default" {
  name        = var.application_name
  description = var.application_description
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${var.application_name}-v1"
  application = aws_elastic_beanstalk_application.default.name
  description = var.application_description
  bucket      = var.bucket_id
  key         = var.object_id
}

resource "aws_elastic_beanstalk_environment" "default" {
  depends_on          = [aws_elastic_beanstalk_application_version.default]
  name                = "${var.application_name}-env"
  application         = aws_elastic_beanstalk_application.default.name
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.5 running Python 3.6"
  version_label       = "${var.application_name}-v1"

  dynamic "setting" {
    for_each = { "ImageId" = var.ami, "InstanceType" = var.instance_type }

    content {
      namespace = "awslaunchconfiguration"
      name      = setting.key
      value     = setting.value
    }
  }
}

grv avatar

error message?

Dhrumil Patel avatar
Dhrumil Patel

here error message :

Dhrumil Patel avatar
Dhrumil Patel

Error: Error waiting for Elastic Beanstalk Environment (...) to become ready: 2 errors occurred:
* 2020-02-07 09:25:38.663 +0000 UTC (...) : Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
* 2020-02-07 09:25:38.781 +0000 UTC (..) : LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.

Dhrumil Patel avatar
Dhrumil Patel

I think the elastic Beanstalk environment can’t communicate with the instances.

Dhrumil Patel avatar
Dhrumil Patel

here creation log :

Dhrumil Patel avatar
Dhrumil Patel

2020-02-07 2221 UTC+0530 INFO Launched environment: TestApp-007-env. However, there were issues during launch. See event log for details.
2020-02-07 2219 UTC+0530 ERROR LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.
2020-02-07 2219 UTC+0530 ERROR Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
2020-02-07 2232 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 2232 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 2216 UTC+0530 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group named: ..
2020-02-07 2255 UTC+0530 INFO Adding instance .. to your environment.
2020-02-07 2255 UTC+0530 INFO Added EC2 instance .. to Auto Scaling Group ..
2020-02-07 2212 UTC+0530 INFO Created Auto Scaling launch configuration named: ..
2020-02-07 2212 UTC+0530 INFO Created security group named: ..
2020-02-07 2212 UTC+0530 INFO Created load balancer named: ..
2020-02-07 2256 UTC+0530 INFO Created security group named: ...
2020-02-07 2234 UTC+0530 INFO Using ... as Amazon S3 storage bucket for environment data.
2020-02-07 2233 UTC+0530 INFO createEnvironment is starting.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Dhrumil Patel avatar
Dhrumil Patel

Ok thanks

Igor avatar

Sidenote, I would recommend that you take out your AWS account IDs before posting outputs. It’s best to keep those secret.

1
Dhrumil Patel avatar
Dhrumil Patel

Ya I forgot about it thanks for reminding me.

Igor avatar

You should be able to edit them out

Dhrumil Patel avatar
Dhrumil Patel

Is there any thing wrong with this code ?

Dhrumil Patel avatar
Dhrumil Patel

I need to create the code as my internship assignment and my mentor told me that I can’t use public registry modules, that’s why I am asking.

Joe Niland avatar
Joe Niland

Your EC2 instance is not launching correctly and/or not in time. Check for problems in /var/log/eb-activity.log

You also want to increase the Command timeout or disable health checks while you’re investigating.

Dhrumil Patel avatar
Dhrumil Patel

Ok

Dhrumil Patel avatar
Dhrumil Patel

Problem solved. I was providing an AMI to the autoscaling group and that AMI was causing the problem. Instances spawned using that AMI can’t communicate with elastic beanstalk. When I didn’t provide an AMI in the elastic beanstalk environment it works perfectly fine. Don’t know why this is happening, any suggestions?

Joe Niland avatar
Joe Niland

Would need to see your error logs, however you must specify an AMI. I think your custom AMI may have a launch error.

Dhrumil Patel avatar
Dhrumil Patel

Actually I am not using a custom AMI, I am using one of the Ubuntu AMIs from the AMI store.

2020-02-10

Pierre-Yves avatar
Pierre-Yves

Hello, I am using terraform remote state and have to move one resource to its parent folder. I am trying to use terraform state mv to avoid recreating the resource.

# download the state file
terraform state pull > local_state.out

# change the state file
terraform state mv -state=local_state.out module.network.module.vpn1.azurerm_subnet.vpn1 module.network.azurerm_subnet.vpn1

Move "module.network.module.vpn1.azurerm_subnet.vpn1" to "module.network.azurerm_subnet.vpn1"
Successfully moved 1 object(s).

but then when I do terraform plan -state=local_state.out Terraform still wants to delete the resource I have moved

do you have any hint on how to achieve this move ?

maarten avatar
maarten

can you copy-paste the output of plan here ?

aaratn avatar

@Pierre-Yves you will need to upload the state again to remote backend

maarten avatar
maarten

he’s explicitly using a local state file local_state.out, so that’s not it.

aaratn avatar

well, the question initially says that he is using remote backend

aaratn avatar

I could be wrong, I will wait for his confirmation if the backend is local-file

maarten avatar
maarten

that’s not relevant, he posted his command line commands and he clearly pulls from remote to local file, and from that moment on uses the local state file terraform plan -state=local_state.out

aaratn avatar

not sure if he did partial init in that case

Pierre-Yves avatar
Pierre-Yves

yes, I have downloaded the remote state with terraform state pull to mv everything, and once the plan matches my need I want to upload it back with terraform state push, then plan again to be sure and apply

aaratn avatar

@Pierre-Yves did you terraform state push already before running terraform plan ?

Pierre-Yves avatar
Pierre-Yves

no I have specified -state=local_state.out

aaratn avatar

In order to use local state, you might need to do terraform init afaik

aaratn avatar

with local state

aaratn avatar

that will consider your local state instead of remote state

Pierre-Yves avatar
Pierre-Yves

mhh exactly, and since I am moving a module it might require it

1
Pierre-Yves avatar
Pierre-Yves

terraform init -backend-config="path=local_state.out" => The backend configuration argument “path” given on the command line is not expected for the selected backend type.

Pierre-Yves avatar
Pierre-Yves

seems better with explicitly having the file named terraform.tfstate

Pierre-Yves avatar
Pierre-Yves

so it seems terraform doesn’t like that I have a backend configured in the main.tf, even when specifying -state=localfile or init with a local terraform.tfstate file

If I want to work locally I had to remove the backend block, and terraform will ask to unconfigure and copy the current state to the local backend


terraform init
Initializing modules...

Initializing the backend...
Terraform has detected you're unconfiguring your previously set "azurerm" backend.
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "azurerm" backend to the
  newly configured "local" backend. No existing state was found in the newly
  configured "local" backend. Do you want to copy this state to the new "local"
  backend? Enter "yes" to copy and "no" to start with an empty state.
Pierre-Yves avatar
Pierre-Yves

as a summary, to move module resources on my laptop I had to:
• unconfigure the remote backend tfstate (by commenting out the backend block)
• run terraform init — terraform then proposes to copy the tfstate locally
• move the resource, try and plan
• re-add the backend block for remote state
• run terraform init and specify to copy back the state
thanks for your help @aaratn and @maarten

1
Pierre-Yves avatar
Pierre-Yves

this was only needed because moving a module’s resources requires a terraform init

aaratn avatar

and try to do plan

aaratn avatar

it should fix the issue

Igor avatar

Any suggested readings or words of wisdom for someone looking to get automated testing going for TF? We’re looking at terratest at the moment for the tool.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya that’s your best bet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Avoid testing things that terraform already covers in its own tests.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. creating a bucket results in a bucket. It’s safe to skip this kind of test

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

80/20 rule applied to testing terraform: You get 80% of the benefit and catch 80% of the problems by just running plan/apply/destroy. You have to spend 80% more effort to test the remaining 20%

3
1
loren avatar

i still think this presentation is really great for getting folks started… https://www.infoq.com/presentations/automated-testing-terraform-docker-packer

Automated Testing for Terraform, Docker, Packer, Kubernetes, and More

Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.

Pierre-Yves avatar
Pierre-Yves

I have looked as well at doing terraform tests and will experiment later with the tests from the terraform vscode extension, which can do lint and “end to end” tests. See the bottom of the page: https://docs.microsoft.com/en-us/azure/terraform/terraform-vscode-extension

Tutorial - Configure the Azure Terraform Visual Studio Code extension

Learn how to install and use the Azure Terraform extension in Visual Studio Code.

Igor avatar

Thanks, @Erik Osterman (Cloud Posse) @loren @Pierre-Yves

Pierre-Yves avatar
Pierre-Yves

the infoq video above also mentions “conftest” for gke: https://github.com/instrumenta/conftest/tree/master/examples/terraform

instrumenta/conftest

Write tests against structured configuration data using the Open Policy Agent Rego query language - instrumenta/conftest

Cloud Posse avatar
Cloud Posse
05:00:48 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Feb 19, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-02-11

Matt Gowie avatar
Matt Gowie

Anyone know of a way to use data.aws_ssm_parameter to pull a number of parameters given a path? I am trying to find a way to avoid supplying all the param names to my application through vars.

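One option, assuming an AWS provider version that includes the aws_ssm_parameters_by_path data source (the path is hypothetical):

data "aws_ssm_parameters_by_path" "app" {
  path            = "/myapp/prod"
  with_decryption = true
}

# names and values are parallel lists, so they can be zipped into a single map
locals {
  app_params = zipmap(
    data.aws_ssm_parameters_by_path.app.names,
    data.aws_ssm_parameters_by_path.app.values
  )
}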
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if any of your PRs for cloudposse repos are blocked in review, hit up our pal @Maxim Mironenko (Cloud Posse) to get help and speed up the review =)

Adam Crews avatar
Adam Crews

ugh, sorry about that, fixed and code pushed.

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

party_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@antonbabenko 11AM GMT is 6AM Toronto time, but that’s not going to stop 3 guys from my shop including myself from seeing your talk https://events.hashicorp.com/hashitalks2020

1
btai avatar

what kills efficiency the most working with terraform…. aws resource limits

1
aaratn avatar
awslabs/aws-limit-monitor

Customizable Lambda functions to proactively notify you when you are about to hit an AWS service limit. Requires Enterprise or Business level support to access Support API. - awslabs/aws-limit-monitor

Chris Fowles avatar
Chris Fowles

well if that’s your worst efficiency blocker i’d say you’re doing aye ok

Chris Fowles avatar
Chris Fowles

2020-02-12

johncblandii avatar
johncblandii

I didn’t see this posted yet, but TF Cloud is adding run triggers; in short, a way to build CI pipelines.

https://www.hashicorp.com/blog/creating-infrastructure-pipelines-with-terraform-cloud-run-triggers

Creating Infrastructure Pipelines With HashiCorp Terraform Cloud Run Triggers

Run triggers are useful anywhere you’d like to have distinct pieces of infrastructure automatically queue a run when a dependent piece of infrastructure is changed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s great

Chris Fowles avatar
Chris Fowles

that looks extremely useful

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Chris Fowles: @johncblandii does a live demo in our office hours today https://cloudposse.wistia.com/medias/g6p0zu4txy

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii you mentioned you had another video you recorded specifically demo’ing this functionality

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is that on youtube?

johncblandii avatar
johncblandii

YouTube is processing the 4K right now. Hopefully it’ll be done soon

johncblandii avatar
johncblandii

i’ll post when it is done

Chris Fowles avatar
Chris Fowles

awesome cheers

1
btai avatar

for those planning on using terraform cli workspaces with TFC (terraform cloud) because of @johncblandii’s awesome demo today, there is a tiny edge case caveat to getting it working in TFC. If you’re using the terraform.workspace value in your terraform code, that value will always be default in TFC so you won’t be able to use it to make logical decisions within your terraform code (I use it for naming conventions, tagging, environment/region specific scenarios). To work around this I’ve introduced a “workspace” variable (see pic) and you can set a local variable to workspace = "${var.workspace != "" ? var.workspace : terraform.workspace}"

The reason I am naming the variable workspace is so I can make minimal changes and it sounds like there is enough fuss from the community that this might not be an issue in the future.

More info here: https://github.com/hashicorp/terraform/issues/22131

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Consider writing instead:

workspace = coalesce(var.workspace, terraform.workspace)

https://www.terraform.io/docs/configuration/functions/coalesce.html

coalesce - Functions - Configuration Language - Terraform by HashiCorp

The coalesce function takes any number of arguments and returns the first one that isn’t null nor empty.

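Putting the two pieces together, a minimal sketch of the workaround (the empty-string default keeps plain CLI workspaces behaving as before):

variable "workspace" {
  type        = string
  default     = ""
  description = "Set in TFC, where terraform.workspace is always 'default'"
}

locals {
  # coalesce() skips the empty string, so the CLI falls back to terraform.workspace
  workspace = coalesce(var.workspace, terraform.workspace)
}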
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s annoying! Why the heck is terraform cloud overloading their own term for “workspace”, making it mean one thing in the SaaS and a subtly different thing in the terraform cli?

1
btai avatar

Tell me about it. They must’ve known it would cause a bunch of confusion

johncblandii avatar
johncblandii

TFC workspace is basically a project and locally you can pull in multiple projects to 1 code-base mapped to workspaces.

I completely forgot about this distinction until @btai brought it up.

2020-02-13

Gui Paiva avatar
Gui Paiva

Hey guys, I got a question about Terraform in AWS and its IAM role policy to create resources. At the moment I have attached the full admin policy to the role that terraform is using, but I was wondering if there is a simpler way so terraform can create resources (not only ec2, vpc, buckets, etc) and at the same time not be so open as full admin access.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Use the special category of AWS managed policies to support common job functions.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but you pretty much need to be power user or admin

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

once you start on the path to take away permissions, you will soon realize that you need most of them to be able to provision AWS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

unless you provision a specific set of resources, then you can create a specific role with those permissions and give it to the user or to terraform aws provider

Gui Paiva avatar
Gui Paiva

but then every time you need to create a new set of resources, you need to remember to update the policy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we use Admin permissions

Gui Paiva avatar
Gui Paiva

yeah, I always go for the admin permission

Gui Paiva avatar
Gui Paiva

quite tricky

Gui Paiva avatar
Gui Paiva

those policies from their doc are quite interesting, not just for terraform but for other users too

Gui Paiva avatar
Gui Paiva

I am going to have a second thought about it. This is not a company requirement but just trying figure out if there are better ways to manage permissions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My current opinion after having worked with this stuff for over a decade is that IAM is best suited for your services and their capabilities (controlling what they can do or what can access them), but is too low level for restricting who (human) can deploy what (resources). It’s hard to know exactly what policies you need as a human before you provision everything. Iterating and requesting permissions is a huge bottleneck. Instead the current best practice is to stick a VCS+CI/CD (gitops) pipeline in between the humans and the infrastructure. Get the humans out of provisioning stuff directly as much as possible to eliminate the need for fine grained access. Then use something like the Open Policy Agent to define the higher order policies that run in your pipelines, combined with a code review approval process.

Just to be clear, these concepts are all relatively recent developments for IaC, but are responses to all the problems associated with how “least privilege” failed us from a practical perspective to achieve organizational efficiency.

2
Gui Paiva avatar
Gui Paiva

I am already doing VCS+CI/CD, PRs, etc, which works great and also means not many people can actually do any harm as we have a process, BUT, as humans can make mistakes, someone, including myself, could mistakenly give access to Jenkins (both SSH or URL) using the wrong permissions, which may allow them to get admin access to all AWS accounts.

It is actually a complicated situation because you don’t want to have a bottleneck by having to always update the IAM policy to allow an extra action, but at the same time you want to avoid the risk of someone being able to do something they shouldn’t be doing.

jose.amengual avatar
jose.amengual

@Erik Osterman (Cloud Posse) do yo guys use Open Policy Agent?

jose.amengual avatar
jose.amengual

one thing we are struggling with now is the user management part and SSO in AWS, and how to better and more easily manage user/group policies through SSO or other means; it is a hot topic in our world right now

jose.amengual avatar
jose.amengual

a bit offtopic from this original thread

Gui Paiva avatar
Gui Paiva

I have had a look at it a while ago and it can get really complex… to be honest, my company does not have that many users that need to access AWS so I am not using SSO, but I did have a look at integrating with GSuite and it is not like a “next next finish” IMHO

jose.amengual avatar
jose.amengual

we have about 300 users

jose.amengual avatar
jose.amengual

some of them cross group boundaries or have multiple account access etc

jose.amengual avatar
jose.amengual

it gets pretty complicated

Gui Paiva avatar
Gui Paiva

I can imagine it can get really complex, not just the SSO part but the Security side of things

jose.amengual avatar
jose.amengual

exactly

Gui Paiva avatar
Gui Paiva

I was once at an AWS event and s security team from a company was there and they were talking about it

Gui Paiva avatar
Gui Paiva

similar to what you have said

Gui Paiva avatar
Gui Paiva

and it was so complex that even the AWS SA was lost

Gui Paiva avatar
Gui Paiva

because you have all the security requirements too.. not just users/sign in

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual (Just to be clear, these concepts are all relatively recent developments for IaC) we haven’t had a chance to adopt it yet, but this is what we’re planning on incorporating to our latest pipelines we’re developing for a customer

jose.amengual avatar
jose.amengual

it is incredible to be that there isn’t a simple solution for this yet, it is still a bit rough

jose.amengual avatar
jose.amengual

it’s a hard problem, but Active Directory solved it many many years ago

jose.amengual avatar
jose.amengual

their policies are incredibly granular, although there are a lot of clicks involved

Gui Paiva avatar
Gui Paiva

I wish there was a simpler way to integrate and manage users like AD, like you said

Gui Paiva avatar
Gui Paiva

better having clicks involved than googling for a solution that we never found

jose.amengual avatar
jose.amengual

hahaha lol

jose.amengual avatar
jose.amengual

very true

Gui Paiva avatar
Gui Paiva

AD is probably the best service MS has ever done

Gui Paiva avatar
Gui Paiva

user management and group/user policies work so, so well

jose.amengual avatar
jose.amengual

agree

Joe Hosteny avatar
Joe Hosteny

@Gui Paiva SAML provider from GSuite to AWS works nicely. The only downsides are that you seemingly can’t attach policies to groups, only users individually. In practice, this is not so big of an issue for us since we are working on deploying GSuite users via ansible anyway. Also, I haven’t been able to determine yet how to add multiple GSuite apps for AWS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


it is incredible to be that there isn’t a simple solution for this yet, it is still a bit rough

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual can you elaborate?

jose.amengual avatar
jose.amengual

Well if you look at the example of MS AD, they have had this for years: fine-grained policies, groups, identity, authentication and authorization

jose.amengual avatar
jose.amengual

what I’m talking about is basically an AWS solution that is easy to use, easy to understand, that solves the use cases people have, and that has good programmatic API access that is easy to program against

jose.amengual avatar
jose.amengual

IAM is far from being easy

jose.amengual avatar
jose.amengual

SSO in AWS is ok-ish, but then you have issues like not being able to attach policies to groups and such

jose.amengual avatar
jose.amengual

there is alway quirks

jose.amengual avatar
jose.amengual

and there are many SaaS products that try to solve this problem for you

jose.amengual avatar
jose.amengual

the fact that there are that many tells you that there is a need for something easier

jose.amengual avatar
jose.amengual

that is what I mean

2020-02-14

drexler avatar
drexler

Hello fellas, I have a question regarding CORS policies on S3 buckets. Is there a way of adding such policies to an existing S3 bucket via Terraform?

Igor Bronovskyi avatar
Igor Bronovskyi
resource "aws_s3_bucket" "bucket" {
  bucket_prefix = "project-name-"

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET"]
    allowed_origins = ["*"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }

  tags = {
    Name = "test"
  }
}
drexler avatar
drexler

won’t that destroy the existing bucket in order to recreate it with the cors policy?

Igor Bronovskyi avatar
Igor Bronovskyi

no

Igor Bronovskyi avatar
Igor Bronovskyi

if you created the bucket previously with terraform

drexler avatar
drexler

cool. let me test it out..

loren avatar

Also no if you import an existing bucket into the config

drexler avatar
drexler

@Igor Bronovskyi verified it. Thanks.

1
drexler avatar
drexler

@Igor Bronovskyi i spoke too soon. This toggles the application of the CORS policy to the bucket on every other terraform apply. Just discovered it after releasing the code

drexler avatar
drexler

so removes it and appends it ….

loren avatar

that would be a bug, either in the tf aws provider, or in your config…

drexler avatar
drexler

I don’t think I explained myself well. The problem was that the bucket was created as part of a collection of resources. Eventually I ended up issuing a PR to create a new resource to handle this issue. https://github.com/terraform-providers/terraform-provider-aws/pull/12141

Add aws_s3_bucket_cors_configuration resource by drexler · Pull Request #12141 · terraform-providers/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

sweetops avatar
sweetops

I have a list, with one object in it, but I need to perform some functions on it and i’m trying to see if I’ve got this right…

sweetops avatar
sweetops

aliases = [ lower(substr("${var.service}-${var.branch}.${var.stage}.${var.domain}", 0, 32)) ]

sweetops avatar
sweetops

or would I run lower(substr()) on the outside of [] ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t have a list it looks like, you are constructing the list. If that’s the case, then the syntax is OK, you get substring from a string and put it into a list

sweetops avatar
sweetops

correct. perfect, thanks!

sweetops avatar
sweetops

@Andriy Knysh (Cloud Posse) I just realized that my list totally wouldn’t work because i’d be truncating the end of the dns name. So…

sweetops avatar
sweetops

[lower(substr("${var.service}-${var.branch}", 0, 32))".${var.stage}.${var.domain}"]

sweetops avatar
sweetops

that hurts my head

sweetops avatar
sweetops

would that work?

sweetops avatar
sweetops

I think separating the quotes like that would make a list of two items, not one

sweetops avatar
sweetops

or maybe not, since there’s no comma

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make a local var in locals

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then use it to put into the list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

more readable

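A sketch of that locals approach, splitting the truncated host from the suffix (same variables as above):

locals {
  host    = lower(substr("${var.service}-${var.branch}", 0, 32))
  aliases = ["${local.host}.${var.stage}.${var.domain}"]
}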
sweetops avatar
sweetops

yeah, you’re right

sweetops avatar
sweetops

good call

Olivier avatar
Olivier

I am trying to use module: https://github.com/cloudposse/terraform-aws-elasticache-redis and I typed

 apply_immediately          = true

but it does not seem to be part of the resource, so when I changed the parameter, it did not apply immediately

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is a PR to fix that, and @Maxim Mironenko (Cloud Posse) is working on it

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

@Olivier fix is on the way, will let you know when ready

Olivier avatar
Olivier

thank you

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

@Olivier here it is: https://github.com/cloudposse/terraform-aws-elasticache-redis/releases/tag/0.16.0

Olivier avatar
Olivier

thanks

Brij S avatar

does anyone know how to use terraform-docs to automatically replace only the content in the readme which it generates (providers, inputs, outputs)? For example, if there is a title and some description text at the top, I wouldn’t want it to replace that part

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think @antonbabenko has a github pre-commit hook for this.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Usually this is done with some kind of markers like <!-- terraform-docs begin --> and <!-- terraform-docs end --> and then using a sed+regex to replace the content in between

hiding1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

2020-02-17

julien M. avatar
julien M.

Hello, I’m currently stuck on an ALB and EC2 problem: currently I have a front-end ALB and EC2s in a Target Group that are created via an ASG + LC. (all managed with terraform) currently when I launch a deployment with terraform it causes a downtime… because the old EC2’s are already in drain mode before the new EC2’s are marked “Healthy” and can therefore receive traffic.

I looked further and the new EC2’s are in “initial” mode (Target registration is in progress) for a few seconds until they are considered healthy… and since at the same time the old instances are in drain mode I get timeouts (503 or 503) if I make calls at this time to the

Is there a way to make the old EC2s go into “drain” mode ONLY when the new EC2s are in “Healthy” mode and not “initial”? This would allow the old EC2s to be able to handle the traffic while the new EC2s are OK for TargetGroup.

marcinw avatar
marcinw

I’m also thinking you could do this with lifecycle hooks - https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html - though you will need an external entity (eg. a Lambda) to control the action that’s taken.

Learn about lifecycle hooks for Amazon EC2 Auto Scaling.

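A minimal sketch of attaching such a hook in Terraform (the ASG resource name, hook name and timeout are assumptions; the Lambda or other consumer that completes the lifecycle action is not shown):

resource "aws_autoscaling_lifecycle_hook" "wait_before_drain" {
  name                   = "wait-before-drain"
  autoscaling_group_name = aws_autoscaling_group.web.name
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_TERMINATING"
  default_result         = "CONTINUE"
  heartbeat_timeout      = 300
}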
marcinw avatar
marcinw

One other thing I’d suggest looking at is https://aws.amazon.com/codedeploy/ - haven’t used it personally but it’s supposedly designed to handle deployments just like yours.

AWS CodeDeploy | Automated Software Deployment

AWS CodeDeploy is a service that fully automates code deployments for a fast, reliable software deployment process.

Cloud Posse avatar
Cloud Posse
05:01:03 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Feb 26, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-02-18

Brij S avatar

Hey all, the docs for API Gateway domain names (https://www.terraform.io/docs/providers/aws/r/api_gateway_domain_name.html) say that the argument for a regional domain is regional_certificate_arn and for edge it's certificate_arn. I'm in the middle of creating a custom domain module for internal use. Is there a way to conditionally select regional_certificate_arn vs certificate_arn?

AWS: aws_api_gateway_domain_name - Terraform by HashiCorp

Registers a custom domain name for use with AWS API Gateway.

Andrew Jeffree avatar
Andrew Jeffree

If you're writing a module you could do some conditional stuff around the endpoint configuration or lack thereof

Andrew Jeffree avatar
Andrew Jeffree

infact hmm

Andrew Jeffree avatar
Andrew Jeffree

no that won’t work I didn’t read the docs properly

Andrew Jeffree avatar
Andrew Jeffree

I guess you could have two resources in the module, each using count

Andrew Jeffree avatar
Andrew Jeffree

and have one that uses regional_certificate_arn and the other using just plain certificate_arn
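
A hedged sketch of that two-resource count approach (the domain_name, certificate_arn and endpoint_type variables are assumed to already exist in the module):

resource "aws_api_gateway_domain_name" "edge" {
  count           = var.endpoint_type == "EDGE" ? 1 : 0
  domain_name     = var.domain_name
  certificate_arn = var.certificate_arn

  endpoint_configuration {
    types = ["EDGE"]
  }
}

resource "aws_api_gateway_domain_name" "regional" {
  count                    = var.endpoint_type == "REGIONAL" ? 1 : 0
  domain_name              = var.domain_name
  regional_certificate_arn = var.certificate_arn

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

Downstream references then need to coalesce over the two splats, which is part of why the null approach discussed next ends up simpler.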

Chris Fowles avatar
Chris Fowles

you can pass null to unset a value

Chris Fowles avatar
Chris Fowles

so just pass null to the one you don’t want to use and use a couple of locals to work that out

Andrew Jeffree avatar
Andrew Jeffree

ah yeah good point

Brij S avatar

oh really? so in the following

resource "aws_api_gateway_domain_name" "domain" {
  certificate_arn = var.certificate_arn
  regional_certificate_arn = ""

I can set one of those to null based on some conditional? @Chris Fowles

Chris Fowles avatar
Chris Fowles

0.12+ supports an actual null keyword

Chris Fowles avatar
Chris Fowles
Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

Brij S avatar

@Chris Fowles, I attempted the following with no luck - any thoughts?

resource "aws_api_gateway_domain_name" "domain" {
  certificate_arn          = data.aws_api_gateway_rest_api.api.endpoint_configuration == "EDGE" ? var.certificate_arn : null
  regional_certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "REGIONAL" ? var.certificate_arn : null
  domain_name              = var.domain_name

  tags = var.tags
}

error:

Error: Error creating API Gateway Domain Name: BadRequestException: A certificate was not provided for the endpoint type EDGE.

Ryan avatar

Anyone here run into a case where you provision a module, remote state in S3, try to load that remote state in another module but TF can’t find it? Unable to find remote state. State file is where I expect it to be and other modules using remote state work fine. Terraform v0.12.18.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

could be many diff reasons, did you check workspace, workspace_key_prefix for terraform_remote_state? Also, if the state bucket is in diff account, might be wrong permissions (assume_role)

Ryan avatar

Well, this is really out of left field. I’ll check those things, looking through the remote config and the data sources to see if I have introduced a bug in there but:

• I don’t use workspaces

• State bucket is in the same account

• This same set of modules provisioned without issue in a different account

• I’m provisioning as an admin

• Other modules using remote state work fine; only modules using remote state from this specific module (my RDS module) are acting like the state file is non-existent

Ryan avatar

It will be the stupidest thing… always is.

Ryan avatar

Thanks for responding

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you check that subfolder in the AWS S3 console?

Ryan avatar

Yeah, and pulled the tfstate file. It is there, looks good. I can’t dig into this right now, have to get back to it in the morning. Might just tear down my sandbox and see if I can reproduce there first. thx.

creature avatar
creature

couple of tips:

From the source remote state: terraform output

From the state bucket that wants to consume the remote state: terraform console > data.terraform_remote_state.xxx.outputs

2020-02-19

Meb avatar
AWS CodeBuild support for EFS · Issue #11961 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

creature avatar
creature

anybody have any tool or project recommendations on querying terraform remote state outputs? Figured I’d ask before reinventing the wheel.

jose.amengual avatar
jose.amengual

why would you need to look at the outputs?

creature avatar
creature

to consume them for other shenanigans

jose.amengual avatar
jose.amengual

when you create stuff in TF then you can just do data lookup in your next TF

creature avatar
creature

I use remote state for TF related stuff. I want to consume those attributes for use with other tools / reporting. I know I can terraform output -json and parse from there, but thought maybe someone had already written something.

creature avatar
creature

what would be even better is a project that keeps outputs in an external store for querying. I believe Consul fills this gap, but we aren’t using it so maybe something more lightweight

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
data "terraform_remote_state" "eks" {
  backend   = "s3"
  workspace = terraform.workspace
  config = {
    bucket               = "my-state-bucket"
    workspace_key_prefix = "eks"
    key                  = "terraform.tfstate"
    region               = var.region
  }
}

data "terraform_remote_state" "dns" {
  backend   = "s3"
  workspace = terraform.workspace
  config = {
    bucket               = "my-state-bucket"
    workspace_key_prefix = "dns"
    key                  = "terraform.tfstate"
    region               = var.region
  }
}

locals {
  eks_cluster_identity_oidc_issuer = data.terraform_remote_state.eks.outputs.eks_cluster_identity_oidc_issuer
  zone_id                          = data.terraform_remote_state.dns.outputs.zone_id
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you are asking how to consume the remote state from other tools, not terraform ?

2
creature avatar
creature

yes

creature avatar
creature

I have a good grasp on remote state within TF (consuming upstream output, etc).

creature avatar
creature

what I'm looking at writing is a Python JSON parser that traverses all of my state buckets, grabs the outputs and stores them somewhere that I can leverage for other things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

other things != terraform?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(another consideration is SSM, if you’re on AWS)

creature avatar
creature

yep, things other than TF that need to programmatically access outputs. I think I’m good now, thanks for all the responses folks. Great Slack community here.

1
loren avatar

i think it will be somewhat dependent on that "somewhere"… e.g. i think you could just run terraform output -json | aws s3 cp s3://bucket/key -

loren avatar

do that in your tf pipeline, now they’re in s3

creature avatar
creature

yep, that’s pretty lightweight. good idea on the pipeline also. thx!

2020-02-20

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

@here need some advice on how I can move forward with my requirement

I need to deploy multiple Lambdas across multiple accounts.
I use federated login to access AWS.
Terraform is in GitHub.
I need to know if there is a way to deploy a module conditionally (probably on var.env = prod/stage/dev) instead of using multiple branches for each account.
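
In Terraform 0.12 a module block can't take count, so one common workaround (a hedged sketch; the module and variable names are hypothetical) is to pass an enabled flag into the module, gate every resource inside it on that flag, and drive the flag from var.env or a per-account tfvars file rather than from branches:

module "audit_lambda" {
  source  = "./modules/audit-lambda"             # hypothetical module path
  enabled = contains(["prod", "stage"], var.env) # or simply var.env == "prod"
  env     = var.env
}

# Inside modules/audit-lambda, every resource is gated on the flag:
resource "aws_lambda_function" "this" {
  count         = var.enabled ? 1 : 0
  function_name = "audit-${var.env}"
  role          = var.lambda_role_arn             # hypothetical inputs
  handler       = "index.handler"
  runtime       = "python3.8"
  filename      = var.package_path
}
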
Brij S avatar

so I attempted to do the following

data "aws_api_gateway_rest_api" "api" {
  name = var.api_name
}

resource "aws_api_gateway_domain_name" "domain" {
  certificate_arn          = data.aws_api_gateway_rest_api.api.endpoint_configuration == "EDGE" ? var.certificate_arn : null
  regional_certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "REGIONAL" ? var.certificate_arn : null
  domain_name              = var.domain_name

  tags = var.tags
}

but I get an error

Error: Error creating API Gateway Domain Name: BadRequestException: A certificate was not provided for the endpoint type EDGE.

any idea if this is possible using the null keyword? I am attempting to make a module which can create either a regional or an edge domain name
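
One thing worth checking (hedged, not confirmed in this thread): endpoint_configuration on that data source is a nested block, a list of objects with a types list inside, so comparing it directly with the string "EDGE" is never true and both ARNs end up null, which matches the error above. Indexing into types may be what's needed:

data "aws_api_gateway_rest_api" "api" {
  name = var.api_name
}

locals {
  # Pull the first endpoint type out of the nested structure
  endpoint_type = data.aws_api_gateway_rest_api.api.endpoint_configuration[0].types[0]
}

resource "aws_api_gateway_domain_name" "domain" {
  certificate_arn          = local.endpoint_type == "EDGE" ? var.certificate_arn : null
  regional_certificate_arn = local.endpoint_type == "REGIONAL" ? var.certificate_arn : null
  domain_name              = var.domain_name

  tags = var.tags
}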

grv avatar

Is anyone here using Cloud Posse's terraform-aws-rds module? I am running into a problem, though maybe a small one, that I have not been able to eliminate. Hoping someone can help?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue?

grv avatar

Is it possible to pass remote state values to the vpc_id variable in this module? I always get an Unsupported Argument error while trying to pass the VPC value to vpc_id

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
data "terraform_remote_state" "vpc" {
  backend   = "s3"
  workspace = terraform.workspace
  config = {
    bucket               = "my-tfstate-bucket"
    workspace_key_prefix = "vpc"
    key                  = "terraform.tfstate"
    region               = var.region
  }
}

locals {
  vpc_id             = data.terraform_remote_state.vpc.outputs.vpc_id
}
2
grv avatar

Thanks. I figured out issue was on my end only (knew it was something stupid)

Brij S avatar

can we use locals in outputs, the way we can use vars? e.g.:

value       = var.aws_env == "prod" ? aws_iam_role.asset_distribution_cross_account.*.arn : "valid only for prod"
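
One wrinkle worth noting (hedged, not from the thread): in 0.12 both arms of a conditional must be convertible to a common type, so mixing a splat list of ARNs with a plain string can fail; collapsing the splat with join keeps both arms as strings (assuming the role uses count of 0 or 1; the names here are hypothetical):

output "cross_account_role_arn" {
  # join("", ...) turns the zero- or one-element splat into a plain string
  value = var.aws_env == "prod" ? join("", aws_iam_role.cross_account.*.arn) : "valid only for prod"
}
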
loren avatar

yes. you can use any valid expression in an output value, https://www.terraform.io/docs/configuration/expressions.html

Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

2020-02-21

Todd Lyons avatar
Todd Lyons

I’m trying to have an index count when looping through an array. The application is trying to construct the XML for an AWS MQ config file. This is what I’m doing now:

  <networkConnectors>
    %{~ for item in element(local.mq_network_brokers, count.index) ~}
    <networkConnector name="connector" userName="commonUser" uri="static:(${item})"/>
    %{~ endfor ~}
  </networkConnectors>
Todd Lyons avatar
Todd Lyons

The issue is that name="connector" must be a unique label. If I could do the equivalent of i++ and then name="connector{{ i }}", it would solve my problem. But I'm struggling to find a way to have a counter in that loop.

Todd Lyons avatar
Todd Lyons

Any suggestions? Convert it to a map, use for_each, and use the key? maybe?
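
The template for directive accepts the same two-symbol form as a for expression, where the first symbol is the element index when iterating a list, so a counter comes for free (a hedged sketch over the same snippet):

  <networkConnectors>
    %{~ for i, item in element(local.mq_network_brokers, count.index) ~}
    <networkConnector name="connector${i}" userName="commonUser" uri="static:(${item})"/>
    %{~ endfor ~}
  </networkConnectors>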

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s welcome @scorebot into the mix!

scorebot avatar
scorebot
05:14:43 PM

@scorebot has joined the channel

scorebot avatar
scorebot
05:14:44 PM

Thanks for adding me emojis used in this channel are now worth points.

1
scorebot avatar
scorebot
05:14:46 PM

Wondering what I can do? try @scorebot help

ikar avatar

@scorebot help

scorebot avatar
scorebot
06:35:39 PM

You can ask me things like @scorebot my score - Shows your points @scorebot winning - Shows Leaderboard @scorebot medals - Shows all Slack reactions with values @scorebot = 40pts - Sets value of reaction

Matt Gowie avatar
Matt Gowie

Hey Cloudposse folks — It looks like https://github.com/cloudposse/terraform-aws-cloudfront-cdn hasn’t been updated in a while. Is that no longer supported or are you folks looking for people to take up the torch on PRs like https://github.com/cloudposse/terraform-aws-cloudfront-cdn/pull/29?

cloudposse/terraform-aws-cloudfront-cdn

Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn

Upgrade to 0.12 by geota · Pull Request #29 · cloudposse/terraform-aws-cloudfront-cdn

This is the remaining work to finish off rverma-nikiai fork in case. I submitted a PR to the upstream repo. Happy to close this and work through the existing PR if you prefer. I was able to get it …

Maxim Mironenko (Cloud Posse) avatar
Maxim Mironenko (Cloud Posse)

Hey @Matt Gowie! This repo is in my queue to convert to TF 0.12. It is not enough to just replace the syntax; we also have requirements to cover the module with tests. If you want to contribute it in a proper way (see example: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/45) you are most welcome! Give it a try, then ping me, so I can review and run tests. This will help us and speed up the process. Otherwise we all have to wait until it is done, along with many other modules in the queue

Matt Gowie avatar
Matt Gowie

@Maxim Mironenko (Cloud Posse) Got it — I’ll take a crack at it next week. Thanks for the info.

2020-02-22

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we tackled ~40 PRs from our community backlog this week

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s some 210+ in total and ~130 terraform related. It’ll take us another few weeks at this rate, but we’re getting there!

3

2020-02-23

Andrea avatar

Hi, we have different AWS accounts for dev/stage/production etc.. Can anyone recommend the best way to keep base (eg VPCs, subnets, security groups, K8s cluster etc) and application (RDS database, S3 buckets etc…) infrastructure, in sync across all the different AWS accounts/environments, please?

Andrea avatar

What I thought could be done for each item (eg subnets), is that subnet ranges (different per each account) would come from input variable files, and then the outputs would be fed into the application infrastructure. And so on for each item…

Andrea avatar

Can anyone please recommend whether this is achievable and/or it is the best way to go with IaC and Terraform?

2020-02-24

Andrea avatar

anyone?

Nikola Velkovski avatar
Nikola Velkovski

Hi Andrea, I think you are referring to remote states; yes, that's one way to do it. The best practice is to separate the fast-moving parts from the slow ones, e.g. you won't change the VPC regularly but you might change the machine types etc.

Nikola Velkovski avatar
Nikola Velkovski

usually what is done is to split the "parts" into folders (states) and to have some sort of hierarchy between them, e.g. the VPC needs to be applied first, ECS after the VPC, etc.

Nikola Velkovski avatar
Nikola Velkovski

Does this help ?

Andrea avatar

Hi, thanks. I was not talking about the remote state much (even though that is something I should do at some point, currently I commit everything to git)

Andrea avatar

so in the top directory I would have the VPC, subnets, security group etc…

Andrea avatar

while in subfolders specific resourses and/or applications

Andrea avatar

is there a command/script to make sure other colleagues will respect the hierarchy too?

Ognen Mitev avatar
Ognen Mitev

For this purpose and many others we have created a Docker image. I will just share some parts of the README file that we are using in my team:

Introduction

To keep the infrastructure operational we use some tools to provide changes to the infrastructure and manage the settings.

The development environment will make sure that you don’t need to install all the tools we need, but just the ones you already use:

  • Docker
  • Docker-Compose

We are using ASDF in a Docker Container to get a fully operational development environment with all the tools needed.

Getting started

$ ./runtask.sh init # to initialize the development environment
Building infrastructure-console
...
Successfully tagged aws_infrastructure-console:latest
$ ./runtask.sh console # to get a console on the development environment
asdf@a8da505fb2ab$
asdf@a8da505fb2ab$ packer version
Packer v1.1.3
asdf@a8da505fb2ab$ terraform version
Terraform v0.11.2

All necessary folders are linked as docker volumes into the running docker container via docker-compose.

Note: New folders in the projects main folder need to be added in the docker-compose.yml and require a fresh console in order to be available in the docker container.

Ognen Mitev avatar
Ognen Mitev

We basically use this docker image all the time for doing anything in the terraform and in general with the infra.

Ognen Mitev avatar
Ognen Mitev

In this case everyone is using the same hierarchy, same version tools, etc…

Andrea avatar

Hi @Ognen Mitev, thanks. We do something similar but not specific to Terraform only. Is this image available online anywhere? so that I can take a look…

Ognen Mitev avatar
Ognen Mitev

Hi Andrea, we also do not have it for Terraform only, but rather for everything that we use. https://hub.docker.com/r/zeppelinlab/ops_shell

Nikola Velkovski avatar
Nikola Velkovski

So the hierarchy should be imposed via Git, e.g. PRs, README.md and a predefined folder structure.

Nikola Velkovski avatar
Nikola Velkovski

in a team you should also have “remote state backend”

Nikola Velkovski avatar
Nikola Velkovski

and do not check-in the state in SVN

Andrea avatar

ok about remote state and git/svn commits

Andrea avatar

how do you impose the hierarchy with git though?

Andrea avatar

also, and possibly lastly, what do you do when you have multiple AWS accounts/environment?

Nikola Velkovski avatar
Nikola Velkovski


how do you impose the hierarchy with git though

Nikola Velkovski avatar
Nikola Velkovski

You push the first commit with the hierarchy you want.

Andrea avatar

obviously you don't want to copy and paste the whole folder/file hierarchy… per dev/test/prod etc…

Nikola Velkovski avatar
Nikola Velkovski

you can also create a repo template ( in the case of github)

Andrea avatar

oh I see (regarding the git commits)

Andrea avatar
Provider: Template - Terraform by HashiCorp

The Template provider is used to template strings for other Terraform resources.

Nikola Velkovski avatar
Nikola Velkovski

No, that's something else

Nikola Velkovski avatar
Nikola Velkovski


also, and possibly lastly, what do you do when you have multiple AWS accounts/environment?
You could do folders per env, combined with workspaces, or workspaces with multiple providers

Nikola Velkovski avatar
Nikola Velkovski

or just folders for everything

Andrea avatar

a lot of food for thought! Thank you @Nikola Velkovski! I’ll investigate all of those…

Karoline Pauls avatar
Karoline Pauls
> [ for k, v in [[1, 11], [2, 22], [3, 33]] : [k, v] ]
[
  [
    0,
    [
      1,
      11,
    ],
  ],
  [
    1,
    [
      2,
      22,
    ],
  ],
  [
    2,
    [
      3,
      33,
    ],
  ],
]

why do terraform language designers always pick the least intuitive behaviour for an idiom?

Karoline Pauls avatar
Karoline Pauls

real question, is it possible to zip 2 lists into a list of 2-tuples (like in Python)? Or do i have to use range and list1[count] list2[count] ?

Karoline Pauls avatar
Karoline Pauls

(Or a helper local variable that uses range and count to pack 2 lists into a list of maps)
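
Short of a built-in zip, a for over range gets there; zipmap is the alternative when a map keyed by the first list is acceptable (a hedged sketch):

locals {
  list1 = [1, 2, 3]
  list2 = [11, 22, 33]

  # Python-style zip: a list of 2-tuples
  zipped = [for i in range(length(local.list1)) : [local.list1[i], local.list2[i]]]
  # => [[1, 11], [2, 22], [3, 33]]

  # Or a map, when the keys from list1 are unique
  zipped_map = zipmap(local.list1, local.list2)
}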

Cloud Posse avatar
Cloud Posse
05:00:39 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Mar 04, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Francisco Montada avatar
Francisco Montada

Hi team, I started using the cloudposse/terraform-aws-cloudfront-s3-cdn module but I am getting this error on https://gapcommerce.com/

Francisco Montada avatar
Francisco Montada

ERROR Failed to contact the origin.

Generated Mon, 24 Feb 2020 17:21:23 GMT
Request ID: _Gnok9D5N1_Cw1Lu7Ld44vXH78Pwj-l26vxQyWbUC6GzDIhMZxhp0w==
Francisco Montada avatar
Francisco Montada

namespace                = local.workspace["namespace"]
stage                    = local.workspace["stage"]
name                     = local.workspace["name"]
aliases                  = ["gapcommerce.com", "www.gapcommerce.com"]
use_regional_s3_endpoint = true
origin_force_destroy     = true
cors_allowed_headers     = ["*"]
cors_allowed_methods     = ["GET", "HEAD", "PUT"]
cors_allowed_origins     = ["*.gapcommerce.com"]
cors_expose_headers      = ["ETag"]
compress                 = true

Francisco Montada avatar
Francisco Montada

this my config

Francisco Montada avatar
Francisco Montada

I am not sure what I am doing wrong

Francisco Montada avatar
Francisco Montada

Hi @MattyB

MattyB avatar

I don’t have long but IIRC this allows you to reference your gapcommerce site like a CDN through CloudFront. Give me just a minute and I’ll try to find the modules you’re looking for

MattyB avatar
cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

MattyB avatar

What's the end goal for what you're trying to do with the CloudPosse modules? That'll help the community properly figure out what to suggest

Francisco Montada avatar
Francisco Montada

@MattyB We are trying to use the module to automate provisioning of a CloudFront + S3 website deployment; gapcommerce.com is our company marketing website

Francisco Montada avatar
Francisco Montada
Francisco Montada avatar
Francisco Montada

We are not sure what we are doing wrong https://gapcommerce.com/

MattyB avatar

i just hit an issue with this a few weeks ago where the CORS rules on the s3 bucket were configured incorrectly. Can you post them here?

Francisco Montada avatar
Francisco Montada
MattyB avatar

in AWS console -> s3 bucket -> permissions -> CORS Configuration

MattyB avatar

do you see multiple AllowedOrigins?

Francisco Montada avatar
Francisco Montada
MattyB avatar

see the documentation link at the bottom? check it out -> lets turn this into a thread

MattyB avatar

@Francisco Montada

Francisco Montada avatar
Francisco Montada

@MattyB which btn ?

MattyB avatar

In Amazon S3, define a way for client web applications that are loaded in one domain to interact with resources in a different domain.

Francisco Montada avatar
Francisco Montada

I cannot have multiple origins?

MattyB avatar

If you check out the documentation it suggests that you need multiple CORSRules to do what you want to do. I think this is a bug in the CloudPosse module. Let’s try something out. Delete lines 5 & 6 so there’s only 1 origin - *.gapcommerce.com

Francisco Montada avatar
Francisco Montada

ok

MattyB avatar

now go to cloudfront and invalidate your cache (you can do this 1000 times per month before they charge you)

Francisco Montada avatar
Francisco Montada

done

Francisco Montada avatar
Francisco Montada

ok

Francisco Montada avatar
Francisco Montada
MattyB avatar

yep

Francisco Montada avatar
Francisco Montada

did not help

Francisco Montada avatar
Francisco Montada

still showing Failed to contact the origin.

MattyB avatar

can you access your static assets using the cloudfront link?

Francisco Montada avatar
Francisco Montada

let me chck

Francisco Montada avatar
Francisco Montada

same

MattyB avatar

let’s go to DM if you don’t mind

MattyB avatar

what’s the last part of your ‘origin domain name and path’ in cloudfront?

MattyB avatar

i had to set bucket_domain_format = "%s.s3.${var.region}.amazonaws.com"

Francisco Montada avatar
Francisco Montada

yes it is

Francisco Montada avatar
Francisco Montada

I added -website- and did not work

Francisco Montada avatar
Francisco Montada

@MattyB I noticed my s3 endpoint has -website- on it

Sreekumar avatar
Sreekumar

Hi team, I am trying out AWS Elastic Beanstalk using Terraform. I saw the Git repository https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment, but it references a module for the Beanstalk env creation. Where can I get the module code as well?

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is what the terratests are based off of

Willie avatar

Is it not possible to enable AWS API Gateway logging with just Terraform? I'm getting "CloudWatch Logs role ARN must be set in account settings to enable logging" when trying to set logging_level in an aws_api_gateway_method_settings resource. The articles I've found on this suggest you need to paste a role ARN into the web console.
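
It should be doable in Terraform alone (a hedged sketch, not confirmed in this thread): the account-level CloudWatch role the error refers to can be set with the aws_api_gateway_account resource, pointing at an IAM role API Gateway can assume (the role name is hypothetical):

resource "aws_iam_role" "apigw_cloudwatch" {
  name = "api-gateway-cloudwatch"   # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "apigateway.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "apigw_cloudwatch" {
  role       = aws_iam_role.apigw_cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}

# The account-level setting the console would otherwise make you paste in
resource "aws_api_gateway_account" "this" {
  cloudwatch_role_arn = aws_iam_role.apigw_cloudwatch.arn
}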

Sean Turner avatar
Sean Turner

Hey all, how would one deploy to multiple aws accounts at the same time? Any way to replicate the cloudformation stackset functionality?

loren avatar

terraform did this loooooong before cloudformation. create a provider block for each account using aliases, and pass the provider alias to the resource/module

loren avatar
Providers - Configuration Language - Terraform by HashiCorp

Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.
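
A hedged sketch of that alias pattern (account IDs, role names and the module path are hypothetical):

provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform"
  }
}

provider "aws" {
  alias  = "staging"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform"
  }
}

# Per resource...
resource "aws_s3_bucket" "prod_logs" {
  provider = aws.prod
  bucket   = "example-prod-logs"
}

# ...or per module, so a single apply plans against every account
module "baseline_staging" {
  source = "./modules/baseline"

  providers = {
    aws = aws.staging
  }
}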

Sean Turner avatar
Sean Turner

Yeah, saw this. So one terraform apply would generate a diff for n number of accounts?

Arthur Burkart avatar
Arthur Burkart

well, it’s not as easy as defining a provider and it just working. You’d need to either declare a module per provider or declare resources that consume specific providers. This repo implements it about as cleanly as it gets.

https://github.com/nozaq/terraform-aws-secure-baseline/blob/2276b6db68b055bfb7c94023fe293e22c21cd19d/config_baselines.tf

nozaq/terraform-aws-secure-baseline

Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline

Sean Turner avatar
Sean Turner

This is cool. Thanks!

2020-02-25

Milosb avatar

Guys, I am trying to output a huge JSON with local-exec. It complains about the argument list being too long. Is there any smooth way to get around this issue?

Karoline Pauls avatar
Karoline Pauls

is it an OS-level limit?

Milosb avatar

Yes

Karoline Pauls avatar
Karoline Pauls
echo "abc

...
"

is the same as

cat <<EOF
abc

...
EOF
Karoline Pauls avatar
Karoline Pauls

except the latter is a heredoc which acts as a (very slow) way to do standard input

Karoline Pauls avatar
Karoline Pauls

BTW, do you know what receives the “too long” argument list? The shell (local exec running sh -c) or echo ?

Milosb avatar
Karoline Pauls avatar
Karoline Pauls

yeah, sh

Karoline Pauls avatar
Karoline Pauls

isn't it possible to render the local file as a resource in the state rather than writing it with a local-exec command?

Milosb avatar

Well I need that file for other purpose

Karoline Pauls avatar
Karoline Pauls
terraform-aws-modules/terraform-aws-eks

A Terraform module to create an Elastic Kubernetes (EKS) cluster and associated worker instances on AWS. - terraform-aws-modules/terraform-aws-eks

Milosb avatar

it's a Swagger document that I use to import into API Gateway with customized stuff, but at the same time I need to manipulate that file to use it in Swagger online.

Milosb avatar

so its kinda tricky

Karoline Pauls avatar
Karoline Pauls

even if you cannot directly use resource "local_file" , you can write it with local_file, and then copy it with local-exec

2
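
A hedged sketch of that combination (the local and the paths are hypothetical):

# Render the (potentially huge) JSON to disk without it ever hitting argv
resource "local_file" "swagger" {
  content  = local.swagger_json                  # hypothetical local holding the document
  filename = "${path.module}/swagger.json"
}

# Any further shuffling can reference the file path instead of the content
resource "null_resource" "publish_swagger" {
  triggers = {
    swagger_sha = sha256(local_file.swagger.content)
  }

  provisioner "local-exec" {
    command = "cp ${local_file.swagger.filename} ./dist/swagger.json"
  }
}
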
Milosb avatar

let me try local_file

Milosb avatar

thanks

Milosb avatar

it worked as a charm

Milosb avatar

thanks so much

Milosb avatar

that's all I needed. I tried with file, and it didn't work

Dragos Andronache avatar
Dragos Andronache

Hi all! I have more of a general-usage TF question. We are starting to move to IaC using mostly Terraform Registry modules as the root ones. The way we configured it: from the Terraform Registry GitHub (i.e. the verified vpc module) we create a wrapper in a private Bitbucket registry that we own; the wrapper basically mirrors the module's structure, sources the Terraform Registry module pinned to a version, and points all arguments at defaulted vars. We make the final call to the wrapped module from our infra repo, which configures our infrastructure by filling in the desired arguments. We have the following request: the VPC registry module has predefined subnets and routes (i.e. database, redshift, elasticache etc.). We would like to add new subnet and route arguments (i.e. elasticsearch, awx etc.) to the wrapped module so they are available to all of our reusable configuration. Can you please let me know how we can achieve this? Thank you in advance for your answers!

Andrea avatar

Correct me if I'm wrong, but it seems like Terraform Workspaces is not part of the open-source version

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Confusingly, the way "workspaces" are defined in Terraform Cloud & Enterprise does not map 1:1 to the workspaces in the Terraform CLI (e.g. terraform workspace select dev)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@johncblandii can shed some more light on the differences; also see our #office-hours session from 2 weeks ago

1
Andrea avatar

thanks @Erik Osterman (Cloud Posse), I’ve started watching it…

Andrea avatar

the presenter does not seem in super favour of TF cloud for the first couple of minutes…

Andrea avatar

that’s fine by me, as I don’t think it’s an option for me at this stage

Sebastian Stadil avatar
Sebastian Stadil

Why is tf cloud not an option for you at this stage?

Andrea avatar

but I’m happy to learn more about it

Andrea avatar

what about Terragrunt? is that an option commonly used for managing multiple layers (eg VPC, subnets all the way down to the application)

Andrea avatar

for multiple environments too (QA, stage, prod etc)

Andrea avatar

oh, a guest mentioned terragrunt too, let’s wait and see what they say..

johncblandii avatar
johncblandii

Yeah, so a TFC workspace is one Terraform workspace, but locally you can use multiple remote TFC workspaces as TF workspaces

johncblandii avatar
johncblandii

Example: project-dev project-uat project-prod

^ TFC has them split as such. You target prefix = "project-" in your local backend config and you’ll have locally:

$ terraform workspace list
dev
uat
prod
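
A hedged sketch of that backend block (organization and prefix are hypothetical):

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      # Matches project-dev, project-uat, project-prod in TFC and exposes
      # them locally as the dev / uat / prod CLI workspaces
      prefix = "project-"
    }
  }
}
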
Andrea avatar

ok, so workspaces in TFC and the OSS version are different things

Andrea avatar

is there some docs on how to get started with the OSS workspaces?

Andrea avatar

which requires TFC…

creature avatar
creature

I’m a fan of Terragrunt, fwiw. It solves a lot of the same problems as workspaces, and seems to be working for us so far. Keeps everything nice and DRY. (see infrastructure modules)

Andrea avatar

OK, +1 for terragrunt, thanks for your input

johncblandii avatar
johncblandii
State: Workspaces - Terraform by HashiCorp

Workspaces allow the use of multiple states with a single configuration directory.

Brij S avatar

I’m currently uploading files to a bucket as follows

resource "aws_s3_bucket_object" "object" {
  for_each      = fileset(var.directory, "**")
  ....
}

does anyone know of a clever way to output the ids of the files uploaded?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
output "objects" {
  value = fileset(var.directory, "**")
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is that not sufficient?
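
If it's the resulting object IDs that are needed rather than the file names, a for expression over the for_each instances should also work (hedged sketch):

output "object_ids" {
  # aws_s3_bucket_object.object is a map keyed by each fileset() path,
  # so map every key to its object's id (the key of the object in the bucket)
  value = { for path, obj in aws_s3_bucket_object.object : path => obj.id }
}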

Brij S avatar

let me try that and let you know!

Brij S avatar

first time using for_each

2020-02-26

Laurynas avatar
Laurynas

Hi, do you think it is a better idea to get vpc_id in a different Terraform stack using the output of the VPC stack, or simply by referencing a data source and finding the VPC by its ID?

Pierre Humberdroz avatar
Pierre Humberdroz

@Laurynas you can access the state of the different deployment

Igor Bronovskyi avatar
Igor Bronovskyi

https://take.ms/cPTqM How to teach the editor to understand the syntax 0.12.x?

File "screencast 2020-02-26 14-13-47.mp4"

Monosnap — the best tool to take & share your screenshots.

Nikola Velkovski avatar
Nikola Velkovski

Which Editor @Igor Bronovskyi ?

Igor Bronovskyi avatar
Igor Bronovskyi

vscode

Nikola Velkovski avatar
Nikola Velkovski

it depends on the plugin. I am using neovim and it works perfectly.

Nikola Velkovski avatar
Nikola Velkovski

do you have the terraform plugin installed?

Igor Bronovskyi avatar
Igor Bronovskyi

neovim for vscode?

Nikola Velkovski avatar
Nikola Velkovski

vscode

creature avatar
creature

which terraform extension do you guys use with vscode? I’m using mauve.terraform and it’s fine, but everything for .12 doesn’t work perfectly. The official extension is garbage. Hoping someone has a recommendation.

Nikola Velkovski avatar
Nikola Velkovski

I’d say patch your editor to run terraform fmt upon save of a file that has a .tf extension. That should cover 80% of the issues: )

creature avatar
creature

my main issues with this extension are around syntax highlighting. After certain blocks of code, it just stops working. IntelliSense and sniffing out errors before running would also be helpful. I like Terraform, but the user experience makes it feel very unfinished / unpolished.

Nikola Velkovski avatar
Nikola Velkovski

I am using neovim and it's working just fine; what I do miss though is the auto-detection of missing variables, like the plugin for IntelliJ, but IDK if it works with 0.12 as it's supposed to.

1
loren avatar

i remember when such things didn’t work for terraform <=0.11. give it some time, these are community-managed plugins, not hashicorp managed. 0.12 was a big shift for hcl. it will get there

2
loren avatar

here’s a trick for supporting both 0.11 and 0.12 in vscode on a project-by-project basis. use the old plugin for 0.11 and the new language server for 0.12… https://github.com/mauve/vscode-terraform/issues/157#issuecomment-587125278

loren avatar

the language server is definitely the way to go for 0.12 though

creature avatar
creature

let me enable that language server…haven’t done that

creature avatar
creature

thx thx

Igor avatar

I have switched to using IntelliJ TF module. I use it just for TF and nothing else. It’s great for 0.12

Igor Bronovskyi avatar
Igor Bronovskyi

I installed the forked Terraform extension for VSCode, and I'm satisfied

1
RB avatar

anyone here use private terraform modules at work? do you leverage terraform registry at all? would love to hear how you have it setup

Kendall Link avatar
Kendall Link

Super interested in this question as well. I’ve just started on my terraform journey and only have a short time to try to come up with a proof of concept. I’ve written and copied a lot of example code just to get a single working vpc up and running with both public / private subnets spanning multiple AZ’s. I keep coming back to this idea that I am spending un-necessary cycles re-inventing the wheel and could simply use the ones available in the terraform registry.

RB avatar

right now, we use some from the registry and it’s definitely a good idea. either official modules or cloudposse modules seem to be useful

2
RB avatar

the registry is great. im more curious about private modules that leverage the registry. so then devs would use the private modules with company centric defaults that then use the registry. does that make sense?

creature avatar
creature

@RB - if we find a module that we want to modify, we clone it, (check the license), remove .git and create a file that references the original module. Cloudposse is a great place to start, but if your requirements differ a lot, you’ll need to fork or clone.

1
creature avatar
creature

just keep in mind, forking is not a copy, so your infrastructure depends on another Github org or the registry

Joe Hosteny avatar
Joe Hosteny

We just reference our own private modules from a git tag. I definitely would love to have a true private artifact repository for terraform modules. We use Artifactory - I’m hoping https://www.jfrog.com/jira/browse/RTFACT-16117 gets traction.

RB avatar

so does each developer service use the private modules? do devs contribute to private modules? also how do you update all terraform that depends on your modules?

RB avatar

@Joe Hosteny there are some good alternatives at the bottom of that ticket. i was looking into terraform-aws-tf-registry

Joe Hosteny avatar
Joe Hosteny

We aim to have private modules only near the root of the dependency graph. If we need something changed in a dependent module, we’ve been contributing those back to OSS modules. While PRs are open, we fork all the way down the graph through to the dependency, and run with the forks that point to our module.

Joe Hosteny avatar
Joe Hosteny

Not sure if that addresses your question. We have moved to using CloudPosse as much as possible, for standardization. The init-from-module works fine with the private repo as well.

RB avatar

interesting. but when the parent module, in this case your fork or cloudposse, are updated, do you have to manually apply all the terraform that depends on those modules? is it automated?

creature avatar
creature

manual, which is probably a good thing with most of our modules because it allows for a quick code review when you copy / merge. We have quite a bit of mods to some of these.

creature avatar
creature

we plan on automating the detection of updates soon. something like: if tag on upstream project is greater than X, throw an alert or log or something.

Joe Hosteny avatar
Joe Hosteny

During development, I generally reference the modules via filesystem path, so if I update any of them in the hierarchy at any point, I can just run make reset && make deps at the geodesic container for the desired AWS account, and all dependencies will get updated. Otherwise, yeah, you have to tag the intermediate repos and push the references up to the root.

Joe Hosteny avatar
Joe Hosteny

I don’t think it’s any worse than any other dependency chain, TBH. If I have multiple logical changes in a dependency, I’ll do the PRs in multiple branches, then create a local branch that merges both of those (only locally) so I can get all of them for test at a single time in the local filesystem.

Joe Hosteny avatar
Joe Hosteny

I did this pretty extensively for the CP ECS web app and its dependencies while we made a module for concourse

wattiez.morgan avatar
wattiez.morgan

We use private modules without registry, just pointing source to git refs (tags), no need for more yet… Didn’t investigate any registry

johncblandii avatar
johncblandii

I used Terraform Cloud for our private TF modules and it worked great

RB avatar

side question: because it's difficult to tell whether an AWS console resource is managed by Terraform… i was thinking about creating a terraform tag, but instead of a boolean it could have the value of the git remote. thoughts on this? and thoughts on how to retrieve this, or can it only be done using a null-resource / local-exec https://stackoverflow.com/a/49425731 ?

loren avatar

i’d been thinking about something similar lately. maybe a tag for the source location and for the tfstate location

Adam Crews avatar
Adam Crews

We leverage many of the Cloudposse modules, and pin most of our stuff around their label module: https://github.com/cloudposse/terraform-null-label When looking at the console it is very easy to see things that were named with the output from that module. We then pair that with tags on all resources that support tags, and it becomes very clear how a resource was created and how to find the source in our tf codebase.

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
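
A hedged sketch of that pairing (the pinned ref, names and tag values are hypothetical):

module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace = "acme"
  stage     = "prod"
  name      = "api"

  tags = {
    ManagedBy = "Terraform"
    Repo      = "git@github.com:acme/infrastructure.git"
  }
}

resource "aws_sqs_queue" "api" {
  name = module.label.id    # e.g. "acme-prod-api"
  tags = module.label.tags  # Name/Namespace/Stage tags merged with the extras above
}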

Meb avatar

What is the best practice in case of disaster for restoring your RDS base setup using Terraform? The current RDS restore from snapshot requires a new base. Could you recreate your DB from your snapshot? And then when you set snapshot_identifier back to null it will destroy the DB.

Chris Fowles avatar
Chris Fowles

disaster recovery should be treated as an abnormal event - i.e. don't try to bake it (at least the recovery bit) into your infrastructure provisioning. you should be following a careful process with checks along the way during a disaster situation so as to avoid making the situation worse.

Chris Fowles avatar
Chris Fowles

you probably want to consider freezing all automation around a disaster event, as the system is in an unexpected state and you don’t want automation to propagate or exacerbate the issue

Meb avatar

but how do you get the restored DB back into the new Terraform state?

Chris Fowles avatar
Chris Fowles

terraform import to bring things back into statefile management

grv avatar

getting 404 on it

RB avatar
cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

grv avatar

Yep, was just wondering why its given in the readme section of this - https://github.com/cloudposse/terraform-aws-elasticache-redis

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

grv avatar

in the Examples tab

RB avatar

duuude put in a pr

1
1
grv avatar

hehe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Maxim Mironenko (Cloud Posse) is reviewing PRs like mad right now, so if you submit it, he’ll get to it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Btw, just created #pr-reviews so we can track requests better

1

2020-02-27

Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

Hi, does anyone have experience with this module: https://github.com/cloudposse/terraform-aws-eks-cluster?

I have issues while using two “worker groups”.

When I add a second “worker group” the networks gets crazy:

• on some pods I can’t resolve DNS records

• on some pods I don’t have network connectivity

• on EC2 nodes the network and DNS are fine

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

kskewes avatar
kskewes

We have 4 worker groups, 1 per AZ plus a special one. Are all the subnets tagged? Think both with cluster and shared. All security groups allowing all others?

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

kskewes avatar
kskewes

You could stand up a Debian pod in each node in each AZ and work out the combo

Michał Czeraszkiewicz avatar
Michał Czeraszkiewicz

We have the following tags:

  private_subnet_tags = {
    "kubernetes.io/cluster/eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb"   = true
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/eks-cluster" = "shared"
  }

  vpc_tags = {
    "kubernetes.io/cluster/eks-cluster" = "shared"
  }

2020-02-28

RB avatar

anyone have luck with getting tflint to work with tf12 ?

RB avatar

recently compartmentalized some terraform into its own module and had to do over 20 import statements. wrote this up into a gist to make it faster.

https://gist.github.com/nitrocode/7c2f5386f144c7b06e38c2c38292889e

is there anything like this that has already been worked on? id rather not reinvent the wheel

Marcin Brański avatar
Marcin Brański

Good work! I've done this so many times but didn't automate it because I had few resources to move. I don't know of such a tool, so you should create a repo for it. If in the future I have such a case I might PR with improvements

RB avatar

thanks. there is some funky logic in it but it made some module migrations a lot easier

Marcin Brański avatar
Marcin Brański

I’ve seen that comment

Marcin Brański avatar
Marcin Brański

You make repo, I’ll star it

1
1
Marcin Brański avatar
Marcin Brański

I used this script today. Had to rewrite it a little bit to work with GCP but other than that, pretty good.

RB avatar

it’s a bit difficult to guess what the import statements are and it’s currently using stdout to parse instead of converting to json first so it can definitely be improved. if you folks like, i can make a repo, and we can all contribute to it.

RB avatar

id love to see your fork too @Marcin Brański

2020-02-29

bougyman avatar
bougyman

I’ve done similar, but in awk.
