#terraform (2020-2)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-02-14

drexler avatar
drexler

Hello fellas, I have a question regarding CORS policies on S3 buckets. Is there a way of adding such policies to an existing S3 bucket via Terraform?

Igor Bronovskyi avatar
Igor Bronovskyi
resource "aws_s3_bucket" "bucket" {
  bucket_prefix = "project-name-"

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET"]
    allowed_origins = ["*"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }

  tags = {
    Name = "test"
  }
}
drexler avatar
drexler

won't that destroy the existing bucket in order to recreate it with the CORS policy?

Igor Bronovskyi avatar
Igor Bronovskyi

no

Igor Bronovskyi avatar
Igor Bronovskyi

if you created the bucket previously with terraform

drexler avatar
drexler

cool. let me test it out..

loren avatar
loren

Also no if you import an existing bucket into the config

sweetops avatar
sweetops

I have a list, with one object in it, but I need to perform some functions on it and i’m trying to see if I’ve got this right…

sweetops avatar
sweetops

aliases = [lower(substr("${var.service}-${var.branch}.${var.stage}.${var.domain}", 0, 32))]

sweetops avatar
sweetops

or would I run lower(substr()) on the outside of [] ?

aknysh avatar
aknysh

you don’t have a list it looks like, you are constructing the list. If that’s the case, then the syntax is OK, you get substring from a string and put it into a list

sweetops avatar
sweetops

correct. perfect, thanks!

sweetops avatar
sweetops

@aknysh I just realized that my list totally wouldn’t work because i’d be truncating the end of the dns name. So…

sweetops avatar
sweetops

[lower(substr("${var.service}-${var.branch}", 0, 32))".${var.stage}.${var.domain}"]

sweetops avatar
sweetops

that hurts my head

sweetops avatar
sweetops

would that work?

sweetops avatar
sweetops

I think separating the quotes like that would make a list of two items, not one

sweetops avatar
sweetops

or maybe not, since there’s no comma

aknysh avatar
aknysh

make a local var in `locals`

aknysh avatar
aknysh

then use it to put into the list

aknysh avatar
aknysh

more readable
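For example, the expression above could be split out like this (a sketch; the local names are made up):

```hcl
locals {
  # truncate only the service-branch prefix, then append the untouched suffix
  alias_prefix = lower(substr("${var.service}-${var.branch}", 0, 32))

  aliases = ["${local.alias_prefix}.${var.stage}.${var.domain}"]
}
```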

sweetops avatar
sweetops

yeah, you’re right

sweetops avatar
sweetops

good call

Olivier avatar
Olivier

I am trying to use module: https://github.com/cloudposse/terraform-aws-elasticache-redis and I typed

 apply_immediately          = true

but it does not seem to be part of the resource, so when I changed a parameter, it did not apply immediately

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

aknysh avatar
aknysh

there is a PR to fix that, and @maxim is working on it

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

maxim avatar
maxim

@Olivier fix is on the way, will let you know when ready

Olivier avatar
Olivier

thank you

maxim avatar
maxim

@Olivier here it is: <https://github.com/cloudposse/terraform-aws-elasticache-redis/releases/tag/0.16.0>

Brij S avatar
Brij S

does anyone know how to use terraform-docs to automatically replace only the content in the README which it generates (providers, inputs, outputs)? For example, if there is a title and some description text at the top, I wouldn't want it to replace that part

Erik Osterman avatar
Erik Osterman

I think @antonbabenko has a github pre-commit hook for this.

Erik Osterman avatar
Erik Osterman

Usually this is done with some kind of markers like <!-- terraform-docs begin --> and <!-- terraform-docs end --> and then using sed+regex to replace the content in between
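The marker trick could be sketched like this (marker text, file names, and the stand-in for `terraform-docs` output are all assumptions; awk is used here in place of the sed+regex):

```shell
#!/bin/sh
# A README with hand-written content plus a delimited, generated section:
cat > README.md <<'EOF'
# My Module

Hand-written description that must survive regeneration.

<!-- terraform-docs begin -->
stale docs
<!-- terraform-docs end -->
EOF

# Stand-in for `terraform-docs markdown .` output:
printf 'inputs/outputs table\n' > .tf-docs.tmp

# Splice the fresh docs between the markers, leaving everything else intact:
awk '
  /<!-- terraform-docs begin -->/ { print; system("cat .tf-docs.tmp"); skip=1; next }
  /<!-- terraform-docs end -->/   { skip=0 }
  !skip { print }
' README.md > README.new && mv README.new README.md
```

The title and description survive; only the delimited section is rewritten on each run.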

Erik Osterman avatar
Erik Osterman
antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

2020-02-13

Gui Paiva avatar
Gui Paiva

Hey guys, I got a question about Terraform in AWS and its IAM role policy to create resources. At the moment I have attached the full admin policy to the role that Terraform is using, but I was wondering if there is a simpler way so Terraform can create resources (not only EC2, VPC, buckets, etc.) while not being so open with full admin access.

aknysh avatar
aknysh

Use the special category of AWS managed policies to support common job functions.

aknysh avatar
aknysh

but you pretty much need to be power user or admin

aknysh avatar
aknysh

once you start on the path to take away permissions, you will soon realize that you need most of them to be able to provision AWS

aknysh avatar
aknysh

unless you provision a specific set of resources, then you can create a specific role with those permissions and give it to the user or to terraform aws provider

Gui Paiva avatar
Gui Paiva

but then every time you need to create a new set of resources, you need to remember to update the policy

aknysh avatar
aknysh

that’s why we use Admin permissions

Gui Paiva avatar
Gui Paiva

yeah, I always go for the admin permission

Gui Paiva avatar
Gui Paiva

quite tricky

Gui Paiva avatar
Gui Paiva

those policies from their doc are quite interesting, not just for terraform but for other users too

Gui Paiva avatar
Gui Paiva

I am going to have a second thought about it. This is not a company requirement, I'm just trying to figure out if there are better ways to manage permissions

Erik Osterman avatar
Erik Osterman

My current opinion after having worked with this stuff for over a decade is that IAM is best suited for your services and their capabilities (controlling what they can do or what can access them), but is too low level for restricting who (human) can deploy what (resources). It's hard to know exactly what policies you need as a human before you provision everything. Iterating and requesting permissions is a huge bottleneck. Instead, the current best practice is to stick a VCS+CI/CD (gitops) pipeline in between the humans and the infrastructure. Get the humans out of provisioning stuff directly as much as possible to eliminate the need for fine grained access. Then use something like the Open Policy Agent to define the higher order policies that run in your pipelines, combined with a code review approval process.

Just to be clear, these concepts are all relatively recent developments for IaC, but are responses to all the problems associated with how "least privilege" failed us, from a practical perspective, to achieve organizational efficiency.

Gui Paiva avatar
Gui Paiva

I am already doing VCS+CI/CD, PRs, etc. which works great and also means not many people can actually do any harm as we have a process BUT, as humans can make mistakes, someone, including myself, could mistakenly give access to Jenkins (both ssh or URL) using the wrong permissions, which may allow them to get admin access to all AWS accounts.

It is actually a complicated situation because you don’t want to have a bottleneck by having to always update the IAM policy to allow an extra action but at the same time you want to avoid risks of someone being able to do something they shouldn’t be doing.

PePe avatar

@Erik Osterman do you guys use Open Policy Agent?

PePe avatar

one thing we are struggling with now is the user management part and SSO in AWS: how to manage user/group policies better and more easily through SSO or other means. It is a hot topic in our world right now

PePe avatar

a bit offtopic from this original thread

Gui Paiva avatar
Gui Paiva

I have had a look at it a while ago and it can get really complex… to be honest, my company does not have that many users that need to access AWS so I am not using SSO, but I did have a look at integrating with GSuite and it is not a "next next finish" setup IMHO

PePe avatar

we have about 300 users

PePe avatar

some of them cross group boundaries or have multiple account access etc

PePe avatar

it gets pretty complicated

Gui Paiva avatar
Gui Paiva

I can imagine it can get really complex, not just the SSO part but the Security side of things

PePe avatar

exactly

Gui Paiva avatar
Gui Paiva

I was once at an AWS event and a security team from a company was there and they were talking about it

Gui Paiva avatar
Gui Paiva

similar to what you have said

Gui Paiva avatar
Gui Paiva

and it was so complex that even the AWS SA was lost

Gui Paiva avatar
Gui Paiva

because you have all the security requirements too.. not just users/sign in

Erik Osterman avatar
Erik Osterman

@PePe (Just to be clear, these concepts are all relatively recent developments for IaC) we haven’t had a chance to adopt it yet, but this is what we’re planning on incorporating to our latest pipelines we’re developing for a customer

PePe avatar

it is incredible to me that there isn't a simple solution for this yet, it is still a bit rough

PePe avatar

it's a hard problem, but Active Directory solved it many many years ago

PePe avatar

their policies are incredibly granular, although there are a lot of clicks involved

Gui Paiva avatar
Gui Paiva

I wish there was a simpler way to integrate and manage users like AD, like you said

Gui Paiva avatar
Gui Paiva

better having clicks involved than googling for a solution that we never find

PePe avatar

hahaha lol

PePe avatar

very true

Gui Paiva avatar
Gui Paiva

AD is probably the best service MS has ever done

Gui Paiva avatar
Gui Paiva

user management and group/user policies work so, so well

PePe avatar

agree

Joe Hosteny avatar
Joe Hosteny

@Gui Paiva SAML provider from GSuite to AWS works nicely. The only downsides are that you seemingly can't attach policies to groups, only to users individually. In practice, this is not so big of an issue for us since we are working on deploying GSuite users via ansible anyway. Also, I haven't been able to determine how to add multiple GSuite apps for AWS yet

Erik Osterman avatar
Erik Osterman


it is incredible to me that there isn't a simple solution for this yet, it is still a bit rough

Erik Osterman avatar
Erik Osterman

@PePe can you elaborate?

PePe avatar

Well, if you look at the example of MS AD, they have had this for years: fine-grained policies, groups, identity, authentication and authorization

PePe avatar

what I'm talking about is basically an AWS solution that is easy to use, easy to understand, that solves the use cases that people have, and that has good programmatic API access that is easy to program against

PePe avatar

IAM is far from being easy

PePe avatar

SSO in AWS is ok-ish, but then you have issues where you can't attach policies to groups and such

PePe avatar

there are always quirks

PePe avatar

and there are many SaaS products that try to solve this problem for you

PePe avatar

the fact that there are that many tells you that there is a need for something easier

PePe avatar

that is what I mean

2020-02-12

johncblandii avatar
johncblandii

I didn’t see this posted yet, but TF Cloud is adding run triggers; in short, a way to build CI pipelines.

https://www.hashicorp.com/blog/creating-infrastructure-pipelines-with-terraform-cloud-run-triggers

Creating Infrastructure Pipelines With HashiCorp Terraform Cloud Run Triggers

Run triggers are useful anywhere you’d like to have distinct pieces of infrastructure automatically queue a run when a dependent piece of infrastructure is changed.

Erik Osterman avatar
Erik Osterman

That’s great

Chris Fowles avatar
Chris Fowles

that looks extremely useful

Erik Osterman avatar
Erik Osterman

@Chris Fowles: @johncblandii does a live demo in our office hours today https://cloudposse.wistia.com/medias/g6p0zu4txy

Erik Osterman avatar
Erik Osterman

@johncblandii you mentioned you had another video you recorded specifically demo’ing this functionality

Erik Osterman avatar
Erik Osterman

is that on youtube?

johncblandii avatar
johncblandii

YouTube is processing the 4K right now. Hopefully it’ll be done soon

johncblandii avatar
johncblandii

i’ll post when it is done

Chris Fowles avatar
Chris Fowles

awesome cheers

btai avatar

for those planning on using terraform cli workspaces with TFC (terraform cloud) because of @johncblandii's awesome demo today, there is a tiny edge case caveat to getting it working in TFC. If you're using the terraform.workspace value in your terraform code, that value will always be "default" in TFC, so you won't be able to use it to make logical decisions within your terraform code (I use it for naming conventions, tagging, environment/region specific scenarios). To work around this I've introduced a "workspace" variable (see pic) and you can set a local variable to workspace = "${var.workspace != "" ? var.workspace : terraform.workspace}"

The reason I am naming the variable workspace is so I can make minimal changes and it sounds like there is enough fuss from the community that this might not be an issue in the future.

More info here: https://github.com/hashicorp/terraform/issues/22131
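Spelled out, the workaround reads roughly like this (the variable declaration is an assumption about how it was declared; the ternary is from the message above):

```hcl
variable "workspace" {
  description = "Set explicitly in TFC, where terraform.workspace is always \"default\""
  type        = string
  default     = ""
}

locals {
  # prefer the explicitly-set variable; fall back to the CLI workspace name
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}
```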

Erik Osterman avatar
Erik Osterman

Consider writing instead:

workspace = coalesce(var.workspace, terraform.workspace)

https://www.terraform.io/docs/configuration/functions/coalesce.html

coalesce - Functions - Configuration Language - Terraform by HashiCorp

The coalesce function takes any number of arguments and returns the first one that isn’t null nor empty.

Erik Osterman avatar
Erik Osterman

That's annoying! Why the heck is terraform cloud overloading their own term for "workspace", making it mean one thing in the SaaS and a subtly different thing in the terraform cli?

btai avatar

Tell me about it. They must’ve known it would cause a bunch of confusion

johncblandii avatar
johncblandii

TFC workspace is basically a project and locally you can pull in multiple projects to 1 code-base mapped to workspaces.

I completely forgot about this distinction until @btai brought it up.

2020-02-11

Gowiem avatar
Gowiem

Anyone know of a way to use data.aws_ssm_parameter to pull a number of parameters given a path? I am trying to find a way to avoid supplying all the param names to my application through vars.
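One option that might cover this is the `aws_ssm_parameters_by_path` data source, which fetches every parameter under a path prefix (availability depends on the AWS provider version; the path here is hypothetical):

```hcl
data "aws_ssm_parameters_by_path" "app" {
  path            = "/myapp/prod" # hypothetical path prefix
  with_decryption = true
}

locals {
  # .names and .values are parallel lists; zipmap turns them into a map
  app_params = zipmap(
    data.aws_ssm_parameters_by_path.app.names,
    data.aws_ssm_parameters_by_path.app.values,
  )
}
```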

Erik Osterman avatar
Erik Osterman

if any of your PRs for cloudposse repos are blocked in review, hit up our pal @maxim to get help and speed up the review =)

Adam Crews avatar
Adam Crews

ugh, sorry about that, fixed and code pushed.

maxim avatar
maxim
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Erik Osterman avatar
Erik Osterman

@antonbabenko 11AM GMT is 6AM Toronto time, but that’s not going to stop 3 guys from my shop including myself from seeing your talk https://events.hashicorp.com/hashitalks2020

btai avatar

what kills efficiency the most working with terraform…. aws resource limits

aaratn avatar
aaratn
awslabs/aws-limit-monitor

Customizable Lambda functions to proactively notify you when you are about to hit an AWS service limit. Requires Enterprise or Business level support to access Support API. - awslabs/aws-limit-monitor

Chris Fowles avatar
Chris Fowles

well if that's your worst efficiency blocker I'd say you're doing A-OK

2020-02-10

Pierre-Yves avatar
Pierre-Yves

Hello, I am using terraform remote state and have to move one resource to its parent folder. I am trying to use terraform state mv to avoid recreating the resource.


# download the state file
terraform state pull > local_state.out


# change the state file
terraform state mv -state=local_state.out module.network.module.vpn1.azurerm_subnet.vpn1 module.network.azurerm_subnet.vpn1

Move "module.network.module.vpn1.azurerm_subnet.vpn1" to "module.network.azurerm_subnet.vpn1"
Successfully moved 1 object(s).

but then when I do terraform plan -state=local_state.out, Terraform still wants to destroy the resource I have moved

do you have any hint on how to achieve this move ?

maarten avatar
maarten

can you copy-paste the output of plan here ?

aaratn avatar
aaratn

@Pierre-Yves you will need to upload the state again to remote backend

maarten avatar
maarten

he's explicitly using a local state file, local_state.out, so that's not it.

aaratn avatar
aaratn

well, the question initially says that he is using remote backend

aaratn avatar
aaratn

I could be wrong, I will wait for his confirmation if the backend is local-file

maarten avatar
maarten

that's not relevant; he posted his command line commands and he clearly pulls from remote to a local file, and from that moment on uses the local state file: terraform plan -state=local_state.out

aaratn avatar
aaratn

not sure if he did partial init in that case

Pierre-Yves avatar
Pierre-Yves

yes, I have downloaded the remote file with terraform state pull to mv everything, and once the plan matches my need I want to upload it back with terraform state push, then plan again to be sure and apply

aaratn avatar
aaratn

@Pierre-Yves did you terraform state push already before running terraform plan ?

Pierre-Yves avatar
Pierre-Yves

no I have specified -state=local_state.out

aaratn avatar
aaratn

In order to use local state, you might need to do terraform init afaik

aaratn avatar
aaratn

with local state

aaratn avatar
aaratn

that will consider your local state instead of remote state

aaratn avatar
aaratn
Backends: Configuration - Terraform by HashiCorp

Backends are configured directly in Terraform files in the terraform section.

Pierre-Yves avatar
Pierre-Yves

mhh, exactly, and since I am moving a module it might require it

Pierre-Yves avatar
Pierre-Yves

terraform init -backend-config="path=local_state.out" => The backend configuration argument “path” given on the command line is not expected for the selected backend type.

Pierre-Yves avatar
Pierre-Yves

seems better when explicitly naming the file terraform.tfstate

Pierre-Yves avatar
Pierre-Yves

so it seems terraform doesn't like that I have a backend configured in the main.tf, even when specifying -state=localfile or init with a local terraform.tfstate file

If I want to work locally I had to remove the backend block, and terraform will ask to unconfigure and copy the current state to the local backend


terraform init
Initializing modules...

Initializing the backend...
Terraform has detected you're unconfiguring your previously set "azurerm" backend.
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "azurerm" backend to the
  newly configured "local" backend. No existing state was found in the newly
  configured "local" backend. Do you want to copy this state to the new "local"
  backend? Enter "yes" to copy and "no" to start with an empty state.
Pierre-Yves avatar
Pierre-Yves

as a summary, to move module resources on my laptop I had to:

• unconfigure the remote backend tfstate (by commenting out the backend block)

• run terraform init

• terraform then proposes to copy the tfstate locally

• move the resource, then plan

• re-add the backend block for remote state

• run terraform init and specify to copy back the state

Thanks for your help @aaratn and @maarten
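The steps above can be sketched as a command sequence (the file and resource addresses are reused from earlier in the thread; this is an outline, not a tested script):

```shell
# 1. comment out the backend "azurerm" block in main.tf, then:
terraform init   # offers to copy the remote state to a local terraform.tfstate

# 2. move the resource address in the local state
terraform state mv \
  module.network.module.vpn1.azurerm_subnet.vpn1 \
  module.network.azurerm_subnet.vpn1

# 3. confirm the plan no longer wants to destroy/recreate the resource
terraform plan

# 4. restore the backend block, then migrate the state back
terraform init   # answer "yes" to copy the local state to the remote backend
```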

Pierre-Yves avatar
Pierre-Yves

this was only needed because moving a module's resources requires a terraform init

aaratn avatar
aaratn

and try to do plan

aaratn avatar
aaratn

it should fix the issue

imiltchman avatar
imiltchman

Any suggested readings or words of wisdom for someone looking to get automated testing going for TF? We’re looking at terratest at the moment for the tool.

Erik Osterman avatar
Erik Osterman

Ya that’s your best bet

Erik Osterman avatar
Erik Osterman

Avoid testing things that terraform already covers in its own tests.

Erik Osterman avatar
Erik Osterman

E.g. creating a bucket results in a bucket. It's safe to skip this kind of test

Erik Osterman avatar
Erik Osterman

80/20 rule applied to testing terraform: You get 80% of the benefit and catch 80% of the problems by just running plan/apply/destroy. You have to spend 80% more effort to test the remaining 20%
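Under that rule, the baseline test is little more than this (the example directory and flags are assumptions):

```shell
#!/bin/sh
# 80/20 smoke test: just prove the configuration converges and tears down cleanly
set -e
cd examples/complete   # hypothetical example root for the module under test
terraform init -input=false
terraform apply -auto-approve -input=false
terraform destroy -auto-approve -input=false
```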

loren avatar
loren

i still think this presentation is really great for getting folks started… https://www.infoq.com/presentations/automated-testing-terraform-docker-packer

Automated Testing for Terraform, Docker, Packer, Kubernetes, and More

Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.

Pierre-Yves avatar
Pierre-Yves

I have looked as well at terraform testing and will later experiment with the tests from the terraform vscode extension, which can do lint and "end to end" tests. See the bottom of the page: https://docs.microsoft.com/en-us/azure/terraform/terraform-vscode-extension

Tutorial - Configure the Azure Terraform Visual Studio Code extension

Learn how to install and use the Azure Terraform extension in Visual Studio Code.

imiltchman avatar
imiltchman

Thanks, @Erik Osterman @loren @Pierre-Yves

Pierre-Yves avatar
Pierre-Yves

the infoq video above also mentions "conftest", e.g. for terraform: https://github.com/instrumenta/conftest/tree/master/examples/terraform

instrumenta/conftest

Write tests against structured configuration data using the Open Policy Agent Rego query language - instrumenta/conftest

Cloud Posse avatar
Cloud Posse
05:00:48 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Feb 19, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-02-07

Dhrumil Patel avatar
Dhrumil Patel

Hello guys, I am new to terraform and stuck on a problem creating an Elastic Beanstalk application using terraform. Can you help me here? Here is my code:

Dhrumil Patel avatar
Dhrumil Patel

resource "aws_elastic_beanstalk_application" "default" {
  name        = var.application_name
  description = var.application_description
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${var.application_name}-v1"
  application = aws_elastic_beanstalk_application.default.name
  description = var.application_description
  bucket      = var.bucket_id
  key         = var.object_id
}

resource "aws_elastic_beanstalk_environment" "default" {
  depends_on          = [aws_elastic_beanstalk_application_version.default]
  name                = "${var.application_name}-env"
  application         = aws_elastic_beanstalk_application.default.name
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.5 running Python 3.6"
  version_label       = "${var.application_name}-v1"

  dynamic "setting" {
    for_each = { "ImageId" = var.ami, "InstanceType" = var.instance_type }
    content {
      namespace = "awslaunchconfiguration"
      name      = setting.key
      value     = setting.value
    }
  }
}

grv avatar

error message?

Dhrumil Patel avatar
Dhrumil Patel

here is the error message:

Dhrumil Patel avatar
Dhrumil Patel

Error: Error waiting for Elastic Beanstalk Environment (...) to become ready: 2 errors occurred:
* 2020-02-07 09:25:38.663 +0000 UTC (...) : Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
* 2020-02-07 09:25:38.781 +0000 UTC (..) : LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.

Dhrumil Patel avatar
Dhrumil Patel

I think the Elastic Beanstalk environment can't communicate with the instances.

Dhrumil Patel avatar
Dhrumil Patel

here is the creation log:

Dhrumil Patel avatar
Dhrumil Patel

2020-02-07 22:47:21 UTC+0530 INFO Launched environment: TestApp-007-env. However, there were issues during launch. See event log for details.
2020-02-07 22:47:19 UTC+0530 ERROR LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.
2020-02-07 22:47:19 UTC+0530 ERROR Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
2020-02-07 22:30:32 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 22:30:32 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 22:30:16 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 22:30:16 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 22:30:16 UTC+0530 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-02-07 22:30:16 UTC+0530 INFO Created Auto Scaling group named: ..
2020-02-07 22:29:55 UTC+0530 INFO Adding instance .. to your environment.
2020-02-07 22:29:55 UTC+0530 INFO Added EC2 instance .. to Auto Scaling Group ..
2020-02-07 22:29:12 UTC+0530 INFO Created Auto Scaling launch configuration named: ..
2020-02-07 22:29:12 UTC+0530 INFO Created security group named: ..
2020-02-07 22:29:12 UTC+0530 INFO Created load balancer named: ..
2020-02-07 22:28:56 UTC+0530 INFO Created security group named: …
2020-02-07 22:28:34 UTC+0530 INFO Using … as Amazon S3 storage bucket for environment data.
2020-02-07 22:28:33 UTC+0530 INFO createEnvironment is starting.

aknysh avatar
aknysh
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

aknysh avatar
aknysh
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

aknysh avatar
aknysh
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Dhrumil Patel avatar
Dhrumil Patel

Ok thanks

imiltchman avatar
imiltchman

Sidenote: I would recommend that you take out your AWS account IDs before posting outputs. It's best to keep those secret.

Dhrumil Patel avatar
Dhrumil Patel

Ya I forgot about it thanks for reminding me.

imiltchman avatar
imiltchman

You should be able to edit them out

Dhrumil Patel avatar
Dhrumil Patel

Is there any thing wrong with this code ?

Dhrumil Patel avatar
Dhrumil Patel

I need to create this code as my internship assignment and my mentor told me that I can't use public registry modules, that's why I am asking.

Joe Niland avatar
Joe Niland

Your EC2 instance is not launching correctly and/or not in time. Check for problems in /var/log/eb-activity.log

You also want to increase the Command timeout or disable health checks while you’re investigating.

Dhrumil Patel avatar
Dhrumil Patel

Ok

Dhrumil Patel avatar
Dhrumil Patel

Problem solved. I was providing an AMI to the autoscaling group and that AMI was causing the problem. Instances spawned using that AMI can't communicate with Elastic Beanstalk. When I don't provide an AMI in the Elastic Beanstalk environment, it works perfectly fine. Don't know why this is happening, any suggestion?

Joe Niland avatar
Joe Niland

Would need to see your error logs, however you must specify an AMI. I think your custom AMI may have a launch error.

Dhrumil Patel avatar
Dhrumil Patel

Actually I am not using a custom AMI, I am using one of the Ubuntu AMIs from the AMI store.

2020-02-06

Rich Allen avatar
Rich Allen

Hi, probably a dumb question but I would like to check whether what I'm trying to build/fix is possible. I'm using the following module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn

We have a root AWS account which manages the hosted zone example.com. I am trying to create a static site in a child organization at mysite.example.com. I've gone ahead and created a certificate from Certificate Manager in the child account. The root account has validated the certificate via DNS and I have verified that the child account has the certificate validated.
I have also set a route53 CNAME entry in the root account: mysite.example.com -> ourCFDISTROID.cloudfront.net

I'm currently receiving an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Is what I'm trying to do going to work in AWS? I've hit a wall and am not sure how to proceed.

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

aknysh avatar
aknysh

@Rich Allen please share your module invocation terraform code

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

:--1:1
Rich Allen avatar
Rich Allen
module "examplecom" {
  source               = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=0.20.0"
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  origin_force_destroy = false
  default_root_object  = "index.html"
  acm_certificate_arn  = var.acm_certificate_arn
  parent_zone_id       = var.parent_zone_id // this references a zone id outside of the child organization. The root org controls example.com
  cors_allowed_origins = ["mysite.example.com"]
  cors_allowed_headers = ["GET", "HEAD"]
  cors_allowed_methods = ["GET", "HEAD"]
}
Rich Allen avatar
Rich Allen

for what it is worth, I now do not think this is an SSL issue. If I turn off redirects and navigate to http, I receive an origin access error.

aknysh avatar
aknysh

@Rich Allen did you request the certificate just for the parent domain, or for subdomains as well (*.example.com)?

aknysh avatar
aknysh

one of the possible reasons for ERR_SSL_VERSION_OR_CIPHER_MISMATCH is cert name mismatch

Rich Allen avatar
Rich Allen

just the sub domain, not the bare domain

aknysh avatar
aknysh
How to Fix ERR_SSL_VERSION_OR_CIPHER_MISMATCH (Quick Steps) attachment image

The ERR_SSL_VERSION_OR_CIPHER_MISMATCH error is typically caused by problems with your SSL certificate or web server. Check out how to fix it.

Rich Allen avatar
Rich Allen

I will reprovision the cert using the bare domain + san

aknysh avatar
aknysh
The domain name alias is for a website whose name is different, but the alias was not included in the certificate
aknysh avatar
aknysh

if you are using a CNAME, this ourCFDISTROID.cloudfront.net should be included in the SANs as well

aknysh avatar
aknysh

make sure the CNAME is included in the aliases for the distribution, like here for example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L106

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

aknysh avatar
aknysh

also, did you provision DNS zone delegation in the child account?

aknysh avatar
aknysh

since mysite.example.com is in diff account, you need to have a Route53 zone for it in the child account

aknysh avatar
aknysh

and add NS records in the root DNS zone pointing to the child zone's name servers
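A hypothetical sketch of that delegation (provider aliases, zone names, and variable names are all assumptions):

```hcl
# child account owns the sub-domain zone
resource "aws_route53_zone" "mysite" {
  provider = aws.child
  name     = "mysite.example.com"
}

# root account delegates the sub-domain by publishing NS records
# that point at the child zone's name servers
resource "aws_route53_record" "delegation" {
  provider = aws.root
  zone_id  = var.parent_zone_id # the example.com zone in the root account
  name     = "mysite.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.mysite.name_servers
}
```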

Rich Allen avatar
Rich Allen
I did not, I provisioned mysite.example.com -> cfid.cloudfront.net in the root account
Rich Allen avatar
Rich Allen

it was my understanding that a hosted zone, was unique to an account, so are you saying I should have a hosted zone in both the root and child account?

aknysh avatar
aknysh

yes

aknysh avatar
aknysh

and zone delegation

aknysh avatar
aknysh
What is DNS Delegation?

In an answer to my previous question I noticed these lines: It’s normally this last stage of delegation that is broken with most home user setups. They have gone through the process of buying …

aknysh avatar
aknysh

otherwise DNS resolution will not work

aknysh avatar
aknysh

I mean, you can provision everything (master zone, sub-domain zone) in the root account and it will work

aknysh avatar
aknysh

but if you are using child accounts, you prob want to provision everything related to the sub-account in it

aknysh avatar
aknysh

might not be your case, just throwing out ideas

aknysh avatar
aknysh

so I think you need to check the following:

aknysh avatar
aknysh

if you provision the site/CDN in the child account, you need to have the certificate provisioned in the same child account and assigned to the CloudFront distribution

aknysh avatar
aknysh

do you have two certificates, in root and child accounts?

aknysh avatar
aknysh

then the CNAME must be added to aliases for the distribution

aknysh avatar
aknysh

so here is the thing: if you created the SSL cert only in the root account and created the sub-domain DNS record in the root account, then the CloudFront distribution URL ourCFDISTROID.cloudfront.net must be added to the SANs of the certificate

aknysh avatar
aknysh

the module will not work cross-account, it will not create an alias in the parent zone which is in a diff account https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L254

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

aknysh avatar
aknysh

so you have to set var.parent_zone_id = ""

Rich Allen avatar
Rich Allen

no, I must have misspoken; the SSL cert is only provisioned in the child account. The DNS validation record (ACME) was set on the root account.

Rich Allen avatar
Rich Allen

FYI appreciate the help here, I’m working through a few of these just running a bit behind with your advice haha!

aknysh avatar
aknysh

ok

aknysh avatar
aknysh

anyway, you need to add the distribution URL to the SANs

aknysh avatar
aknysh

and CNAME must be added to aliases for the distribution

Rich Allen avatar
Rich Allen
Okay, so for now I don't have multi-account DNS resolution set up, and I think I would have to authorize and test that change a bit more as it affects my scope here. Knowing that is staying the same right now, it seems like I need to do the following: add a SAN to our certificate for the CF distribution, manually validate the ACME challenge, and then manually create the mysite.example.com CNAME CFDistro.cloudfront.net record. I should ignore the alias key (as that will not work cross-account and I'm manually setting it for now until I can research multi-account DNS resolution).
aknysh avatar
aknysh

yes

aknysh avatar
aknysh

it’s different for multi-account

aknysh avatar
aknysh

btw, you can set the alias to the CNAME since the cert is in the same account. As long as CloudFront sees the cert for sub-domain, it will allow you to add CNAME aliases to the distribution
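
Putting those two requirements together, a minimal sketch (names are illustrative; CloudFront requires the ACM cert to live in us-east-1, and the distribution's usual origin/cache/restriction blocks are omitted here):

```hcl
resource "aws_acm_certificate" "cert" {
  provider          = aws.us_east_1 # CloudFront certs must be in us-east-1
  domain_name       = "mysite.example.com"
  validation_method = "DNS"
}

resource "aws_cloudfront_distribution" "cdn" {
  # the CNAME users resolve must be listed as an alias
  aliases = ["mysite.example.com"]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.cert.arn
    ssl_support_method  = "sni-only"
  }

  # origin, default_cache_behavior, restrictions, etc. omitted
}
```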

johncblandii avatar
johncblandii

Hey folks, I’m starting to push out some videos around different devops/engineering topics. I’d love some feedback and even suggestions/requests for topics.

I’ll add links to the first few in this thread.

imiltchman avatar
imiltchman

Is there any way to work around the following errors in TF:

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.

or

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.

If TF knows the resourceA count, why wouldn’t I be able to then use length(resourceA) on resourceB count…

aknysh avatar
aknysh

you think terraform knows the count because in your head you know how many instances you want. But TF is not as smart as you

aknysh avatar
aknysh

there is not a good way of dealing with that

aknysh avatar
aknysh

in many cases, we ended up adding a new var count_of_xxx and explicitly providing the count to TF
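
For example, a sketch of that workaround (variable and resource names are hypothetical) — the count comes from an input variable, which is always known at plan time:

```hcl
variable "count_of_subnets" {
  type    = number
  default = 3
}

resource "aws_eip" "nat" {
  # known at plan time, so no "count cannot be determined" error
  count = var.count_of_subnets
  vpc   = true
}
```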

aknysh avatar
aknysh

(it’s relatively old, was created for TF 0.11. TF 0.12 is much smarter, but still can’t do it in all cases)

imiltchman avatar
imiltchman

Makes sense, I was grasping at straws here, though I think I knew the answer all along

aknysh avatar
aknysh

so yea, in those cases we were not able to “fix” it, we ended up adding explicit count OR splitting the code into two (or more) folders and using remote state

imiltchman avatar
imiltchman

The annoying part with it is that sometimes it works when you add a new resource to the existing statefile, but then when you run from scratch, you hit this.

imiltchman avatar
imiltchman

Makes sense why that is, but it’d be good if TF at least gave a warning in those cases

aknysh avatar
aknysh

We also used one other way of fixing it. If, for example, the count depends on some resource IDs, the IDs are not known before those are created

aknysh avatar
aknysh

Try to use names for example

aknysh avatar
aknysh

Or any other attributes that you provide

imiltchman avatar
imiltchman

If I understand correctly, I think you are referring to a different issue; the depends_on one.

imiltchman avatar
imiltchman

Very similar though in level of frustration;)

aknysh avatar
aknysh

No, if in the count expression you use resource IDs, those are not known before the resources are created

aknysh avatar
aknysh

But let’s say you provide resources names to terraform

aknysh avatar
aknysh

Those are known before the resources are created

aknysh avatar
aknysh

If you use names in the count expression, it might work

aknysh avatar
aknysh

But not always

aknysh avatar
aknysh

obviously it will work if the names are in an input variable

aknysh avatar
aknysh

what I'm referring to: you can reference ResourceA.name in the count, and it could work in some cases even before the ResourceA resources are created, since terraform can figure it out

imiltchman avatar
imiltchman

Oh I see what you mean

imiltchman avatar
imiltchman

Good tip

aknysh avatar
aknysh

for example:

aknysh avatar
aknysh

# aws_organizations_account.default["prod"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "prod"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }

  # aws_organizations_account.default["staging"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "staging"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }
aknysh avatar
aknysh

all those (known after apply) you can’t use in counts

aknysh avatar
aknysh

name - you can, and TF would figure it out

aknysh avatar
aknysh

e.g. count = length(aws_organizations_account.default.*.name) might work in some cases

aknysh avatar
aknysh

count = length(aws_organizations_account.default.*.id) will never work
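
To make the distinction concrete, a sketch (account names and the dependent resource are illustrative):

```hcl
variable "account_names" {
  type    = list(string)
  default = ["prod", "staging"]
}

resource "aws_organizations_account" "default" {
  for_each = toset(var.account_names)
  name     = each.key
  email    = "aws+${each.key}@example.com"
}

# the for_each keys come from configuration, so they are known
# at plan time and this count can be resolved:
resource "aws_ssm_parameter" "account" {
  count = length(var.account_names)
  name  = "/accounts/${var.account_names[count.index]}"
  type  = "String"
  value = var.account_names[count.index]
}

# ...whereas anything derived from the accounts' id attributes
# would be (known after apply) and fail as a count expression
```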

imiltchman avatar
imiltchman

Brilliant, thanks

Gabe avatar

Hey everyone, does anyone have advice on how to best manage terraform with microservices? Do you use a monorepo for all of the terraform? Put the terraform with the service? Why did you decide that, and how has it worked out?

PePe avatar

we use a repo + remote state for each microservice

PePe avatar

so we can independently change that service config without having to commit to a big repo

PePe avatar

repos are environment agnostic

Gabe avatar

thanks @PePe, how do you handle changes that apply to every microservice?

PePe avatar

pr to the repo, review and once approve terraform apply

PePe avatar

you can use different methods to run terraform

Gabe avatar

does that become cumbersome when you have a lot of micro services? right now we have a monorepo with ~40 microservices… anytime we need to make a change that impacts all of them it is a huge PITA to plan and apply terraform everywhere

Gabe avatar

we use atlantis… but it ends up being 3 (environments) *40 plans and applies

Gabe avatar

trying to see if there is a better way

PePe avatar

well we have 4, so it's not much for us

PePe avatar

I guess if you have one project that calls all the other microservices' TFs as modules you will end up having VERY LONG plan runs

PePe avatar

now I will argue that not every software deployment requires an infrastructure change

PePe avatar

but I do not know your needs

marcinw avatar
marcinw

I think with Terraform Cloud/Enterprise you can point workspaces to track individual folders, so if you have one workspace per microservice, a monorepo could work.

marcinw avatar
marcinw

It will soon be possible with Spacelift, though using a policy-based approach.

marcinw avatar
marcinw

BTW I’d probably rather avoid having a separate project for each microservice, and would try to group them by product area — i.e. responsible org/team/tribe.

2020-02-05

Brij S avatar
Brij S

can outputs not have conditional count like resources do?

on ../outputs.tf line 7, in output "distribution_cross_account_role_arn":
   7:   count       = var.aws_env == "prod" ? 1 : 0

An argument named "count" is not expected here.
Adrian avatar
Adrian

what for? if you use count with “prod” in resource you will have output for prod

Brij S avatar
Brij S

well a resource is only created if aws_env == prod, otherwise not

Brij S avatar
Brij S

so in that output, it only needs to output if the aws_env is prod; otherwise the resource wouldn't exist in the first place

Adrian avatar
Adrian

so you will have output if aws_env ==prod otherwise it will be empty

Brij S avatar
Brij S

exactly

Brij S avatar
Brij S

since that resource wouldn't exist if aws_env != prod

Adrian avatar
Adrian

e. g.

output "slack_channel" {
  value = var.enabled ? var.slack_channel : "UNSET"
}
Adrian avatar
Adrian

put some fancy text instead of “UNSET”, “No output for this env” :P

Adrian avatar
Adrian

or “Valid only for prod”

Brij S avatar
Brij S

so value = var.aws_env == "prod" ? aws_iam_role…… : "UNSET" ?

Adrian avatar
Adrian
locals {
  aws_env = "prod"
}

output "test" {
   value = local.aws_env == "prod" ? "This is prod" : "UNSET"
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

test = This is prod
Adrian avatar
Adrian

so yes
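
Applied to the original question, a sketch (the role name and policy document are hypothetical):

```hcl
resource "aws_iam_role" "distribution_cross_account" {
  count              = var.aws_env == "prod" ? 1 : 0
  name               = "distribution-cross-account"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

output "distribution_cross_account_role_arn" {
  # join over the splat collapses to "" when count is 0,
  # so this is safe whether or not the role exists
  value = var.aws_env == "prod" ? join("", aws_iam_role.distribution_cross_account.*.arn) : "UNSET"
}
```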

Rhawnk avatar
Rhawnk

Does anyone know if terraform 0.12.x allows for_each to loop through regions? I'm attempting to create global DynamoDB tables in AWS and figured I could save keystrokes if I use a for_each and pass the value into provider.

resource "aws_dynamodb_table" "table" {
    
  for_each = toset(var.table_regions)
  
  provider = aws.each.key
Rhawnk avatar
Rhawnk

i get “invalid attribute name” after plan

creature avatar
creature

try to use a dynamic block. this is ripped from the Terraform Up & Running book.

creature avatar
creature
resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = data.aws_subnet_ids.default.ids
  target_group_arns    = [aws_lb_target_group.asg.arn]
  health_check_type    = "ELB"
  min_size             = var.min_size
  max_size             = var.max_size

  tag {
    key                 = "Name"
    value               = var.cluster_name
    propagate_at_launch = true
  }

  dynamic "tag" {
    for_each = var.custom_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}

Brikman, Yevgeniy. Terraform: Up & Running (Kindle Locations 3300-3316). O'Reilly Media. Kindle Edition.
creature avatar
creature

sorry that paste sucks coming from PDF. It’s chapter 6 tips and tricks

Rhawnk avatar
Rhawnk

thanks, ill give it a try

Rhawnk avatar
Rhawnk

actually i don't think that will work, as that will loop through an element (like tags) within the resource; i want it to loop over the entire resource and change the provider (i.e. region)

Rhawnk avatar
Rhawnk

I heard there was work on getting for_each to work for modules, that is likely the limitation im hitting here as well

creature avatar
creature

I’m not an expert, but I suspect you might be right.

Rhawnk avatar
Rhawnk

Ha, nor am I, but thanks for the input

creature avatar
creature

check this issue and see if any of the workarounds might help.

https://github.com/hashicorp/terraform/issues/17519

count and for_each for modules · Issue #17519 · hashicorp/terraform
Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf: locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf: module "x1" { sour…
Rhawnk avatar
Rhawnk
count and for_each for modules · Issue #17519 · hashicorp/terraform
creature avatar
creature

awesome

Rhawnk avatar
Rhawnk

guess I'll be writing in triplicate for now, until the module support comes out

Erik Osterman avatar
Erik Osterman

Also, it depends on what your reasons are for going multi-region, but from an HA perspective, sharing the same state bucket across regions could limit true HA if the failed region happens to be where you store terraform state

creature avatar
creature

do you recommend splitting the state per region typically Erik?

Rhawnk avatar
Rhawnk

i was looking to set up global tables, based on the tf documentation you create all 3 individual tables, then tie them together with aws_dynamodb_global_table resource
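
That pattern, written out by hand with provider aliases (one table per region, then the global-table resource tying them together; names are illustrative, and this style of global table requires streams with NEW_AND_OLD_IMAGES):

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

resource "aws_dynamodb_table" "use1" {
  provider         = aws.use1
  name             = "mytable" # must match in every region
  hash_key         = "id"
  billing_mode     = "PAY_PER_REQUEST"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }
}

# ...an identical table resource repeated for aws.usw2...

resource "aws_dynamodb_global_table" "mytable" {
  provider = aws.use1
  name     = "mytable"

  replica {
    region_name = "us-east-1"
  }

  replica {
    region_name = "us-west-2"
  }
}
```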

Rhawnk avatar
Rhawnk

but i do see your point, i am using workspaces for ecs clusters that would be reading from same table, suppose i would stick to my same process, and just keep the global table state to single bucket

Rhawnk avatar
Rhawnk

this is my first jump into multi-region, so im used to all my statefile eggs in the same basket of us-east-1

marcinw avatar
marcinw


until the module support comes out
This will be a loooooong wait.

marcinw avatar
marcinw

But, you can generate Terraform programmatically in which case you get for-each in modules for free.

marcinw avatar
marcinw

Here’s one possible approach - https://github.com/mjuenema/python-terrascript though anything that can generate JSON will do - https://www.terraform.io/docs/configuration/syntax-json.html#json-file-structure

mjuenema/python-terrascript

Create Terraform files using Python scripts. Contribute to mjuenema/python-terrascript development by creating an account on GitHub.

JSON Configuration Syntax - Configuration Language - Terraform by HashiCorp

In addition to the native syntax that is most commonly used with Terraform, the Terraform language can also be expressed in a JSON-compatible syntax.
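
As a toy sketch of that idea (the region list and module source are made up), a script that renders a main.tf.json with one module block per region — effectively for_each over modules:

```python
import json

# Hypothetical inputs: one module block is generated per region
regions = ["us-east-1", "us-west-2", "eu-west-1"]

config = {
    "module": {
        # underscores in the module label keep it a safe identifier
        f"service_{region.replace('-', '_')}": {
            "source": "./modules/service",
            "region": region,
        }
        for region in regions
    }
}

# Terraform picks up *.tf.json files as configuration
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```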

Rhawnk avatar
Rhawnk

Thanks I’ll give it a look

Erik Osterman avatar
Erik Osterman
[@ljmsc](https://twitter.com/ljmsc) [@pamasaur](https://twitter.com/pamasaur) Soon, we still expect that in another 0.12.x.
Erik Osterman avatar
Erik Osterman


do you recommend splitting the state per region typically Erik?
I do, if you can stomach the extra complexity of managing an additional state bucket. It also depends on how mission critical this stuff is and if your organization has the (human) resources to manage it. Also, realize these things trickle down to things like DNS zones and service discovery as well. If you’re managing DNS entries for resources in a specific region with a different state backend, then the zone should also be managed in that region.

Erik Osterman avatar
Erik Osterman

so from strictly an architectural POV, I think it’s the right way to go. But when considered in light of the management trade offs, then maybe not worth it.

PePe avatar

I will strongly recommend not using a single state bucket for multi-region, and will strongly recommend running terraform once per region by passing the region as a variable instead of going through a loop

PePe avatar

So you will end up with state buckets per region, which is resilient to a full region failure
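
For instance, each region's root module can point at its own bucket (names here are made up; note that backend blocks can't interpolate variables, so per-region values are usually supplied via -backend-config at init time):

```hcl
terraform {
  backend "s3" {
    bucket         = "acme-terraform-state-us-west-2"
    key            = "ecs/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "acme-terraform-locks-us-west-2"
    encrypt        = true
  }
}
```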

creature avatar
creature

thank you for the answer

PePe avatar

Plus you need to keep in mind the naming convention for all resources that are global, like IAM

PePe avatar

So add the region to the name of every resource

PePe avatar

We just went through all this and we are now multi region and we learned a few lessons

creature avatar
creature

just getting started here, so really appreciate all the knowledge to make my journey more pleasant

PePe avatar

It is painful, I can tell you that much

creature avatar
creature

I’ve been in the game over 20 years. Can’t be as painful as a bunch of engineers turning wrenches by hand.

PePe avatar

You will see….

PePe avatar

Soon enough

Richy de la cuadra avatar
Richy de la cuadra

can anybody review my pull request? https://github.com/cloudposse/terraform-aws-rds/pull/54

identifier of the CA certificate for the DB instance was added by fedemzcor · Pull Request #54 · cloudposse/terraform-aws-rds

new variable ca_cert_identifier default value for ca_cert_identifier is rds-ca-2019 ca_cert_identifier setting on rds instances “make” commands were executed to generate readme.md

aknysh avatar
aknysh

thanks @Richy de la cuadra we’ll review it ASAP


Richy de la cuadra avatar
Richy de la cuadra

i did it with lots of love

2020-02-03

Prasanna Pawar avatar
Prasanna Pawar

@here how do you do VPC peering with multiple NAT gateways using terraform?

Igor Bronovskyi avatar
Igor Bronovskyi
resource "aws_nat_gateway" "main_gw_1" {
  allocation_id = aws_eip.nat_1.id
  subnet_id     = aws_subnet.public_1.id
}

resource "aws_nat_gateway" "main_gw_2" {
  allocation_id = aws_eip.nat_2.id
  subnet_id     = aws_subnet.public_2.id
}
Cloud Posse avatar
Cloud Posse
05:00:19 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Feb 12, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-02-02

2020-02-01

pianoriko2 avatar
pianoriko2

Thanks @Erik Osterman this is helpful.
