#terraform (2020-02)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-02-01
Thanks @Erik Osterman (Cloud Posse) this is helpful.
2020-02-02
2020-02-03
@here how do you do VPC peering with multiple NAT gateways using terraform?
resource "aws_internet_gateway" "main_gw_1" {
vpc_id = aws_vpc.main.id
}
resource "aws_internet_gateway" "main_gw_2" {
vpc_id = aws_vpc.main.id
}
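As a side note, a VPC supports only one attached Internet Gateway, so the second aws_internet_gateway above will fail to attach. For multiple NAT gateways (the usual one-per-AZ HA pattern), a minimal sketch, assuming aws_subnet.public and aws_eip.nat resources already exist with matching counts:
resource "aws_nat_gateway" "main" {
  count         = length(aws_subnet.public)
  allocation_id = aws_eip.nat[count.index].id # one EIP per NAT gateway
  subnet_id     = aws_subnet.public[count.index].id
}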
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Feb 12, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-02-05
can outputs not have conditional count like resources do?
on ../outputs.tf line 7, in output "distribution_cross_account_role_arn":
7: count = var.aws_env == "prod" ? 1 : 0
An argument named "count" is not expected here.
what for? if you use count with “prod” in resource you will have output for prod
well a resource is only created if aws_env == prod, otherwise not
so in that output, it only needs to output if the aws_env is prod, otherwise the resource wouldn't exist in the first place
so you will have output if aws_env == prod, otherwise it will be empty
exactly
since that resource wouldn't exist if aws_env != prod
e. g.
output "slack_channel" {
value = var.enabled ? var.slack_channel : "UNSET"
}
put some fancy text instead of “UNSET”, “No output for this env” :P
or “Valid only for prod”
so value = var.aws_env == "prod" ? aws_iam_role…… : "UNSET"
?
locals {
  aws_env = "prod"
}

output "test" {
  value = local.aws_env == "prod" ? "This is prod" : "UNSET"
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
test = This is prod
so yes
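For the original question (an output that references a resource created with count), one pattern is to join the splat so both branches are strings; a minimal sketch, assuming the role resource behind the error above is named aws_iam_role.distribution_cross_account:
output "distribution_cross_account_role_arn" {
  # join("") collapses the splat list, which is empty when count = 0
  value = var.aws_env == "prod" ? join("", aws_iam_role.distribution_cross_account.*.arn) : "UNSET"
}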
Does anyone know if terraform 0.12.x allows for_each to loop through regions? I'm attempting to create global DynamoDB tables in AWS and figured I could save keystrokes if I use a for_each and pass the value into provider.
resource "aws_dynamodb_table" "table" {
for_each = toset(var.table_regions)
provider = aws.each.key
I get “invalid attribute name” after plan
try to use a dynamic
this is ripped from the Terraform Up & Running book.
resource "aws_autoscaling_group" "example" { launch_configuration = aws_launch_configuration . example . name vpc_zone_identifier = data . aws_subnet_ids . default . ids target_group_arns = [ aws_lb_target_group . asg . arn ] health_check_type = "ELB" min_size = var . min_size max_size = var . max_size tag { key = "Name" value = var . cluster_name propagate_at_launch = true } dynamic "tag" { for_each = var . custom_tags content { key = tag . key value = tag . value propagate_at_launch = true } } }
Brikman, Yevgeniy. Terraform: Up & Running (Kindle Locations 3300-3316). O'Reilly Media. Kindle Edition.
sorry that paste sucks coming from PDF. It’s chapter 6 tips and tricks
thanks, I'll give it a try
actually I don't think that will work, as that will loop through an element "like tags" within the resource; I want it to loop the entire resource and change the provider (i.e. region)
I heard there was work on getting for_each to work for modules; that is likely the limitation I'm hitting here as well
I’m not an expert, but I suspect you might be right.
Ha, nor am I, but thanks for the input
check this issue and see if any of the workarounds might help.
Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf module "x1" { sour…
ha, my exact issue: https://github.com/hashicorp/terraform/issues/17519#issuecomment-550003810
Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf module "x1" { sour…
awesome
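Since provider can't be interpolated, one workaround (until module for_each lands) is a provider alias per region and a statically written table per region, tied together with aws_dynamodb_global_table; a rough sketch with illustrative regions and table spec:
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

resource "aws_dynamodb_table" "use1" {
  provider         = aws.use1
  name             = "mytable"
  hash_key         = "id"
  billing_mode     = "PAY_PER_REQUEST"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES" # required for global tables

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_dynamodb_table" "usw2" {
  provider = aws.usw2
  # ... identical table definition for the second region ...
}

resource "aws_dynamodb_global_table" "mytable" {
  depends_on = [aws_dynamodb_table.use1, aws_dynamodb_table.usw2]
  name       = "mytable"

  replica {
    region_name = "us-east-1"
  }

  replica {
    region_name = "us-west-2"
  }
}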
Also it depends on what your reasons are for going multi region, but from an HA perspective sharing the same state bucket across regions could be limiting true HA if the failed region happens to be where you store terraform state
do you recommend splitting the state per region typically Erik?
i was looking to set up global tables; based on the tf documentation you create all 3 individual tables, then tie them together with the aws_dynamodb_global_table resource
but I do see your point. I am using workspaces for ecs clusters that would be reading from the same table, so I suppose I would stick to my same process, and just keep the global table state in a single bucket
this is my first jump into multi-region, so I'm used to all my statefile eggs in the same basket of us-east-1
until the module support comes out
This will be a loooooong wait.
But, you can generate Terraform programmatically in which case you get for-each in modules for free.
Here’s one possible approach - https://github.com/mjuenema/python-terrascript though anything that can generate JSON will do - https://www.terraform.io/docs/configuration/syntax-json.html#json-file-structure
Create Terraform files using Python scripts. Contribute to mjuenema/python-terrascript development by creating an account on GitHub.
In addition to the native syntax that is most commonly used with Terraform, the Terraform language can also be expressed in a JSON-compatible syntax.
Thanks I’ll give it a look
This will be a loooooong wait. https://twitter.com/mitchellh/status/1157288848654647298
do you recommend splitting the state per region typically Erik?
I do, if you can stomach the extra complexity of managing an additional state bucket. It also depends on how mission critical this stuff is and if your organization has the (human) resources to manage it. Also, realize these things trickle down to things like DNS zones and service discovery as well. If you’re managing DNS entries for resources in a specific region with a different state backend, then the zone should also be managed in that region.
so from strictly an architectural POV, I think it’s the right way to go. But when considered in light of the management trade offs, then maybe not worth it.
I will strongly recommend not using a single state bucket for multi-region, and instead running terraform once per region by way of a variable rather than going through a loop
So you will end up with state buckets per region that are resilient to a full region failure
thank you for the answer
Plus you need to keep in mind the naming convention for resources that are global, like IAM
So add the region to the name of every resource
We just went through all this and we are now multi region and we learned a few lessons
just getting started here, so really appreciate all the knowledge to make my journey more pleasant
It is painful, I can tell you that much
I’ve been in the game over 20 years. Can’t be as painful as a bunch of engineers turning wrenches by hand.
Soon enough
can anybody review my pull request? https://github.com/cloudposse/terraform-aws-rds/pull/54
new variable ca_cert_identifier default value for ca_cert_identifier is rds-ca-2019 ca_cert_identifier setting on rds instances “make” commands were executed to generate readme.md
thanks @Richy de la cuadra we’ll review it ASAP
new variable ca_cert_identifier default value for ca_cert_identifier is rds-ca-2019 ca_cert_identifier setting on rds instances “make” commands were executed to generate readme.md
I did it with lots of love
2020-02-06
Hi, probably a dumb question but I would like to check what I’m trying to build/fix is possible. I’m using the following module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn
We have a root aws account which manages the hosted zone example.com. I am trying to create a static site in a child organization at mysite.example.com. I've gone ahead and created a certificate from certificate manager in the child account. The root account has validated the certificate via DNS and I have verified that the child account has the certificate validated.
I have also set a route53 CNAME entry in the root account mysite.example.com -> ourCFDISTROID.cloudfront.net
I'm currently receiving an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Is what I'm trying to do going to work in aws? I've hit a wall and am not sure how to proceed.
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
@Rich Allen please share your module invocation terraform code
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
module "examplecom" {
source = "git::<https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=0.20.0>"
namespace = var.namespace
stage = var.stage
name = var.name
origin_force_destroy = false
default_root_object = "index.html"
acm_certificate_arn = var.acm_certificate_arn
parent_zone_id = var.parent_zone_id // this references a zone id outside of the child organization. The root org controls example.com
cors_allowed_origins = ["mysite.example.com"]
cors_allowed_headers = ["GET", "HEAD"]
cors_allowed_methods = ["GET", "HEAD"]
}
for what it is worth, I now do not think this is an ssl issue. If you turn off redirects and navigate to http, I receive an origin access error.
@Rich Allen did you request the certificate just for the parent domain, or for subdomains as well (*.example.com)?
one of the possible reasons for ERR_SSL_VERSION_OR_CIPHER_MISMATCH
is cert name mismatch
just the sub domain, not the bare domain
The ERR_SSL_VERSION_OR_CIPHER_MISMATCH error is typically caused by problems with your SSL certificate or web server. Check out how to fix it.
I will reprovision the cert using the bare domain + san
The domain name alias is for a website whose name is different, but the alias was not included in the certificate
if you're using CNAME, this ourCFDISTROID.cloudfront.net should be included in the SANs as well
make sure the CNAME is included in the aliases for the distribution, like here for example https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L106
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
also, did you provision DNS zone delegation in the child account?
since mysite.example.com is in diff account, you need to have a Route53 zone for it in the child account
and add NS records in the root DNS zone pointing to the child zone's name servers
I did not, I provisioned mysite.example.com -> cfid.cloudfront.net in the root account
it was my understanding that a hosted zone was unique to an account, so are you saying I should have a hosted zone in both the root and child account?
yes
and zone delegation
In an answer to my previous question I noticed these lines: It’s normally this last stage of delegation that is broken with most home user setups. They have gone through the process of buying …
otherwise DNS resolution will not work
I mean, you can provision everything (master zone, sub-domain zone) in the root account and it will work
but if you are using child accounts, you prob want to provision everything related to the sub-account in it
might not be your case, just throwing out ideas
so I think you need to check the following:
if you provision the site/CDN in the child account, you need to have the certificate provisioned in the same child account and assigned to the CloudFront distribution
do you have two certificates, in root and child accounts?
then the CNAME must be added to aliases for the distribution
so here is the thing: if you created the SSL cert only in the root account and created the sub-domain DNS record in the root account, then the CloudFront distribution URL ourCFDISTROID.cloudfront.net must be added to the SANs of the certificate
the module will not work cross-account, it will not create alias in the parent zone which is in diff account https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L254
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
so you have to set var.parent_zone_id = ""
no I must have misspoken, the ssl cert is only provisioned in the child account. The dns validation record (ACME) was set on the root account.
FYI appreciate the help here, I’m working through a few of these just running a bit behind with your advice haha!
ok
anyway, you need to add the distribution URL to the SANs
and CNAME must be added to aliases for the distribution
Okay so for now, I don't have multi account dns resolution set up, and I think I would have to authorize and test that change a bit more as it affects my scope here. Knowing that is staying the same right now, it seems like I need to do the following: add a SAN entry to our certificate for the cf distribution. I must manually validate the ACME challenge, and then I must manually create the mysite.example.com CNAME CFDistro.cloudfront.net record. I should ignore the alias key (as that will not work cross account and I'm manually setting it for now until I can research multi-account dns resolution).
yes
it’s different for multi-account
btw, you can set the alias to the CNAME since the cert is in the same account. As long as CloudFront sees the cert for sub-domain, it will allow you to add CNAME aliases to the distribution
Hey folks, I’m starting to push out some videos around different devops/engineering topics. I’d love some feedback and even suggestions/requests for topics.
I’ll add links to the first few in this thread.
Is there any way to work around the following errors in TF:
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
or
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
If TF knows the resourceA count, why wouldn’t I be able to then use length(resourceA) on resourceB count…
you think terraform knows the count because in your head you know how many instances you want. But TF is not as smart as you
there is not a good way of dealing with that
in many cases, we ended up adding a new var count_of_xxx
and explicitly providing the count to TF
also we have some docs on that https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/
(it’s relatively old, was created for TF 0.11. TF 0.12 is much smarter, but still can’t do it in all cases)
Makes sense, I was grasping at straws here, though I think I knew the answer all along
so yea, in those cases we were not able to “fix” it, we ended up adding explicit count OR splitting the code into two (or more) folders and using remote state
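A minimal sketch of the explicit-count workaround mentioned above (variable and resource names are illustrative):
variable "private_subnet_count" {
  description = "Passed explicitly because length(aws_subnet.private.*.id) is only known after apply"
  default     = 3
}

resource "aws_route_table_association" "private" {
  count          = var.private_subnet_count
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}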
The annoying part with it, is sometimes it works when you add a new resource to the existing statefile, but then when you run from scratch, you hit this.
Makes sense why that is, but it’d be good if TF at least gave a warning in those cases
We also used one other way of fixing it. If, for example, the count depends on some resource IDs, the IDs are not known before those are created
Try to use names for example
Or any other attributes that you provide
If I understand correctly, I think you are referring to a different issue; the depends_on one.
Very similar though in level of frustration;)
No, if in the count expression you use resource IDs, those are not known before the resources are created
But let’s say you provide resources names to terraform
Those are known before the resources are created
If you use names in the count expression, it might work
But not always
obviously it will work if the names are in an input variable
what I’m referring to, you can reference ResourceA.name
in the count, and it could work in some cases even before the ResourceA are created since terraform could figure it out
Oh I see what you mean
Good tip
for example:
  # aws_organizations_account.default["prod"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "prod"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }

  # aws_organizations_account.default["staging"] will be created
  + resource "aws_organizations_account" "default" {
      + arn                        = (known after apply)
      + email                      = "xxxxxxxxx"
      + iam_user_access_to_billing = "DENY"
      + id                         = (known after apply)
      + joined_method              = (known after apply)
      + joined_timestamp           = (known after apply)
      + name                       = "staging"
      + parent_id                  = (known after apply)
      + status                     = (known after apply)
    }
all those (known after apply) you can't use in counts
name - you can, and TF would figure it out
e.g. count = length(aws_organizations_account.default.*.name) might work in some cases
count = length(aws_organizations_account.default.*.id) will never work
Brilliant, thanks
Hey everyone, does anyone have advice on how to best manage terraform with microservices? Do you use a monorepo for all of the terraform? Put the terraform with the service? Why did you decide that and how has it worked out?
we use a repo + remote state for each microservice
so we can independently change that service config without having to commit to a big repo
repos are environment agnostic
thanks @jose.amengual, how do you handle changes that apply to every microservice?
pr to the repo, review and once approve terraform apply
you can use different methods to run terraform
does that become cumbersome when you have a lot of micro services? right now we have a monorepo with ~40 microservices… anytime we need to make a change that impacts all of them it is a huge PITA to plan and apply terraform everywhere
we use atlantis… but it ends up being 3 (environments) * 40 plans and applies
trying to see if there is a better way
well we have 4, so it's not much for us
I guess if you have one project that calls all the other microservices' TFs as modules you will end up having VERY LONG plan runs
now I will argue that you won't have to change infrastructure for every software deployment
but I do not know your needs
I think with Terraform Cloud/Enterprise you can point workspaces to track individual folders, so if you have one workspace per microservice, a monorepo could work.
It will soon be possible with Spacelift, though using a policy-based approach.
BTW I'd probably rather avoid having a separate project for each microservice, and would try to group them by product area - i.e. responsible org/team/tribe.
2020-02-07
Hello guys, I am new to terraform and stuck on a problem creating an elastic beanstalk application using terraform. Can you help me here? Here is my code:
resource "aws_elastic_beanstalk_application" "default" {
  name        = var.application_name
  description = var.application_description
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${var.application_name}-v1"
  application = aws_elastic_beanstalk_application.default.name
  description = var.application_description
  bucket      = var.bucket_id
  key         = var.object_id
}

resource "aws_elastic_beanstalk_environment" "default" {
  depends_on          = [aws_elastic_beanstalk_application_version.default]
  name                = "${var.application_name}-env"
  application         = aws_elastic_beanstalk_application.default.name
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.5 running Python 3.6"
  version_label       = "${var.application_name}-v1"

  dynamic "setting" {
    for_each = { "ImageId" = var.ami, "InstanceType" = var.instance_type }

    content {
      namespace = "aws:autoscaling:launchconfiguration" # EB namespace for ImageId / InstanceType
      name      = setting.key
      value     = setting.value
    }
  }
}
error message?
here is the error message:
Error: Error waiting for Elastic Beanstalk Environment (...) to become ready: 2 errors occurred:
* 2020-02-07 09:25:38.663 +0000 UTC (...) : Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
* 2020-02-07 09:25:38.781 +0000 UTC (..) : LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.
I think elastic Beanstalk environment can’t communicate with instances.
here is the creation log:
2020-02-07 2221 UTC+0530 INFO Launched environment: TestApp-007-env. However, there were issues during launch. See event log for details.
2020-02-07 2219 UTC+0530 ERROR LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.
2020-02-07 2219 UTC+0530 ERROR Stack named '..' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
2020-02-07 2232 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 2232 UTC+0530 INFO Created CloudWatch alarm named: ..
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group policy named: ..
2020-02-07 2216 UTC+0530 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-02-07 2216 UTC+0530 INFO Created Auto Scaling group named: ..
2020-02-07 2255 UTC+0530 INFO Adding instance .. to your environment.
2020-02-07 2255 UTC+0530 INFO Added EC2 instance .. to Auto Scaling Group ..
2020-02-07 2212 UTC+0530 INFO Created Auto Scaling launch configuration named: ..
2020-02-07 2212 UTC+0530 INFO Created security group named: ..
2020-02-07 2212 UTC+0530 INFO Created load balancer named: ..
2020-02-07 2256 UTC+0530 INFO Created security group named: …
2020-02-07 2234 UTC+0530 INFO Using … as Amazon S3 storage bucket for environment data.
2020-02-07 2233 UTC+0530 INFO createEnvironment is starting.
@Dhrumil Patel take a look at this module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
complete working example https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Terratest for the example (it provisions it on real AWS account) https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/test/src/examples_complete_test.go
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Ok thanks
Sidenote, I would recommend that you take out your AWS account IDs before posting outputs. It's best to keep those secret.
Ya I forgot about it thanks for reminding me.
You should be able to edit them out
Is there any thing wrong with this code ?
I need to create the code as my internship assignment and my mentor told me that I can't use public registry modules, that's why I am asking
Your EC2 instance is not launching correctly and/or not in time. Check for problems in /var/log/eb-activity.log
You also want to increase the Command timeout or disable health checks while you’re investigating.
Ok
Problem solved. I was providing an AMI to the autoscaling group and that AMI was causing the problem. Instances spawned using that AMI can't communicate with elastic beanstalk. When I didn't provide an AMI in the elastic beanstalk environment it works perfectly fine. Don't know why this is happening, any suggestions?
Would need to see your error logs, however you must specify an AMI. I think your custom AMI may have a launch error.
Actually I am not using a custom AMI, I am using one of the Ubuntu AMIs from the AMI store.
2020-02-10
Hello,
I am using terraform remote state and have to move one resource to its parent folder.
I am trying to use terraform state mv
to avoid recreation the resource.
# download the state file
terraform state pull > local_state.out
# change the state file
terraform state mv -state=local_state.out module.network.module.vpn1.azurerm_subnet.vpn1 module.network.azurerm_subnet.vpn1
Move "module.network.module.vpn1.azurerm_subnet.vpn1" to "module.network.azurerm_subnet.vpn1"
Successfully moved 1 object(s).
but then when I do the terraform plan -state=local_state.out
Terraform still wants to delete the resource I have moved
do you have any hint on how to achieve this move ?
can you copy-paste the output of plan here ?
@Pierre-Yves you will need to upload the state again to remote backend
he's explicitly using a local state file local_state.out, so that's not it.
well, the question initially says that he is using remote backend
I could be wrong, I will wait for his confirmation if the backend is local-file
that’s not relevant, he posted his command line commands and he clearly pulls from remote to local file, and from that moment on uses the local state file
terraform plan -state=local_state.out
not sure if he did partial init in that case
yes, I have downloaded the remote state with terraform state pull to mv everything, and once the plan matches my need I want to upload it back with terraform state push, then plan again to be sure and apply
@Pierre-Yves did you terraform state push already before running terraform plan?
no I have specified -state=local_state.out
In order to use local state, you might need to do terraform init afaik
with local state
that will consider your local state instead of remote state
Backends are configured directly in Terraform files in the terraform section.
terraform init -backend-config="path=local_state.out"
=> The backend configuration argument “path” given on the command line is not
expected for the selected backend type.
seems better with explicitly having the file named terraform.tfstate
so it seems terraform doesn't like that I have a backend configured in the main.tf, even when specifying -state=localfile or init with a local terraform.tfstate file
If I want to work locally I have to remove the backend block, and terraform will ask to unconfigure and copy the current state to the local backend
terraform init
Initializing modules...
Initializing the backend...
Terraform has detected you're unconfiguring your previously set "azurerm" backend.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "azurerm" backend to the
newly configured "local" backend. No existing state was found in the newly
configured "local" backend. Do you want to copy this state to the new "local"
backend? Enter "yes" to copy and "no" to start with an empty state.
as a summary, to move module resources (see the command sketch after this list)
on my laptop I have:
• unconfigured the remote backend tfstate (by commenting out the backend block)
• run terraform init
• terraform then proposes to copy the tfstate locally
• moved the resource, tried, and planned
• re-added the backend block for remote state
• run terraform init and specified to copy back the state
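Condensed into commands, a rough sketch of that flow (module addresses taken from the messages above):
# comment out the backend block in main.tf, then:
terraform init        # offers to copy the remote state to a local terraform.tfstate
terraform state mv module.network.module.vpn1.azurerm_subnet.vpn1 module.network.azurerm_subnet.vpn1
terraform plan        # verify the move: the subnet should show no destroy/create
# re-add the backend block, then:
terraform init        # offers to copy the local state back to the remote backend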
thanks for your help @aaratn and @maarten
this was only needed because moving a module's resources requires a terraform init
and try to do plan
it should fix the issue
Any suggested readings or words of wisdom for someone looking to get automated testing going for TF? We’re looking at terratest at the moment for the tool.
Ya that’s your best bet
Avoid testing things that terraform already covers in its own tests.
E.g. creating a bucket results in a bucket. It's safe to skip this kind of test
80/20 rule applied to testing terraform: You get 80% of the benefit and catch 80% of the problems by just running plan/apply/destroy. You have to spend 80% more effort to test the remaining 20%
i still think this presentation is really great for getting folks started… https://www.infoq.com/presentations/automated-testing-terraform-docker-packer
Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.
I have looked as well at doing terraform tests and will later experiment with the tests from the terraform vscode extension, which can do lint and "end to end" tests. See the bottom of the page: https://docs.microsoft.com/en-us/azure/terraform/terraform-vscode-extension
Learn how to install and use the Azure Terraform extension in Visual Studio Code.
Thanks, @Erik Osterman (Cloud Posse) @loren @Pierre-Yves
the infoq video above also mentions "conftest" for gke: https://github.com/instrumenta/conftest/tree/master/examples/terraform
Write tests against structured configuration data using the Open Policy Agent Rego query language - instrumenta/conftest
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Feb 19, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-02-11
Anyone know of a way to use data.aws_ssm_parameter to pull a number of parameters given a path? I am trying to find a way to avoid supplying all the param names to my application through vars.
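Newer versions of the AWS provider add a data source for exactly this; a sketch, assuming your parameters share a common path (verify your provider version supports aws_ssm_parameters_by_path):
data "aws_ssm_parameters_by_path" "app" {
  path            = "/myapp/production" # illustrative path
  with_decryption = true
}

locals {
  # names and values are parallel lists, so zipmap turns them into a map
  app_params = zipmap(data.aws_ssm_parameters_by_path.app.names, data.aws_ssm_parameters_by_path.app.values)
}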
if any of your PRs for cloudposse repos are blocked in review, hit up our pal @Maxim Mironenko (Cloud Posse) to get help and speed up the review =)
ugh, sorry about that, fixed and code pushed.
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
@antonbabenko 11AM GMT is 6AM Toronto time, but that’s not going to stop 3 guys from my shop including myself from seeing your talk https://events.hashicorp.com/hashitalks2020
what kills efficiency the most working with terraform…. aws resource limits
This might come handy
Customizable Lambda functions to proactively notify you when you are about to hit an AWS service limit. Requires Enterprise or Business level support to access Support API. - awslabs/aws-limit-monitor
well if that’s your worst efficiency blocker i’d say you’re doing aye ok
2020-02-12
I didn’t see this posted yet, but TF Cloud is adding run triggers; in short, a way to build CI pipelines.
https://www.hashicorp.com/blog/creating-infrastructure-pipelines-with-terraform-cloud-run-triggers
Run triggers are useful anywhere you’d like to have distinct pieces of infrastructure automatically queue a run when a dependent piece of infrastructure is changed.
That’s great
that looks extremely useful
@Chris Fowles: @johncblandii does a live demo in our office hours today https://cloudposse.wistia.com/medias/g6p0zu4txy
@johncblandii you mentioned you had another video you recorded specifically demo’ing this functionality
is that on youtube?
YouTube is processing the 4K right now. Hopefully it’ll be done soon
i’ll post when it is done
for those planning on using terraform cli workspaces with TFC (terraform cloud) because of @johncblandii's awesome demo today, there is a tiny edge case caveat to getting it working in TFC. If you're using the terraform.workspace value in your terraform code, that value will always be default in TFC, so you won't be able to use it to make logical decisions within your terraform code (I use it for naming conventions, tagging, environment/region specific scenarios). To work around this I've introduced a "workspace" variable (see pic) and you can set a local variable to workspace = "${var.workspace != "" ? var.workspace : terraform.workspace}"
The reason I am naming the variable workspace is so I can make minimal changes, and it sounds like there is enough fuss from the community that this might not be an issue in the future.
More info here: https://github.com/hashicorp/terraform/issues/22131
Consider writing instead:
workspace = coalesce(var.workspace, terraform.workspace)
https://www.terraform.io/docs/configuration/functions/coalesce.html
The coalesce function takes any number of arguments and returns the first one that isn’t null nor empty.
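Putting the workaround together, a minimal sketch:
variable "workspace" {
  description = "Set in TFC; left empty locally so terraform.workspace is used instead"
  default     = ""
}

locals {
  workspace = coalesce(var.workspace, terraform.workspace)
}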
That's annoying! Why the heck is terraform cloud overloading their own term for "workspace", making it mean one thing in the SaaS and a subtly different thing in the terraform cli?
Tell me about it. They must’ve known it would cause a bunch of confusion
A TFC workspace is basically a project, and locally you can pull in multiple projects to 1 code-base mapped to workspaces.
I completely forgot about this distinction until @btai brought it up.
2020-02-13
Hey guys, I got a question about Terraform in AWS and its IAM role policy to create resources. At the moment I have attached the full admin policy to the role that terraform is using, but I was wondering if there is a simpler way so terraform can create resources (not only ec2, vpc, buckets, etc) and at the same time not be so open with full admin access.
you can use https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_developer-power-user
Use the special category of AWS managed policies to support common job functions.
but you pretty much need to be power user or admin
once you start on the path to take away permissions, you will soon realize that you need most of them to be able to provision AWS
unless you provision a specific set of resources, then you can create a specific role with those permissions and give it to the user or to terraform aws provider
but then every time you need to create a new set of resources, you need to remember to update the policy
that’s why we use Admin permissions
yeah, I always go for the admin permission
quite tricky
those policies from their doc are quite interesting, not just for terraform but for other users too
I am going to have a second thought about it. This is not a company requirement but just trying figure out if there are better ways to manage permissions
My current opinion after having worked with this stuff for over a decade is that IAM is best suited for your services and their capabilities (controlling what they can do or what can access them), but is too low level for restricting who (human) can deploy what (resources). It's hard to know exactly what policies you need as a human before you provision everything. Iterating and requesting permissions is a huge bottleneck. Instead the current best practice is to stick a VCS+CI/CD (gitops) pipeline in between the humans and the infrastructure. Get the humans out of provisioning stuff directly as much as possible to eliminate the need for fine grained access. Then use something like the Open Policy Agent to define the higher order policies that run in your pipelines, combined with a code review approval process.
Just to be clear, these concepts are all relatively recent developments for IaC, but are responses to all the problems associated with how "least privilege" has failed us, from a practical perspective, to achieve organizational efficiency.
I am already doing VCS+CI/CD, PRs, etc which works great and also means not many people can actually do any harm as we have a process. BUT, as humans make mistakes, someone, including myself, could mistakenly give access to Jenkins (both ssh or URL) using the wrong permissions, which may allow them to get admin access to all AWS accounts.
It is actually a complicated situation because you don’t want to have a bottleneck by having to always update the IAM policy to allow an extra action but at the same time you want to avoid risks of someone being able to do something they shouldn’t be doing.
@Erik Osterman (Cloud Posse) do yo guys use Open Policy Agent?
one thing we are struggling with now is the user management part and SSO in AWS: how to better and more easily manage user/group policies through SSO or other means. It is a hot topic in our world right now
a bit offtopic from this original thread
I had a look at it a while ago and it can get really complex… to be honest, my company does not have that many users that need access to AWS so I am not using SSO, but I did have a look at integrating with GSuite and it is not a "next next finish" setup IMHO
we have about 300 users
some of them cross group boundaries or have multiple account access etc
it gets pretty complicated
I can imagine it can get really complex, not just the SSO part but the Security side of things
exactly
I was once at an AWS event and a security team from a company was there and they were talking about it
similar to what you have said
and it was so complex that even the AWS SA was lost
because you have all the security requirements too.. not just users/sign in
@jose.amengual (Just to be clear, these concepts are all relatively recent developments for IaC) we haven’t had a chance to adopt it yet, but this is what we’re planning on incorporating to our latest pipelines we’re developing for a customer
it is incredible to me that there isn't a simple solution for this yet, it is still a bit rough
it's a hard problem, but Active Directory solved it many many years ago
their policies are incredibly granular, although there is a lot of clicking involved
I wish there was a simpler way to integrate and manage users like AD, like you said
better having clicks involved than googling for a solution that we never found
hahaha lol
very true
AD is probably the best service MS has ever done
user management and group/user policies work so so well
agree
@Gui Paiva SAML provider from GSuite to AWS works nicely. The only downsides are that you seemingly can’t attach policies to groups, only users individually. In practice, this is not so big of an issue for us since we are working on deploying GSuite users via ansible anyway. Also, I haven’t been able to determine yet how to add multiple GSuite apps for AWS yet
it is incredible to be that there isn’t a simple solution for this yet, it is still a bit rough
@jose.amengual can you elaborate?
Well if you look at the example of MS AD, they have had this for years: fine-grained policies, group policies, identity, authentication and authorization
what I'm talking about is basically an AWS solution that is easy to use, easy to understand, that solves the use cases people have, and that has good programmatic API access that is easy to program against
IAM is far from easy
SSO in AWS is ok-ish but then you have issues where you can't attach policies to groups and such
there are always quirks
and there are many SaaS products that try to solve this problem for you
the fact that there are that many tells you that there is a need for something easier
that is what I mean
2020-02-14
Hello fellas, I have a question regarding CORS policies on S3 buckets. Is there a way of adding such policies to an existing S3 bucket via Terraform?
resource "aws_s3_bucket" "bucket" {
bucket_prefix = "project-name-"
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["GET"]
allowed_origins = ["*"]
expose_headers = ["ETag"]
max_age_seconds = 3000
}
tags = {
Name = "test"
}
}
won't that destroy the existing bucket in order to recreate it with the cors policy?
no
if you created the bucket previously with terraform
cool. let me test it out..
Also no if you import an existing bucket into the config
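For reference, importing an existing bucket uses the bucket name as the ID (the name here is illustrative):
terraform import aws_s3_bucket.bucket project-name-example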
@Igor Bronovskyi I spoke too soon. This toggles the application of the CORS policy to the bucket on every other terraform apply. Just discovered it after releasing the code
so it removes it and re-adds it…
that would be a bug, either in the tf aws provider, or in your config…
I don't think I explained myself well. The problem was that the bucket was created as part of a collection of resources. Eventually I ended up issuing a PR to create a new resource to handle this issue. https://github.com/terraform-providers/terraform-provider-aws/pull/12141
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
I have a list with one object in it, but I need to perform some functions on it and I'm trying to see if I've got this right…
aliases = [lower(substr("${var.service}-${var.branch}.${var.stage}.${var.domain}", 0, 32))]
or would I run lower(substr()) on the outside of [] ?
you don’t have a list it looks like, you are constructing the list. If that’s the case, then the syntax is OK, you get substring from a string and put it into a list
correct. perfect, thanks!
@Andriy Knysh (Cloud Posse) I just realized that my list totally wouldn’t work because i’d be truncating the end of the dns name. So…
[lower(substr("${var.service}-${var.branch}", 0, 32))".${var.stage}.${var.domain}"]
that hurts my head
would that work?
I think separating the quotes like that would make a list of two items, not one
or maybe not, since there’s no comma
make a local var in locals
then use it to put into the list
more readable
yeah, you’re right
good call
I am trying to use module: https://github.com/cloudposse/terraform-aws-elasticache-redis and I typed
apply_immediately = true
but it does not seem to be part of the resource, so when I changed the parameter, it did not apply it immediately
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
there is a PR to fix that, and @Maxim Mironenko (Cloud Posse) is working on it
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
@Olivier fix is on the way, will let you know when ready
thank you
@Olivier here it is: <https://github.com/cloudposse/terraform-aws-elasticache-redis/releases/tag/0.16.0>
thanks
does anyone know how to use terraform-docs to automatically replace only the content in the readme which it generates (providers, inputs, outputs)? For example, if there is a title and some description text at the top, I wouldn't want it to replace that part
I think @antonbabenko has a github pre-commit hook for this.
Usually this is done with some kind of markers like <!-- terraform-docs begin --> and <!-- terraform-docs end -->, and then using a sed + regex to replace the content in between
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
2020-02-17
Hello, I'm currently stuck on an ALB and EC2 problem: currently I have a front-end ALB and EC2s in a Target Group that are created via an ASG + LC (all managed with terraform). Currently when I launch a deployment with terraform it causes downtime… because the old EC2s are already in drain mode before the new EC2s are marked "Healthy" and can therefore receive traffic.
I looked further and the new EC2s are in "initial" mode (Target registration is in progress) for a few seconds until they are considered healthy… and since at the same time the old instances are in drain mode, I get timeouts (503) if I make calls to the ALB at this time.
Is there a way to make the old EC2s go into "drain" mode ONLY when the new EC2s are "Healthy" and not "initial"? This would allow the old EC2s to handle the traffic while the new EC2s are OK for the TargetGroup.
Isn’t this what you’re looking for - https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#wait_for_elb_capacity ?
Provides an AutoScaling Group resource.
I’m also thinking you could do this with lifecycle hooks - https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html - though you will need an external entity (eg. a Lambda) to control the action that’s taken.
Learn about lifecycle hooks for Amazon EC2 Auto Scaling.
One other thing I’d suggest looking at is https://aws.amazon.com/codedeploy/ - haven’t used it personally but it’s supposedly designed to handle deployments just like yours.
AWS CodeDeploy is service that fully automates code deployments for a fast, reliable software deployment process.
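Another Terraform-native pattern for this is create-before-destroy on the ASG keyed off the launch configuration name, so a replacement ASG must pass ELB health checks before the old one is destroyed; a sketch, assuming existing LC/target group resources named app:
resource "aws_autoscaling_group" "app" {
  # interpolating the LC name forces a brand-new ASG whenever the LC changes
  name                 = aws_launch_configuration.app.name
  launch_configuration = aws_launch_configuration.app.name
  vpc_zone_identifier  = var.subnet_ids
  target_group_arns    = [aws_lb_target_group.app.arn]
  health_check_type    = "ELB"
  min_size             = var.min_size
  max_size             = var.max_size

  # apply only succeeds once this many instances are healthy behind the LB,
  # and only then is the old ASG drained and destroyed
  min_elb_capacity = var.min_size

  lifecycle {
    create_before_destroy = true
  }
}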
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Feb 26, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-02-18
Hey all, the docs for api gateway domain name (https://www.terraform.io/docs/providers/aws/r/api_gateway_domain_name.html) suggest that the var for a regional domain is regional_certificate_arn and for edge it's certificate_arn. I'm in the middle of creating a custom domain module for internal use. Is there a way to conditionally select regional_certificate_arn vs certificate_arn?
Registers a custom domain name for use with AWS API Gateway.
If you're writing a module you could do some conditional stuff around the endpoint configuration or lack thereof
infact hmm
no that won’t work I didn’t read the docs properly
I guess you could have two resources in a module using count, and have one that uses regional_certificate_arn and the other using just plain certificate_arn
you can pass null to unset a value
so just pass null to the one you don’t want to use and use a couple of locals to work that out
ah yeah good point
oh really? so in the following
resource "aws_api_gateway_domain_name" "domain" {
certificate_arn = var.certificate_arn
regional_certificate_arn = ""
I can set one of those to null based on some conditional? @Chris Fowles
0.12+ supports an actual keyword null
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
@Chris Fowles, I attempted the following with no luck - any thoughts?
resource "aws_api_gateway_domain_name" "domain" {
certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "EDGE" ? var.certificate_arn : null
regional_certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "REGIONAL" ? var.certificate_arn : null
domain_name = var.domain_name
tags = var.tags
}
error:
Error: Error creating API Gateway Domain Name: BadRequestException: A certificate was not provided for the endpoint type EDGE.
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
Anyone here run into a case where you provision a module, remote state in S3, try to load that remote state in another module but TF can't find it? Unable to find remote state. State file is where I expect it to be and other modules using remote state work fine. Terraform v0.12.18.
could be many diff reasons, did you check workspace, workspace_key_prefix for terraform_remote_state? Also, if the state bucket is in a diff account, it might be wrong permissions (assume_role)
Well, this is really out of left field. I’ll check those things, looking through the remote config and the data sources to see if I have introduced a bug in there but:
• I don’t use workspaces
• State bucket is in the same account
• This same set of modules provisioned without issue in a different account
• I’m provisioning as an admin
• Other modules using remote state work fine; only modules using remote state from this specific module (my RDS module) are acting like the state file is non-existent
It will be the stupidest thing… always is.
Thanks for responding
did you check that subfolder in the AWS S3 console?
Yeah, and pulled the tfstate file. It is there, looks good. I can’t dig into this right now, have to get back to it in the morning. Might just tear down my sandbox and see if I can reproduce there first. thx.
couple of tips:
From the source remote state:
terraform output
From the state bucket that wants to consume the remote state:
terraform console
> data.terraform_remote_state.xxx.outputs
2020-02-19
Help please add on this github issue https://github.com/terraform-providers/terraform-provider-aws/issues/11961
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
anybody have any tool or project recommendations on querying terraform remote state outputs? Figured I’d ask before reinventing the wheel.
why would you need to look at the outputs?
to consume them for other shenanigans
when you create stuff in TF you can just do a data lookup in your next TF
I use remote state for TF related stuff. I want to consume those attributes for use with other tools / reporting. I know I can terraform output -json and parse from there, but thought maybe someone had already written something.
what would be even better is a project that keeps outputs in an external store for querying. I believe Consul fills this gap, but we aren’t using it so maybe something more lightweight
data "terraform_remote_state" "eks" {
backend = "s3"
workspace = terraform.workspace
config = {
bucket = "my-state-bucket"
workspace_key_prefix = "eks"
key = "terraform.tfstate"
region = var.region
}
}
data "terraform_remote_state" "dns" {
backend = "s3"
workspace = terraform.workspace
config = {
bucket = "my-state-bucket"
workspace_key_prefix = "dns"
key = "terraform.tfstate"
region = var.region
}
}
locals {
eks_cluster_identity_oidc_issuer = data.terraform_remote_state.eks.outputs.eks_cluster_identity_oidc_issuer
zone_id = data.terraform_remote_state.dns.outputs.zone_id
}
or you are asking how to consume the remote state from other tools, not terraform ?
yes
I have a good grasp on remote state within TF (consuming upstream output, etc).
what I’m looking at writing is a Python json parser that traverses all of my state buckets, grab output and store them somewhere that I can leverage for other things
other things != terraform?
(another consideration is SSM, if you’re on AWS)
yep, things other than TF that need to programmatically access outputs. I think I’m good now, thanks for all the responses folks. Great Slack community here.
i think it will be somewhat dependent on that "somewhere"… e.g. i think you could just run terraform output -json | aws s3 cp s3://bucket/key -
do that in your tf pipeline, now they’re in s3
yep, that’s pretty lightweight. good idea on the pipeline also. thx!
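On the SSM suggestion above, writing outputs as parameters from the same Terraform run keeps them queryable by anything with SSM access; a sketch (path and output are illustrative):
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/terraform/network/vpc_id"
  type  = "String"
  value = module.vpc.vpc_id
}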
2020-02-20
@here need some advice on how can I move forward with my requirement
I need to deploy multiple lambda across multiple accounts.
I use federated login to access aws.
Terraform is in github.
I need to know if there is a way to deploy the module on a conditional basis (probably var.env=prod/stage/dev) instead of using multiple branches for each account.
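One way to stay on a single branch is a per-environment map plus assume_role on the provider; a rough sketch (account IDs and role name are placeholders):
variable "env" {
  description = "prod, stage, or dev"
}

locals {
  account_ids = {
    dev   = "111111111111"
    stage = "222222222222"
    prod  = "333333333333"
  }
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::${local.account_ids[var.env]}:role/terraform-deployer"
  }
}

module "lambda" {
  source = "./modules/lambda" # hypothetical module path
  env    = var.env
}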
so I attempted to do the following
data "aws_api_gateway_rest_api" "api" {
name = var.api_name
}
resource "aws_api_gateway_domain_name" "domain" {
certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "EDGE" ? var.certificate_arn : null
regional_certificate_arn = data.aws_api_gateway_rest_api.api.endpoint_configuration == "REGIONAL" ? var.certificate_arn : null
domain_name = var.domain_name
tags = var.tags
}
but I get an error
Error: Error creating API Gateway Domain Name: BadRequestException: A certificate was not provided for the endpoint type EDGE.
any idea if this is possible using the null keyword? I am attempting to make a module which can create a regional or edge domain name
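One likely cause of the EDGE error above: endpoint_configuration on the data source is a list of objects whose types attribute holds the strings, so comparing the whole block to "EDGE" never matches and both arguments end up null. A hedged sketch of the comparison (verify the attribute shape on your provider version):
resource "aws_api_gateway_domain_name" "domain" {
  certificate_arn          = contains(data.aws_api_gateway_rest_api.api.endpoint_configuration[0].types, "EDGE") ? var.certificate_arn : null
  regional_certificate_arn = contains(data.aws_api_gateway_rest_api.api.endpoint_configuration[0].types, "REGIONAL") ? var.certificate_arn : null
  domain_name              = var.domain_name
  tags                     = var.tags
}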
Is anyone using the terraform-aws-rds module here from Cloudposse? I am running into a problem, though maybe small; I have not been able to eliminate it. Hoping someone can help?
what’s the issue?
Is it possible to pass remote state values to the vpc_id variable in this module? I always get an Unsupported Argument error while trying to pass the value of the VPC in vpc_id
data "terraform_remote_state" "vpc" {
backend = "s3"
workspace = terraform.workspace
config = {
bucket = "my-tfstate-bucket"
workspace_key_prefix = "vpc"
key = "terraform.tfstate"
region = var.region
}
}
locals {
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
}
Thanks. I figured out issue was on my end only (knew it was something stupid)
can we use locals in outputs, the way we can use vars? e.g.:
value = var.aws_env == "prod" ? aws_iam_role.asset_distribution_cross_account.*.arn : "valid only for prod"
yes. you can use any valid expression in an output value, https://www.terraform.io/docs/configuration/expressions.html
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
Output values are the return values of a Terraform module.
2020-02-21
I’m trying to have an index count when looping through an array. The application is trying to construct the XML for an AWS MQ config file. This is what I’m doing now:
<networkConnectors>
%{~ for item in element(local.mq_network_brokers, count.index) ~}
<networkConnector name="connector" userName="commonUser" uri="static:(${item})"/>
%{~ endfor ~}
</networkConnectors>
The issue is that name="connector" must be a unique label. If I could do the equivalent of i++ and then name="connector{{ i }}", it would solve my problem. But I'm struggling trying to find a way to have a counter in that loop.
Any suggestions? Convert it to a map, use for_each, and use the key? maybe?
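Terraform's template for directive accepts the same two-symbol form as for expressions, where the first symbol is the index when iterating a list; an untested sketch of the template above:
<networkConnectors>
%{~ for i, item in local.mq_network_brokers ~}
<networkConnector name="connector${i}" userName="commonUser" uri="static:(${item})"/>
%{~ endfor ~}
</networkConnectors>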
Let’s welcome @scorebot into the mix!
@scorebot has joined the channel
Thanks for adding me. Emojis used in this channel are now worth points.
Wondering what I can do? try @scorebot help
@scorebot help
You can ask me things like @scorebot my score - Shows your points @scorebot winning - Shows Leaderboard @scorebot medals - Shows all Slack reactions with values @scorebot = 40pts - Sets value of reaction
Hey Cloudposse folks — It looks like https://github.com/cloudposse/terraform-aws-cloudfront-cdn hasn’t been updated in a while. Is that no longer supported or are you folks looking for people to take up the torch on PRs like https://github.com/cloudposse/terraform-aws-cloudfront-cdn/pull/29?
Terraform Module that implements a CloudFront Distribution (CDN) for a custom origin. - cloudposse/terraform-aws-cloudfront-cdn
This is the remaining work to finish off rverma-nikiai fork in case. I submitted a PR to the upstream repo. Happy to close this and work through the existing PR if you prefer. I was able to get it …
Hey @Matt Gowie! This repo is in my queue to convert to TF 0.12. It is not enough to just replace syntax; we have requirements to cover the module with tests. If you want to contribute it in a proper way (see example: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/45) you are most welcome! Give it a try, then ping me, so I can review and run tests. This will help us and speed up the process. Otherwise we all have to wait until it is done, as well as many other modules in the queue
@Maxim Mironenko (Cloud Posse) Got it — I’ll take a crack at it next week. Thanks for the info.
2020-02-22
I think we tackled ~40 PRs from our community backlog this week
There’s some 210+ in total and ~130 terraform related. It’ll take us another few weeks at this rate, but we’re getting there!
2020-02-23
Hi, we have different AWS accounts for dev/stage/production etc. Can anyone recommend the best way to keep base (eg VPCs, subnets, security groups, K8s cluster etc) and application (RDS database, S3 buckets etc…) infrastructure in sync across all the different AWS accounts/environments, please?
What I thought could be done for each item (eg subnets) is that subnet ranges (different per account) would come from input variable files, and then the outputs would be fed into the application infrastructure. And so on for each item…
Can anyone please recommend whether this is achievable and/or it is the best way to go with IaC and Terraform?
2020-02-24
anyone?
Hi Andrea, I think you are referring to remote states; yes, that's one way to do it. The best practice is to separate the fast-moving parts from the slow ones, e.g. you won't change the VPC regularly but you might change the machine types etc.
usually what is being done is to split the "parts" into folders (states) and to have some sort of hierarchy between them, e.g. the VPC needs to be applied first, ecs after the vpc, etc…
Does this help ?
Hi, thanks. I was not talking about the remote state much (even though that is something I should do at some point, currently I commit everything to git)
so in the top directory I would have the VPC, subnets, security group etc…
while in subfolders specific resourses and/or applications
is there a command/script to make sure other colleagues will respect the hierarchy too?
For this purpose and many other we have created a docker image. I will just share some things of the whole README file that we are using in my team:
Introduction
To keep the infrastructure operational we use some tools to provide changes to the infrastructure and manage the settings.
The development environment will make sure that you don’t need to install all the tools we need, but just the ones you already use:
- Docker
- Docker-Compose
We are using ASDF in a Docker Container to get a fully operational development environment with all the tools needed.
Getting started
$ ./runtask.sh init # to initialize the development environment
Building infrastructure-console
...
Successfully tagged aws_infrastructure-console:latest
$ ./runtask.sh console # to get a console on the development environment
asdf@a8da505fb2ab$
asdf@a8da505fb2ab$ packer version
Packer v1.1.3
asdf@a8da505fb2ab$ terraform version
Terraform v0.11.2
All necessary folders are linked as docker volumes into the running docker container via docker-compose.
Note: New folders in the projects main folder need to be added in the docker-compose.yml
and require a fresh console in order to be available in the docker container.
We basically use this docker image all the time for doing anything in the terraform and in general with the infra.
In this case everyone is using the same hierarchy, same version tools, etc…
Hi @Ognen Mitev, thanks. We do something similar but not specific to Terraform only. Is this image available online anywhere? so that I can take a look…
Hi Andrea, we also do not have it for Terraform only, rather for everything that we use. https://hub.docker.com/r/zeppelinlab/ops_shell
So the hierarchy, should be imposed by the GIT e.g. PRs, README.md and predefined folder structure.
in a team you should also have “remote state backend”
and do not check-in the state in SVN
ok about remote state and git/svn commits
how do you impose the hierarchy with git though?
also, and possibly lastly, what do you do when you have multiple AWS accounts/environment?
how do you impose the hierarchy with git though
You push the first commit with the hierarchy you want.
obviously you don’t want to copy and paste the whole folder/files hierarchy… per dev/test/prod etc…
you can also create a repo template ( in the case of github)
oh I see (regarding the git commits)
do you mean this for templating? https://www.terraform.io/docs/providers/template/index.html
The Template provider is used to template strings for other Terraform resources.
No, that’s something else
also, and possibly lastly, what do you do when you have multiple AWS accounts/environment?
You could do folders per ENV, combined with workspaces
Or workspaces with multiple providers
or just folders for everything
a lot of food for thought! Thank you @Nikola Velkovski! I’ll investigate all of those…
> [ for k, v in [[1, 11], [2, 22], [3, 33]] : [k, v] ]
[
[
0,
[
1,
11,
],
],
[
1,
[
2,
22,
],
],
[
2,
[
3,
33,
],
],
]
why do terraform language designers always pick the least intuitive behaviour for an idiom?
real question, is it possible to zip 2 lists into a list of 2-tuples (like in Python)? Or do i have to use range and list1[count] / list2[count]?
(Or a helper local variable that uses range and count to pack 2 lists into a list of maps)
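A minimal sketch of both options (list contents are hypothetical): range plus indexing gives a true list of 2-tuples, while zipmap works when a map is acceptable:
locals {
  list1 = ["a", "b", "c"]
  list2 = [1, 2, 3]

  # zip into a list of 2-tuples, Python-style
  zipped = [for i in range(length(local.list1)) : [local.list1[i], local.list2[i]]]

  # or pair them up directly when a map will do
  zipped_map = zipmap(local.list1, local.list2) # => { a = 1, b = 2, c = 3 }
}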
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 04, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi team started using the cloudposse/terraform-aws-cloudfront-s3-cdn module but I am getting this error https://gapcommerce.com/
ERROR
Failed to contact the origin.
Generated Mon, 24 Feb 2020 17:21:23 GMT
Request ID: _Gnok9D5N1_Cw1Lu7Ld44vXH78Pwj-l26vxQyWbUC6GzDIhMZxhp0w==
namespace = local.workspace["namespace"]
stage = local.workspace["stage"]
name = local.workspace["name"]
aliases = ["gapcommerce.com", "www.gapcommerce.com"]
use_regional_s3_endpoint = true
origin_force_destroy = true
cors_allowed_headers = ["*"]
cors_allowed_methods = ["GET", "HEAD", "PUT"]
cors_allowed_origins = ["*.gapcommerce.com"]
cors_expose_headers = ["ETag"]
compress = true
this my config
I am not sure what I am doing wrong
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
Provides a S3 bucket resource.
Hi @MattyB
I don’t have long but IIRC this allows you to reference your gapcommerce site like a CDN through CloudFront. Give me just a minute and I’ll try to find the modules you’re looking for
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website
What’s the end goal for what you’re trying to do with the CloudPosse modules? That’ll help the community properly figure out what to suggest
@MattyB We are trying to use the module to automate provisioning of the CloudFront + S3 website deployment; gapcommerce.com is our company marketing website
@MattyB yes that is the module https://github.com/cloudposse/terraform-aws-s3-website
We are not sure what we are doing wrong https://gapcommerce.com/
i just hit an issue with this a few weeks ago where the CORS rules on the s3 bucket were configured incorrectly. Can you post them here?
in AWS console -> s3 bucket -> permissions -> CORS Configuration
do you see multiple AllowedOrigins?
see the documentation link at the bottom? check it out -> let’s turn this into a thread
@Francisco Montada
@MattyB which btn ?
In Amazon S3, define a way for client web applications that are loaded in one domain to interact with resources in a different domain.
I cannot have multiple origins?
If you check out the documentation it suggests that you need multiple CORSRules to do what you want to do. I think this is a bug in the CloudPosse module. Let’s try something out. Delete lines 5 & 6 so there’s only 1 origin - *.gapcommerce.com
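For reference, multiple CORSRules on a plain aws_s3_bucket resource are just repeated cors_rule blocks; a hedged sketch (bucket name and origins are placeholders, not what the module renders internally):
resource "aws_s3_bucket" "site" {
  bucket = "example-site-bucket"

  # one cors_rule block per distinct origin/method combination
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "HEAD"]
    allowed_origins = ["https://gapcommerce.com"]
    expose_headers  = ["ETag"]
  }

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "HEAD", "PUT"]
    allowed_origins = ["*.gapcommerce.com"]
  }
}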
ok
now go to cloudfront and invalidate your cache (you can do this 1000 times per month before they charge you)
done
ok
yep
did not help
still showing Failed to contact the origin.
can you access your static assets using the cloudfront link?
let me check
same
let’s go to DM if you don’t mind
what’s the last part of your ‘origin domain name and path’ in cloudfront?
i had to set bucket_domain_format = "%s.s3.${var.region}.amazonaws.com"
yes it is
I added -website- and it did not work
@MattyB I noticed my s3 endpoint has -website- on it
Hi Team, I am trying AWS Beanstalk using terraform. Saw the git repository https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment but it refers to a module for the beanstalk env creation. Where can I get the module code as well?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Have you seen the working example: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/examples/complete/main.tf
this is what the terratests are based off of
Is it not possible to enable AWS API gateway logging with just terraform? I’m getting “CloudWatch Logs role ARN must be set in account settings to enable logging” when trying to set logging_level in an aws_api_gateway_method_settings resource. The articles I’ve found on this suggest you need to paste a role ARN into the web console.
Looks like this will do the trick: https://www.terraform.io/docs/providers/aws/r/api_gateway_account.html
Provides a settings of an API Gateway Account.
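For reference, a minimal sketch of wiring that up (the role name is hypothetical; the managed policy ARN is the AWS-provided one):
# role API Gateway assumes to push logs to CloudWatch
resource "aws_iam_role" "apigw_cloudwatch" {
  name = "api-gateway-cloudwatch" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "apigateway.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "apigw_cloudwatch" {
  role       = aws_iam_role.apigw_cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}

# the account-level setting the console otherwise makes you paste in by hand
resource "aws_api_gateway_account" "this" {
  cloudwatch_role_arn = aws_iam_role.apigw_cloudwatch.arn
}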
Hey all, how would one deploy to multiple aws accounts at the same time? Any way to replicate the cloudformation stackset functionality?
terraform did this loooooong before cloudformation. create a provider block for each account using aliases, and pass the provider alias to the resource/module
Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.
Yeah, saw this. So one terraform apply would generate a diff for n number of accounts?
well, it’s not as easy as defining a provider and it just working. You’d need to either declare a module per provider or declare resources that consume specific providers. This repo implements it about as cleanly as it gets.
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline
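A minimal sketch of the alias pattern (account IDs, role name, and module path are hypothetical):
# one provider block per account, distinguished by alias
provider "aws" {
  alias  = "dev"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform" # hypothetical
  }
}

provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform" # hypothetical
  }
}

# the same module instantiated once per account
module "baseline_dev" {
  source    = "./baseline"
  providers = { aws = aws.dev }
}

module "baseline_prod" {
  source    = "./baseline"
  providers = { aws = aws.prod }
}
One terraform apply then produces a combined plan covering every account referenced by a provider alias.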
This is cool. Thanks!
2020-02-25
Guys, I am trying to output a huge JSON with local-exec. It complains about the argument list being too long. Is there any smooth way to get around this issue?
is it an OS-level limit?
Yes
echo "abc
...
"
is the same as
cat <<EOF
abc
...
EOF
except the latter is a heredoc which acts as a (very slow) way to do standard input
BTW, do you know what receives the “too long” argument list? The shell (local-exec running sh -c) or echo?
yeah, sh
isn’t it possible to render a local file as a state rather than writing it with a local-exec command?
Well, I need that file for another purpose
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/data.tf#L68 - rendered template as data source
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/kubectl.tf#L1 - written file as a state
A Terraform module to create an Elastic Kubernetes (EKS) cluster and associated worker instances on AWS. - terraform-aws-modules/terraform-aws-eks
it’s a swagger document that i use to import into api_gw with customized stuff, but at the same time i need to manipulate that file to use it in swagger online.
so its kinda tricky
even if you cannot directly use resource "local_file", you can write it with local_file, and then copy it with local-exec
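A minimal sketch of that combination (local.swagger_doc and the paths are hypothetical):
# write the JSON to disk without it ever becoming a shell argument
resource "local_file" "swagger" {
  content  = jsonencode(local.swagger_doc) # hypothetical local value
  filename = "${path.module}/swagger.json"
}

# then copy/manipulate the file, passing only its path on the command line
resource "null_resource" "postprocess" {
  triggers = { content = local_file.swagger.content }
  provisioner "local-exec" {
    command = "cp ${local_file.swagger.filename} /tmp/swagger-online.json"
  }
}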
let me try local_file
thanks
it worked like a charm
thanks so much
that’s all i needed. I tried with file and it didn’t work
Hi all! I have more of a general-usage TF question. We are starting to move to IaC using mostly Terraform Registry modules as root ones. The way we configured it is:
• From the Terraform Registry github (i.e. the verified vpc module) we make a wrap call in a private Bitbucket registry that we own;
• The wrap call basically mirrors the module structure, sourcing the Terraform Registry module pinned to a version, with all arguments pointing to defaulted vars;
• We make the final call to the wrapped module from our infra repo, which configures our infrastructure by filling in the desired arguments.
We have the following request: the VPC registry module has predefined subnets and routes (i.e. database, redshift, elasticache etc). We would like to add new subnet and route arguments (i.e. elasticsearch, awx etc.) to the wrapped module so they will be available to all of our reusable configuration. Can you please let me know how we can achieve this? Thank you in advance for your answers!
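One hedged way to achieve this is to declare the extra subnets alongside the pinned upstream module inside the wrapper, rather than patching the upstream module itself. A minimal sketch (module version, variable names, and attributes are illustrative, not your actual wrapper):
# inside the wrapper module in the private registry
variable "elasticsearch_subnets" {
  type    = list(string)
  default = []
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.24.0" # pinned upstream version (illustrative)
  # ...existing arguments pointing at defaulted vars...
}

# extra subnets the upstream module does not know about, attached to its VPC
resource "aws_subnet" "elasticsearch" {
  count      = length(var.elasticsearch_subnets)
  vpc_id     = module.vpc.vpc_id
  cidr_block = var.elasticsearch_subnets[count.index]
}
The wrapper then exposes its own outputs for the new subnets, so consumers of the wrapped module never touch the upstream one.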
Correct me if I’m wrong, but it seems like Terraform Workspaces is not part of the open-source version
Confusingly, the way “workspaces” are defined in Terraform Cloud & Enterprise does not map 1:1 to workspaces in the terraform cli (e.g. terraform workspace select dev)
@johncblandii can shed some more light on the differences also, our #office-hours session from 2 weeks ago
thanks @Erik Osterman (Cloud Posse), I’ve started watching it…
the presenter does not seem super in favour of TF Cloud for the first couple of minutes…
that’s fine by me, as I don’t think it’s an option for me at this stage
Why is tf cloud not an option for you at this stage?
but I’m happy to learn more about it
what about Terragrunt? is that an option commonly used for managing multiple layers (eg VPC, subnets all the way down to the application)
for multiple environments too (QA, stage, prod etc)
oh, a guest mentioned terragrunt too, let’s wait and see what they say..
Yeah, so TFC is 1 terraform workspace, but locally you can use multiple remote TFC workspaces as TF workspaces
Example: project-dev project-uat project-prod
^ TFC has them split as such. You target prefix = "project-" in your local backend config and locally you’ll have:
$ terraform workspace list
dev
uat
prod
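A minimal backend config for that prefix setup might look like this (organization name is hypothetical):
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org" # hypothetical

    workspaces {
      prefix = "project-"
    }
  }
}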
ok, so workspaces in TFC and the OSS version are different things
is there some docs on how to get started with the OSS workspaces?
because in the docs I’ve only found this: https://www.terraform.io/docs/cloud/getting-started/workspaces.html
Terraform by HashiCorp
which requires TFC…
I’m a fan of Terragrunt, fwiw. It solves a lot of the same problems as workspaces, and seems to be working for us so far. Keeps everything nice and DRY. (see infrastructure modules)
OK, +1 for terragrunt, thanks for your input
Workspaces allow the use of multiple states with a single configuration directory.
I’m currently uploading files to a bucket as follows
resource "aws_s3_bucket_object" "object" {
for_each = fileset(var.directory, "**")
....
}
does anyone know of a clever way to output the ids of the files uploaded?
output "objects" {
value = fileset(var.directory, "**")
}
is that not sufficient?
let me try that and let you know!
first time using for_each
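Since the resource uses for_each, its instances form a map, so a sketch that surfaces the actual object ids (rather than just re-listing the file names) would be:
output "object_ids" {
  # keyed by the same file paths for_each iterated over
  value = { for k, obj in aws_s3_bucket_object.object : k => obj.id }
}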
2020-02-26
Hi, do you think it is a better idea to get vpc_id in a different terraform stack using the output of the vpc stack, or simply by referencing a data source and finding the vpc by its ID?
@Laurynas you can access the state of the different deployment
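For the remote-state route, a minimal sketch (backend type, bucket, and output name are hypothetical):
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "example-tfstate-bucket" # hypothetical
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# assumes the vpc stack exports an output named "vpc_id"
# vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
The data-source lookup works too; the remote-state version just makes the dependency on the vpc stack explicit.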
https://take.ms/cPTqM How do I teach the editor to understand the 0.12.x syntax?
Which Editor @Igor Bronovskyi ?
vscode
It depends on the plugin. I am using neovim and it works perfectly.
do you have the terraform plugin installed?
neovim for vscode?
vscode
which terraform extension do you guys use with vscode? I’m using mauve.terraform and it’s fine, but everything for .12 doesn’t work perfectly. The official extension is garbage. Hoping someone has a recommendation.
I’d say patch your editor to run terraform fmt upon save of a file that has a .tf extension. That should cover 80% of the issues :)
my main issues with this extension are around syntax highlighting. After certain blocks of code, it just stops working. Intellisense and sniffing out errors before running would also be helpful. I like Terraform, but the user experience makes it feel very unfinished / unpolished.
I am using neovim and it’s working just fine; what I miss though is the auto-detection of missing variables like the plugin for IntelliJ has, but IDK if it works with 0.12 as it’s supposed to.
i remember when such things didn’t work for terraform <=0.11. give it some time, these are community-managed plugins, not hashicorp managed. 0.12 was a big shift for hcl. it will get there
here’s a trick for supporting both 0.11 and 0.12 in vscode on a project-by-project basis. use the old plugin for 0.11 and the new language server for 0.12… https://github.com/mauve/vscode-terraform/issues/157#issuecomment-587125278
Hi! Is there any plans to implement hcl2 support?
the language server is definitely the way to go for 0.12 though
let me enable that language server…haven’t done that
thx thx
I have switched to using IntelliJ TF module. I use it just for TF and nothing else. It’s great for 0.12
anyone here use private terraform modules at work? do you leverage terraform registry at all? would love to hear how you have it setup
Super interested in this question as well. I’ve just started on my terraform journey and only have a short time to try to come up with a proof of concept. I’ve written and copied a lot of example code just to get a single working vpc up and running with both public / private subnets spanning multiple AZ’s. I keep coming back to this idea that I am spending un-necessary cycles re-inventing the wheel and could simply use the ones available in the terraform registry.
right now, we use some from the registry and it’s definitely a good idea. either official modules or cloudposse modules seem to be useful
the registry is great. im more curious about private modules that leverage the registry. so then devs would use the private modules with company centric defaults that then use the registry. does that make sense?
@RB - if we find a module that we want to modify, we clone it (check the license), remove .git and create a file that references the original module. Cloudposse is a great place to start, but if your requirements differ a lot, you’ll need to fork or clone.
just keep in mind, forking is not a copy, so your infrastructure depends on another Github org or the registry
We just reference our own private modules from a git tag. I definitely would love to have a true private artifact repository for terraform modules. We use Artifactory - I’m hoping https://www.jfrog.com/jira/browse/RTFACT-16117 gets traction.
so does each developer service use the private modules? do devs contribute to private modules? also how do you update all terraform that depends on your modules?
@Joe Hosteny there are some good alternatives at the bottom of that ticket. i was looking into the terraform-aws-tf-registry
We aim to have private modules only near the root of the dependency graph. If we need something changed in a dependent module, we’ve been contributing those back to OSS modules. While PRs are open, we fork all the way down the graph through to the dependency, and run with the forks that point to our module.
Not sure if that addresses your question. We have moved to using CloudPosse as much as possible, for standardization. The init-from-module works fine with the private repo as well.
interesting. but when the parent module, in this case your fork or cloudposse, is updated, do you have to manually apply all the terraform that depends on those modules? is it automated?
manual, which is probably a good thing with most of our modules because it allows for a quick code review when you copy / merge. We have quite a bit of mods to some of these.
we plan on automating the detection of updates soon. something like: if tag on upstream project is greater than X, throw an alert or log or something.
During development, I generally reference the modules via filesystem path, so if I update any of them in the hierarchy at any point, I can just run make reset && make deps in the geodesic container for the desired AWS account, and all dependencies will get updated. Otherwise, yeah, you have to tag the intermediate repos and push the references up to the root.
I don’t think it’s any worse than any other dependency chain, TBH. If I have multiple logical changes in a dependency, I’ll do the PRs in multiple branches, then create a local branch that merges both of those (only locally) so I can get all of them for test at a single time in the local filesystem.
I did this pretty extensively for the CP ECS web app and its dependencies while we made a module for concourse
We use private modules without registry, just pointing source to git refs (tags), no need for more yet… Didn’t investigate any registry
I used Terraform Cloud for our private TF modules and it worked great
side question. because it’s difficult to find if an aws console resource is managed by terraform… i was thinking about creating a terraform tag, but instead of a boolean, it could have the value of the git remote. thoughts on this? and thoughts on how to retrieve this, or can it only be done using a null-resource / local-exec https://stackoverflow.com/a/49425731 ?
i’d been thinking about something similar lately. maybe a tag for the source location and for the tfstate location
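One hedged way to capture the remote without a null-resource is the external data source from the hashicorp/external provider (the shell one-liner and tag names are illustrative):
# read the git remote at plan time
data "external" "git" {
  program = ["sh", "-c", "printf '{\"remote\":\"%s\"}' \"$(git config --get remote.origin.url)\""]
}

locals {
  common_tags = {
    terraform  = "true"
    git_remote = data.external.git.result.remote
  }
}

# then merge local.common_tags into the tags of each taggable resource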
We leverage many of the Cloudposse modules, and pin most of our stuff around their label module: https://github.com/cloudposse/terraform-null-label When looking at the console it is very easy to see things that were named with the output from that module. We then pair that with tags on all resources that support tags, and it becomes very clear how a resource was created and how to find the source in our tf codebase.
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
What is the best practice for restoring your base RDS setup with terraform in case of disaster? The current RDS restore-from-snapshot flow requires a new database. Could you recreate your db from your snapshot? And then, when you set snapshot_identifier back to null, it will destroy the DB.
disaster recovery should be treated as an abnormal event - i.e. don’t try to bake it (at least the recovery bit) into your infrastructure provisioning. you should be following a careful process with checks along the way during a disaster situation so as to avoid making the situation worse.
you probably want to consider freezing all automation around a disaster event, as the system is in an unexpected state and you don’t want automation to propagate or exacerbate the issue
but how do you get the restored DB back into the new terraform state?
terraform import to bring things back into statefile management
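i.e. something along these lines (resource address and DB identifier are hypothetical):
terraform import aws_db_instance.primary restored-db-instance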
Can someone please update this link - https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/examples/simple
getting 404 on it
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
Yep, was just wondering why it’s given in the readme section of this - https://github.com/cloudposse/terraform-aws-elasticache-redis
in the Examples
tab
hehe
@Maxim Mironenko (Cloud Posse) is reviewing PRs like mad right now, so if you submit it, he’ll get to it.
2020-02-27
Hi, does anyone have experience with this module: https://github.com/cloudposse/terraform-aws-eks-cluster?
I have issues while using two “worker groups”.
When I add a second “worker group” the network goes crazy:
• on some pods I can’t resolve DNS records
• on some pods I don’t have network connectivity
• on EC2 nodes the network and DNS are fine
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
We have 4 worker groups, 1 per AZ plus a special one. Are all the subnets tagged? Think both with cluster and shared. All security groups allowing all others?
You could stand up a Debian pod in each node in each AZ and work out the combo
We have the following tags:
private_subnet_tags = {
"kubernetes.io/cluster/eks-cluster" = "shared"
"kubernetes.io/role/internal-elb" = true
}
public_subnet_tags = {
"kubernetes.io/cluster/eks-cluster" = "shared"
}
vpc_tags = {
"kubernetes.io/cluster/eks-cluster" = "shared"
}
2020-02-28
anyone have luck with getting tflint to work with tf12 ?
recently compartmentalized some terraform into its own module and had to do over 20 import statements. wrote this up into a gist to make it faster.
https://gist.github.com/nitrocode/7c2f5386f144c7b06e38c2c38292889e
is there anything like this that has already been worked on? id rather not reinvent the wheel
Good work! Did it so many times but didn’t automate it because I had few resources to move. I don’t know of such a tool, so you should create a repo for it. If I have such a case in the future I might PR improvements
thanks. there is some funky logic in it but it made some module migrations a lot easier
I’ve seen that comment
I used this script today. Had to rewrite it a little bit to work with GCP but other than that, pretty good.
it’s a bit difficult to guess what the import statements are, and it currently parses stdout instead of converting to JSON first, so it can definitely be improved. if you folks like, i can make a repo and we can all contribute to it.
id love to see your fork too @Marcin Brański
2020-02-29
I’ve done similar, but in awk.