#terraform-0_12 (2019-11)
Discuss upgrading to terraform 0.12
Archive: https://archive.sweetops.com/terraform-0_12/
2019-11-01
I’m working with the git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=0.17.0 module, and when global databases are used Aurora changes the replication_source_identifier
of the secondary cluster, so every time we apply it tries to do an update in place. Is it possible to add some sort of ignore when global databases are configured?
just to clarify, Aurora global clusters do not allow the secondary to be created with replication_source_identifier
populated; the global engine changes the replication_source_identifier
after the secondary cluster joins the global cluster, so that is why TF sees drift in the state
the workaround is to add the replication_source_identifier
after the global cluster is created and the secondary is active
It would be nice if there was some sort of lifecycle argument so that TF would ignore those changes
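A static lifecycle block (no interpolation needed, so it should work in both 0.11 and 0.12) could sketch the ignore; this assumes the secondary cluster resource is named aws_rds_cluster.secondary, which is a hypothetical name:

```hcl
resource "aws_rds_cluster" "secondary" {
  # ... other cluster arguments ...

  # Aurora populates this after the secondary joins the global cluster,
  # so tell Terraform not to treat that change as drift.
  lifecycle {
    ignore_changes = [replication_source_identifier]
  }
}
```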
We haven’t used it in this manner before (awesome that it kind’a works)
not sure the best option. i doubt lifecycle blocks support interpolation in 0.12 (they didn’t in 0.11)
yes, I’m not sure either
I will try to create a new cluster and add the replication_source_identifier
and see what happens; if I get the same error (most probably I will), I will file a bug
2019-11-02
@Erik Osterman (Cloud Posse) latest terraform-docs.awk
fix: https://github.com/cloudposse/build-harness/pull/174
Fix description key inside type This PR fixes a bug when a key named description is inside the type block of a variable section: variable "ingress_cidr_blocks" { description = "Bzzzzz&…
You rock! Thanks
@Andriy Knysh (Cloud Posse)
terraform is replacing the instance while enabling ebs encryption after creation of the instance. Is this expected behaviour?
2019-11-04
2019-11-06
another pull request : https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/40
This option is for the cases where the ECS launch type is EC2, the network mode is host, and there is no ALB fronting the application. One could argue that this module has morphed so much that the alb i…
left a comment @Andriy Knysh (Cloud Posse)
2019-11-07
2019-11-08
do you guys know if it is possible to extract the region from a provider alias? something like aws.secondary.region?
Would a data source be sufficient? https://www.terraform.io/docs/providers/aws/d/region.html
Provides details about a specific service region
I have two providers for different regions in the same file
so the resource block has a provider = aws.primary
that is pinned to a specific region
Use the provider in a data resource to fetch the region?
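For example, the aws_region data source could be pointed at the aliased provider and resolves to whatever region that provider is configured for (a sketch, assuming an aws.secondary alias is already declared):

```hcl
# Returns the region of the aliased provider, not the default one
data "aws_region" "secondary" {
  provider = aws.secondary
}

# then reference data.aws_region.secondary.name elsewhere
```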
I think I’m just going to add an additional variable
this thing runs in one region but spins up multiple dependent resources in multiple regions
What happens if one of those regions is having availability issues? Could you keep each region a separate state?
this is for Aurora Global DBs
so I might not have access to the state but we do not have to touch the state even if the region is down
I think I will separate it later
2019-11-12
Ok, the documentation for dynamic leaves a lot of useful examples out of the equation.
How do I do nested dynamic blocks? We have a module for aws_elasticsearch_domain that takes a cluster_config var. This can have a nested block inside… https://www.terraform.io/docs/providers/aws/r/elasticsearch_domain.html#cluster_config
Terraform resource for managing an AWS Elasticsearch Domain.
I found some good examples on their user forum… https://discuss.hashicorp.com/c/terraform-core
ty
I need to only sometimes supply zone_awareness_config…
The dynamic/for_each syntax in 0.12 is complete trash, and the documentation is complete trash
This is so non-obvious and poorly explained
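For the record, nesting a dynamic block inside a static one does work for the "only sometimes supply zone_awareness_config" case; here is a sketch, where var.zone_awareness_enabled and var.availability_zone_count are hypothetical variable names:

```hcl
resource "aws_elasticsearch_domain" "default" {
  domain_name = "example"

  cluster_config {
    instance_type          = "t2.small.elasticsearch"
    instance_count         = 2
    zone_awareness_enabled = var.zone_awareness_enabled

    # Emit the nested block zero or one times depending on the flag:
    # an empty for_each list means the block is omitted entirely.
    dynamic "zone_awareness_config" {
      for_each = var.zone_awareness_enabled ? [true] : []
      content {
        availability_zone_count = var.availability_zone_count
      }
    }
  }
}
```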
I see Cloudposse has eschewed using dynamic blocks here https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/main.tf#L121-L132
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
@mrwacky examples of nested dynamic blocks ^
ha. I just gave up and passed a bunch of string variables to the module
thanks though
It was confusing at first, but now I use it for everything
2019-11-14
Hey all, new here and still new to Terraform - I’m trying to use Terraform to configure an AWS CodePipeline. It will plan and apply just fine, but the pipeline fails in the real world every time at the source stage. It seems to need additional S3 permissions and I haven’t yet figured out how to provide them. The error is Insufficient permissions The provided role does not have permissions to perform this action. Underlying error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:...
I’ve tried a blanket S3 allow-all permission policy on both the pipeline’s associated role and the codebuild’s associated role (desperation) to no avail. - anyone got any advice?
Sounds more like an AWS question. Try the IAM policy simulator? https://policysim.aws.amazon.com/ also you can check the access advisor for the policy you have created
@gabethexton maybe these examples could help https://github.com/cloudposse/terraform-aws-ecs-codepipeline/blob/master/main.tf#L112
Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline
Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd
those pipelines use GitHub as Source (not S3), but they are working so might be of some help
Thanks @Andriy Knysh (Cloud Posse) - it turns out the KMS encryption key was causing the failure, once I disabled that it ran just fine. I’ll keep these handy though! Off to other errors!
2019-11-15
2019-11-18
@Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/17 I redid the mixed-type scaling based on what was done for 11. There’s a bug filed on there as well regarding the variable type constraint (which tbh is more easily fixed by removal; I didn’t remove it from my repo and PR it because of the existing PR).
Does as the label says; adds an example using it which I used to test that it works as expected. The "make && make init" keeps trying to install and setup terraform 0.11 which is …
thanks @chrism, will review
2019-11-20
What’s up with the list/set type differences in TF0.12? Is having both actually beneficial? Seems to cause more grief than good.
Lists can have duplicate items and are ordered, sets cannot have duplicates (and I don’t think are ordered)
I understand how they’re different, but it’s hard to work with both at the same time (mixing for_each and count, or having to convert toset or tolist to use the correct functions)
2019-11-21
well, it’s go-based. so it is strongly typed. can’t really have one object type with different properties like that
for_each requires a map or a set. that’s because the value/key is used as the resource id, and the resource id must be unique. so lists are not appropriate, or would result in late failures (during apply, instead of plan)
count works with lists because the index is used as the resource id, rather than the value, so it does not matter if there are duplicate values in the list
personally, i’m abandoning count anywhere i can, and using a map with for_each instead of a set
@loren Thanks, that helps. So I would just use each.key instead of count.index?
Seems odd that it would use the key as the resource id, since the key can be an arbitrary string
yep
one goal with for_each is to address the problem with count where modifying the list items (changing order, removing an item, etc), would cause resources to be deleted and recreated because their index changed
for_each addresses that by mapping the resource to the value instead of the index
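The difference above can be sketched side by side (bucket names and resource labels here are made up for illustration):

```hcl
locals {
  names = ["a", "b", "c"]
}

# count: identity is the index. Removing "b" shifts "c" from index 2 to 1,
# so Terraform wants to destroy and recreate it.
resource "aws_s3_bucket" "by_count" {
  count  = length(local.names)
  bucket = local.names[count.index]
}

# for_each: identity is the key. Removing "b" only destroys "b";
# "a" and "c" are untouched.
resource "aws_s3_bucket" "by_key" {
  for_each = toset(local.names)
  bucket   = each.key
}
```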
Thanks for the explanation. Makes sense.
2019-11-24
Hi All,
Is there a best-practice way of creating a list of maps to feed into a secrets_manager_secret resource, for example? I have a few maps that represent their own respective secret, with each map having keys that detail “name”, “description” and “secret”.
Something like this: [ { name = secret_name description = secret_description value = secret_value }, { name = secret_name description = secret_description value = secret_value } ]
If I was feeding this into a variable, how would I achieve this given the type constraints? I thought of creating an object for this but it seems messy to have an attribute id for each map.
Be keen to understand if anyone else has done something like this in TF 12+
Looks to me like list(map(string))?
Can you do this? Wasn’t aware you could have collection types constructed in this way?
A list of objects would be pretty clean for that example, also… If you definitely wanted every item to require those three keys…
Yeah, sure, lists of maps are great
On my phone, or I’d write out the object code for you
This is great @loren. I’ll give it a go and come back to you on this
You’re an animal
Here’s an example, using objects, https://github.com/plus3it/terraform-aws-tardigrade-iam-principals/blob/master/variables.tf#L63
Terraform module to create IAM users/roles. Contribute to plus3it/terraform-aws-tardigrade-iam-principals development by creating an account on GitHub.
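Inlined for the secrets case, an object-typed variable might look like this (a sketch; the variable name, keys, and resource label are assumptions):

```hcl
variable "secrets" {
  type = list(object({
    name        = string
    description = string
    value       = string
  }))
  default = []
}

# Each item in the list becomes one secret
resource "aws_secretsmanager_secret" "this" {
  count       = length(var.secrets)
  name        = var.secrets[count.index].name
  description = var.secrets[count.index].description
}
```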
2019-11-27
Hi, is it possible to do
data "aws_vpc" "main_vpc" {
tags = {
provisioning = "terraform"
environment != "prod"
}
provider = aws.primary
}
actually what I need is to find a vpc that does not have a specific tag
I have one vpc that has a tag shared = true
and I need to find the other vpc that has the same tags except for that one
2019-11-28
Why not add the tag shared = false on the other lot and look for that…
I thought about that too
I think that is easier
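With that convention in place, the lookup becomes a plain tag match instead of a negative filter (a sketch, reusing the tags from the question above):

```hcl
# Tag values in data source filters are strings, hence "false"
data "aws_vpc" "not_shared" {
  provider = aws.primary

  tags = {
    provisioning = "terraform"
    shared       = "false"
  }
}
```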