#terraform (2019-08)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-08-01
any workaround for:
The "count" value depends on resource attributes that cannot be determined until apply
I want to use the output of MSK bootstrap servers to create R53 CNAME entries
don’t reference an output/attribute in count? can only reference vars and locals (and the locals must be fully deterministic in advance, i.e. cannot themselves rely on outputs/attributes)
@loren so is it possible to get the output from e.g. resource "aws_msk_cluster" and use it as count in another resource, e.g. "aws_route53_record"?
not as the count, no, not as far as i know, you’ll always get count cannot be determined errors
you can use the output in an attribute on another resource, but not in count
and you can set the count some other way where the length is fully deterministic from a var or local without relying on an attr of a resource/data source
say you pass in a var that determines the number of nodes in your cluster… you can use that var to count aws_route53_record, and then reference the attrs of aws_msk_cluster in the aws_route53_record attrs
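i.e. roughly this pattern (a minimal sketch; var.broker_count, var.zone_id and the resource names are assumptions, and bootstrap_brokers entries carry a :port suffix you'd still need to strip for a CNAME):
variable "broker_count" {
  default = 3 # must match number_of_broker_nodes on the cluster
}

resource "aws_route53_record" "broker" {
  count   = "${var.broker_count}" # deterministic: a var, not a resource attribute
  zone_id = "${var.zone_id}"
  name    = "broker-${count.index + 1}"
  type    = "CNAME"
  ttl     = "300"

  # referencing cluster attributes is fine here, just not in count
  records = ["${element(split(",", aws_msk_cluster.default.bootstrap_brokers), count.index)}"]
}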
yeah I saw a workaround with a bash echo
# Verify that the count matches the list
resource "null_resource" "verify_list_count" {
  provisioner "local-exec" {
    command = <<SH
if [ ${var.topic_arns_count} -ne ${length(var.topic_arns)} ]; then
  echo "var.topic_arns_count must match the actual length of var.topic_arns";
  exit 1;
fi
SH
  }
}
smth like this
here’s the issue with all the fun details, https://github.com/hashicorp/terraform/issues/12570
I was using terraform modules for IAM user creation, add multiple inline policies and multiple policy_arn's to the user after creation. But now I got an issue where I create an IAM_POLICY and g…
situation is somewhat improved in tf 0.12, so may run into the error less frequently, but it’s still a problem
thnx for answer
2019-08-02
2019-08-03
hello @here does anyone know of / can recommend a good libvirt provider other than this one here > https://github.com/dmacvicar/terraform-provider-libvirt
Terraform provider to provision infrastructure with Linux’s KVM using libvirt - dmacvicar/terraform-provider-libvirt
I’m using this with good results. What are you wanting to do?
hey @kskewes i have an issue with referencing local image on the server where libvirtd is running
# We fetch the latest ubuntu release image from their mirrors
resource "libvirt_volume" "ubuntu-qcow2" {
  name   = "ubuntu-qcow2"
  pool   = libvirt_pool.default.name
  # path = "/home/ivano/ubuntu-qcow2"
  source = "http://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
  format = "qcow2"
}
this is what i mean, if you look at the example from terraform-provider-libvirt/examples/v0.12/ubuntu
if i try 'path' it's always local to the server where terraform is started, and spinning up a webserver on the remote end so i can use source rather than path is also not ideal. any other ideas?
Took a little bit to work out cloud init gotchas. https://gitlab.com/kskewes/k8s-with-gitlab/tree/master/terraform/env-dev/libvirt-k8s
Create and maintain a multi-arch Kubernetes cluster utilizing Gitlab CI/CD tools where possible.
sweet, that's elegant, thx
Sorry, your other messages didn't show before I replied, but it looks like mine will work for you huh :) I haven't looked at the repo in a few months and should look at what can be done better with 0.12. any suggestions appreciated, otherwise enjoy!
thanks @kskewes will check it out. think i already found a few issues, e.g. 'count' can't be used as a var name as it conflicts with the module, and there are also a few glitches in how it works with 0.12. lastly, you still pull the source via http…
Thanks! Will change count. Plan was to turn it into a module anyway. Re source, I use a local file on the kvm server, per example tfvars.
2019-08-04
2019-08-05
Does anybody know how to reference an instance created via a google_compute_instance_group_manager? I'm creating a route via google_compute_route and I need the name of the instance created via the group so I can set next_hop_instance.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Aug 14, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Will it be recorded and shared post the event, as well?
we’ve just published our first EMR module (by @Andriy Knysh (Cloud Posse)) https://registry.terraform.io/modules/cloudposse/emr-cluster/aws/0.1.0
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
We’ve been inconsistent about recording them
if we do, they’ll be posted to #office-hours
NP. Thanks!
I am facing an issue with provisioning multiple Windows EC2 instances using Terraform.
This is an overview of my TF template (Oversimplified with syntax ignored):
Template Overview
resource "aws_instance" "ec2instance" {
  count = var.instance_count # count comes from a variable

  connection {
    type     = "winrm"
    host     = "${self.private_ip}"
    user     = "${var.username}"
    password = "${var.admin_password}"
    timeout  = "${var.timeout_tf}"
  }

  # provisioner "remote-exec" (inline):
  #   1. powershell.exe rename-computer (machine is rebooted once this is run)

  # provisioner "remote-exec" (inline):
  #   2. powershell copy platform code from s3 bucket
  #   3. powershell.exe run DomainAdd.ps1 (machine is rebooted once this is run)

  # provisioner "remote-exec" (inline):
  #   4. powershell.exe run PreDeploy.ps1 (DSC script)
}
- If I set the count of the instance to 1, all of the above provisioning steps #1, #2, #3 and #4 work fine.
Issue:
If I set the count of the instance to anything other than 1 (e.g. 2), Terraform successfully runs #1, #2 and #3 on both the instances and runs #4 on ONLY ONE of the instances.
Observations:
- After running #3 on both instances, remote-exec is able to establish the connection with both instances successfully, however it runs #4 on only one of the instances.
- Even after running #4 on one instance, it keeps showing the following output unless I force TF to stop.
aws_instance.ec2instance[1]: Still creating... [7m0s elapsed]
aws_instance.ec2instance[0]: Still creating... [7m0s elapsed]
aws_instance.ec2instance[1]: Still creating... [7m10s elapsed]
aws_instance.ec2instance[0]: Still creating... [7m10s elapsed]
aws_instance.ec2instance[0]: Still creating... [7m20s elapsed]
aws_instance.ec2instance[1]: Still creating... [7m20s elapsed]
aws_instance.ec2instance[1]: Still creating... [7m30s elapsed]
aws_instance.ec2instance[0]: Still creating... [7m30s elapsed]
aws_instance.ec2instance[0]: Still creating... [7m40s elapsed]
aws_instance.ec2instance[1]: Still creating... [7m40s elapsed]
aws_instance.ec2instance[1]: Still creating... [7m50s elapsed]
Why is Terraform behaving inconsistently when the instance count is set to anything other than 1? Is there something I might be missing? Any suggestions/pointers will be greatly appreciated! TF_LOG are not showing anything useful.
2019-08-06
Need help in accessing values as a list element inside map values in terraform
variable "controller_name" {
type = "list"
default = [{
z1 = ["EKS-controller1"]
z2 = []
z3 = ["EKS-controller1","EKS-controller2"]
z4 = []
}]
}
you can use element() or [...] to access the list items
for maps, the syntax is var.MAP["KEY"]. For example, ${var.amis["us-east-1"]} would get the value of the us-east-1 key within the amis map variable.
for lists, the syntax is "${var.LIST}". For example, "${var.subnets}" would get the value of the subnets list, as a list. You can also return list elements by index: ${var.subnets[idx]}
Embedded within strings in Terraform, whether you're using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
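for your variable specifically, in TF 0.12 syntax you'd index the outer single-element list, then the map key, then the inner list; a minimal sketch:
locals {
  z3_all    = var.controller_name[0]["z3"]             # ["EKS-controller1", "EKS-controller2"]
  z3_second = var.controller_name[0]["z3"][1]          # "EKS-controller2"
  z1_first  = element(var.controller_name[0]["z1"], 0) # "EKS-controller1"
}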
this is what the variable looks like
@mmarseglia we converted the module to TF 0.12 https://github.com/cloudposse/terraform-aws-ecr/releases/tag/0.7.0
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
and this example are automatically tested on CI/CD https://github.com/cloudposse/terraform-aws-ecr/tree/master/examples/complete
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
can you try with TF 0.12?
i’m not sure all the modules I’m using have been converted. they weren’t, last I checked.
i built a manifest using elasticbeanstalk app w/ a multidocker container.
i am using 7 different modules. i think the elasticbeanstalk ones haven't been converted yet?
no, beanstalk has not been converted yet
i mean that we converted https://github.com/cloudposse/terraform-aws-ecr/releases/tag/0.7.0 to TF 0.12 a few weeks ago and it did not throw any policy errors
yes, i would like to use the new one. you have done great work converting them in a short time.
i’ll figure out a way around this in the short term and look to upgrade that module to 0.7.0
@mmarseglia try to delete the statement = []
from https://github.com/cloudposse/terraform-aws-ecr/blob/0.6.1/main.tf#L124
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
we don’t have it in TF 0.12 version https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L120
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
(and we tested only 0.12 version recently)
If someone from Cloudposse gets a chance to review a PR: https://github.com/cloudposse/terraform-aws-datadog-integration/pull/11
Some minor updates to allow the module to work with Terraform 0.12
@sweetops thanks for the PR, looks good, please see the comments https://github.com/cloudposse/terraform-aws-datadog-integration/pull/11#pullrequestreview-272305121
2019-08-07
Guys, I am looking at how I can tag resources dynamically, without repeating the same block of code with one change.
# Resource 1
tags = merge(
  var.tags,
  map(
    "Name", format("dev-bastion-0%s.${var.domain}", count.index + 1),
    "type", "bastion"
  )
)

# Resource 2
tags = merge(
  var.tags,
  map(
    "Name", format("dev-app-0%s.${var.domain}", count.index + 1),
    "type", "app"
  )
)
any suggestions are welcome, thanks. Okay, I see now that I could just abstract that with a module
i am not sure, but i guess you could do it this way, for example:
data "null_data_source" "tags" {
  count = "${length(keys(var.tags))}"

  inputs = {
    key                 = "${element(keys(var.tags), count.index)}"
    value               = "${element(values(var.tags), count.index)}"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_group" "asg" {
  # ...
  tags = "${data.null_data_source.tags.*.outputs}"
}
in terraform 0.12 you can do it with a dynamic block and its for_each argument
dynamic "tag" {
  for_each = local.common_tags

  content {
    key   = tag.key
    value = tag.value
  }
}
i hope it's helpful for you
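for context, a rough sketch of how that dynamic block sits inside an ASG (local.common_tags is an assumption here; note ASG tag blocks also need propagate_at_launch):
locals {
  common_tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}

resource "aws_autoscaling_group" "asg" {
  # ... other required arguments elided ...

  dynamic "tag" {
    for_each = local.common_tags

    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}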
#office-hours starting in 15m https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
2019-08-08
I wonder what the backstory is here…
a) why did they launch with HCL support
They had not yet been purchased by MSFT
b) why did they drop HCL support
MSFT didn’t want to dilute the brand with a bush league API…
haha, possibly!
strange that they even started with HCL support
maybe the devs working on that feature were also supporting the terraform github provider, and thought, wouldn’t it be neat if…?
Hi all, this is probably a very dumb and novice question, but I'm having a hard time understanding what I'm doing wrong here. As far as I can tell, when I try to attach a policy document using a role from the CP role repository, the base module is expecting a string?
data "aws_iam_policy_document" "s3_full_access" {
statement {
sid = "FullAccess"
effect = "Allow"
resources = [
"arn:aws:s3:::${module.static-app.s3_bucket}",
"arn:aws:s3:::${module.static-app.s3_bucket}/*"
]
actions = [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation",
"s3:AbortMultipartUpload",
]
}
}
module "s3-write-role" {
source = "git::<https://github.com/cloudposse/terraform-aws-iam-role.git?ref=0.4.0>"
enabled = "true"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
policy_description = "Allow S3 FullAccess"
role_description = "IAM role with permissions to perform actions on S3 resources"
policy_documents = ["${data.aws_iam_policy_document.s3_full_access.json}"]
}
I'm wondering if I'm missing something obvious here, or not understanding how to use this module?
Follow up error
Error: Incorrect attribute value type
on .terraform/modules/s3-write-role.aggregated_assume_policy/main.tf line 23, in data "aws_iam_policy_document" "zero":
23: override_json = "${element(local.policies, 0)}"
Inappropriate value for attribute "override_json": string required.
You are missing something there
Why error says aggregated_assume_policy ?
Oh, your source is a git tag
I’m sorry, I’m not understanding what the source of confusion is? Could you possibly rephrase?
@Rich Allen can you look at the example https://github.com/cloudposse/terraform-aws-iam-role/blob/master/example/main.tf
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
(I personally did not test the latest changes to the module, so can’t just say what’s the exact issue is)
That is the example I’m working from. I’ve looked through it several times. And can’t see a difference. From what I can tell, my only real difference is I don’t have an outputs.tf
is it required that I have that file, so that the module exposes the outputs to the consumer?
ok I see the issue https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator/blob/master/main.tf#L6
Terraform module to aggregate multiple IAM policy documents into single policy document. - cloudposse/terraform-aws-iam-policy-document-aggregator
it does not work with one item in the list
since it checks only for >1
well so, I can provide an update, and I'm using 2
so the base + full admin, as the example suggests
policy_documents = ["${data.aws_iam_policy_document.s3_full_access.json}"]
the previous was to simplify, I will post an updated stanza
I’m now using 2 as the example suggests
provider "aws" {
region = "${var.region}"
}
module "static-app" {
source = "git::<https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=0.10.0>"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
aliases = "${var.aliases}"
parent_zone_name = "${var.parent_zone_name}"
default_root_object = "${var.default_root_object}"
acm_certificate_arn = "${var.acm_certificate_arn}"
cors_allowed_headers = ["GET", "HEAD"]
cors_allowed_methods = ["GET", "HEAD"]
cors_allowed_origins = ["*"]
}
data "aws_iam_policy_document" "s3_full_access" {
statement {
sid = "FullAccess"
effect = "Allow"
resources = ["arn:aws:s3:::${module.static-app.s3_bucket}/*"]
actions = [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation",
"s3:AbortMultipartUpload",
]
}
}
data "aws_iam_policy_document" "base" {
statement {
sid = "BaseS3Access"
actions = [
"s3:ListBucket",
"s3:ListBucketVersions",
]
resources = ["*"]
effect = "Allow"
}
}
module "s3-write-role" {
source = "git::<https://github.com/cloudposse/terraform-aws-iam-role.git?ref=0.4.0>"
enabled = "true"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
policy_description = "Allow S3 FullAccess"
role_description = "IAM role with permissions to perform actions on S3 resources"
policy_documents = [
"${data.aws_iam_policy_document.base.json}",
"${data.aws_iam_policy_document.s3_full_access.json}"
]
}
working for you?
no this is the result of that update
17:04 $ terraform plan
Error: Incorrect attribute value type
on .terraform/modules/s3-write-role.aggregated_assume_policy/main.tf line 23, in data "aws_iam_policy_document" "zero":
23: override_json = "${element(local.policies, 0)}"
Inappropriate value for attribute "override_json": string required.
Error: Incorrect attribute value type
on .terraform/modules/s3-write-role.aggregated_policy/main.tf line 23, in data "aws_iam_policy_document" "zero":
23: override_json = "${element(local.policies, 0)}"
Inappropriate value for attribute "override_json": string required.
what TF version are you using?
17:04 $ terraform -v
Terraform v0.12.2
- provider.aws v2.22.0
- provider.local v1.3.0
- provider.null v2.1.2
- provider.template v2.1.2
Ok, I’m seeing something nasty
I hope it’s not something that dumb like I forgot brew update
this module has not been updated to TF 0.12 yet https://github.com/cloudposse/terraform-aws-iam-role
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
that’s why the errors
At some point you are using this source: https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator.git?ref=tags/0.1.2
Terraform module to aggregate multiple IAM policy documents into single policy document. - cloudposse/terraform-aws-iam-policy-document-aggregator
Which is this https://github.com/cloudposse/terraform-aws-iam-role/blob/05d1734bc40a73d6f21387b58c1f3204dbbe09aa/main.tf
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
Main file
so I'm a newbie here, @Andriy Knysh (Cloud Posse), where is that documented? I'm not seeing it, but that could be true.
At line 23, you will see the error
both modules were not converted to TF 0.12 yet
that’s why TF 0.12 throws the errors
we’ll get to it soon
@Joan Hermida I’m not sure I understand the context there
Downgrade to the latest 0.11 version
@Andriy Knysh (Cloud Posse) could you just point out where I can see that? To avoid version mismatches in the future?
so for CloudPosse modules, the ones that were converted to TF 0.12 have the hcl2 tag in the repo, e.g. https://github.com/cloudposse/terraform-aws-rds
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
and you can tell TF 0.11 from TF 0.12 by the syntax
TF 0.12 does not use any string interpolations https://github.com/cloudposse/terraform-aws-rds/blob/master/outputs.tf
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
and uses real first-class types like bool, number, list(string), map(string) instead of strings like "string" and "list" https://github.com/cloudposse/terraform-aws-rds/blob/master/variables.tf#L19
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
ty, that’s very helpful, I’m getting a few new errors related to the downgrade but I’ll fix these up. Much appreciate the help.
Hello, I'm using "git::https://github.com/cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms.git?ref=tags/0.6.1" and I notice when using the newer alb module that target_group_name = "${module.alb.target_group_name}" and target_group_arn_suffix = "${module.alb.target_group_arn_suffix}" are not valid outputs anymore, so it can't be used with this cloudwatch-sns module. are you guys deprecating the use of the cloudwatch-sns-alarms or recommend something else?
and now I think I found a bug , I’m getting this :
Error: Error running plan: 1 error occurred:
* module.alb_ingress.local.target_group_arn: local.target_group_arn: Resource 'aws_lb_target_group.default' not found for variable 'aws_lb_target_group.default.arn'
locals {
  target_group_enabled = "${var.target_group_arn == "" ? "true" : "false"}"
  target_group_arn     = "${local.target_group_enabled == "true" ? aws_lb_target_group.default.arn : var.target_group_arn}"
}
If I pass the ARN or the module output, it still fails
unless I’m doing something really wrong
I made a copy of the module, removed the data resource, and it does work, so I'm guessing the problem lies here: https://github.com/cloudposse/terraform-aws-alb-ingress/blob/0.7.0/main.tf#L6
does it really need to be there? why make a lookup of something I'm already passing, and create a data resource that is not used in that tf?
I guess it is to check that the target group exists before continuing, but in my case I used -target module.alb to make sure I had everything before continuing, and somehow it still fails
2019-08-09
Hi everyone - new to the channel but was hoping to find a solution to a problem I am running into dealing with output variables from modules
I use the aws vpc terraform module, and there is a specific output that gets created as a list, specifically the database subnet output. I am trying to reference this output as an input variable for an rds module
does anyone know how to properly reference an output list generated by a module, as an input variable for another module?
any guidance or direction would be sincerely appreciated
outputs types are the same regardless whether it’s a list or a string
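for example, roughly (the vpc module is the registry one mentioned above; the rds module source here is hypothetical):
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  # ...
}

module "rds" {
  source = "git::https://github.com/example/terraform-aws-rds.git" # hypothetical

  # the list output is passed straight through as a list input
  subnet_ids = "${module.vpc.database_subnets}"
}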
for a working example see https://github.com/cloudposse/terraform-aws-emr-cluster/blob/master/examples/complete/main.tf
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
and another example where the list output from the subnet module goes into the elasticsearch module https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/examples/complete/main.tf#L33
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
same for RDS https://github.com/cloudposse/terraform-aws-rds/blob/master/examples/complete/main.tf#L45
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
and RDS Aurora https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/examples/complete/main.tf#L40
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
bingo!
thank you @Andriy Knysh (Cloud Posse)
any ideas on this ?
and now I think I found a bug , I’m getting this :
Error: Error running plan: 1 error occurred:
* module.alb_ingress.local.target_group_arn: local.target_group_arn: Resource 'aws_lb_target_group.default' not found for variable 'aws_lb_target_group.default.arn'
I know the posts from yesterday usually get buried….
what module are you using? can you share the code
mmm that did not work, my complete post is just a bit higher up
@jose.amengual here is how we use aws-alb-ingress
https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L34
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
aws-ecs-web-app
is used here https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L78
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
which itself is used here (working example) https://github.com/cloudposse/terraform-root-modules/blob/master/aws/ecs/atlantis.tf#L173
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
that is exactly where I got the example :
module "alb_ingress" {
#source = "git::<https://github.com/cloudposse/terraform-aws-alb-ingress.git?ref=tags/0.7.0>"
source = "../terraform-aws-alb-ingress"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = "${var.attributes}"
vpc_id = "${var.vpc_id}"
port = "${var.container_port}"
health_check_path = "${var.health_check_path}"
target_group_arn = "${module.alb.default_target_group_arn}"
# Without authentication, both HTTP and HTTPS endpoints are supported
unauthenticated_listener_arns = ["${module.alb.listener_arns}"]
unauthenticated_listener_arns_count = 1
# All paths are unauthenticated
unauthenticated_paths = ["/*"]
unauthenticated_priority = "100"
}
locals {
target_group_enabled = "${var.target_group_arn == "" ? "true" : "false"}"
target_group_arn = "${local.target_group_enabled == "true" ? aws_lb_target_group.default.arn : var.target_group_arn}"
}
when this evaluation happens, for some reason this:
data "aws_lb_target_group" "default" {
  arn = "${local.target_group_arn}"
}
can’t find the ALB
and I’m 100% sure the value is correct
I mean the arn
but that data resource is the one that fails, and it is not anywhere else in the code
Did you provision the ALB?
module "alb" {
source = "git::<https://github.com/cloudposse/terraform-aws-alb.git?ref=tags/0.5.0>"
name = "${var.name}"
namespace = "${var.namespace}"
stage = "${var.stage}"
attributes = ["${compact(concat(var.attributes, list("alb")))}"]
vpc_id = "${var.vpc_id}"
ip_address_type = "ipv4"
subnet_ids = "${var.subnet_ids}"
security_group_ids = [""]
access_logs_region = "${var.region}"
http_enabled = "true"
https_enabled = "false"
http_ingress_cidr_blocks = ["0.0.0.0/0"]
https_ingress_cidr_blocks = ["0.0.0.0/0"]
certificate_arn = "${var.certificate_arn}"
health_check_interval = "60"
health_check_path = "${var.health_check_path}"
}
that is right before
yes first
Look at the root modules ECS folder
Did you try to run terraform apply second time?
yes
Sometimes there race conditions
I used target module.alb
then I run the rest
Where some resources are not created yet
I understand
I run target module.alb twice
then run the rest
Try running the rest twice
I did too
I’m destroying everything again right now
the thing is : even if I set
target_group_arn = "${module.alb.default_target_group_arn}"
to the target group arn itself
it does not work
I ran this project https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecs about 35 times, but never saw it could not find the target group
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
I saw in many cases that the ALB was not ready yet (it’s slow), so it could not attach the target group
ALB created :
module.alb.aws_lb.default: Creation complete after 2m16s (ID: arn:aws:elasticloadbalancing:us-east-1:...ging-demo-droneio-alb/0ff8d366761cb319)
module.alb.aws_lb_listener.http: Creating...
arn: "" => "<computed>"
default_action.#: "" => "1"
default_action.0.order: "" => "<computed>"
default_action.0.target_group_arn: "" => "arn:aws:elasticloadbalancing:us-east-1:234234234234:targetgroup/staging-demo-droneio-alb-default/a11e7a7298308db8"
default_action.0.type: "" => "forward"
load_balancer_arn: "" => "arn:aws:elasticloadbalancing:us-east-1:234234234234:loadbalancer/app/staging-demo-droneio-alb/0ff8d366761cb319"
port: "" => "80"
protocol: "" => "HTTP"
ssl_policy: "" => "<computed>"
module.alb.aws_lb_listener.http: Creation complete after 1s (ID: arn:aws:elasticloadbalancing:us-east-1:...-alb/0ff8d366761cb319/fbc88868db406827)
target group is there
and same error :
aws-vault exec hds-admin -- terraform apply -target module.alb_ingress 34s Fri 9 Aug 11:20:21 2019
null_resource.default: Refreshing state... (ID: 938128705396491941)
aws_lb_target_group.default: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...o-droneio-alb-default/c5d32c308b13dee7)
aws_security_group.default: Refreshing state... (ID: sg-0dca458073974ca29)
data.aws_elb_service_account.default: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
aws_s3_bucket.default: Refreshing state... (ID: staging-demo-droneio-alb-alb-access-logs)
aws_lb.default: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...ging-demo-droneio-alb/61f67535cafe2201)
aws_lb_listener.http: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...-alb/61f67535cafe2201/8e7ac22fe675d697)
Error: Error running plan: 1 error occurred:
* module.alb_ingress.local.target_group_arn: local.target_group_arn: Resource 'aws_lb_target_group.default' not found for variable 'aws_lb_target_group.default.arn'
that is after running 3 times the alb creation
after running the whole thing second time, the ALB is already ready, and it finishes provisioning the rest
yes, ALBs are really slow
I'm running this thing again just to make sure I didn't do anything stupid
just tried from zero and ran the alb target 3 times, waited a few minutes and ran module.alb_ingress, and I get the same error
null_resource.default: Refreshing state... (ID: 938128705396491941)
aws_security_group.default: Refreshing state... (ID: sg-0dca458073974ca29)
aws_lb_target_group.default: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...o-droneio-alb-default/c5d32c308b13dee7)
data.aws_elb_service_account.default: Refreshing state...
data.aws_iam_policy_document.default: Refreshing state...
aws_s3_bucket.default: Refreshing state... (ID: staging-demo-droneio-alb-alb-access-logs)
aws_lb.default: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...ging-demo-droneio-alb/61f67535cafe2201)
aws_lb_listener.http: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...-alb/61f67535cafe2201/8e7ac22fe675d697)
Error: Error running plan: 1 error occurred:
* module.alb_ingress.local.target_group_arn: local.target_group_arn: Resource 'aws_lb_target_group.default' not found for variable 'aws_lb_target_group.default.arn'
so I can consistently reproduce the issue
I can run this :
terraform apply -target data.aws_lb_target_group.default
aws_lb_target_group.default: Refreshing state... (ID: arn:aws:elasticloadbalancing:us-east-1:...o-droneio-alb-default/c5d32c308b13dee7)
the same code that is failing in the module does not fail outside of it
so, something is happening here:
locals {
  target_group_enabled = "${var.target_group_arn == "" ? "true" : "false"}"
  target_group_arn     = "${local.target_group_enabled == "true" ? aws_lb_target_group.default.arn : var.target_group_arn}"
}
what TF version are you using?
Terraform v0.11.14
+ provider.aws v2.23.0
+ provider.local v1.3.0
+ provider.null v2.1.2
+ provider.random v2.2.0
+ provider.template v2.1.2
+ provider.tls v2.0.1
i think you are running into some race conditions
where this https://github.com/cloudposse/terraform-aws-alb-ingress/blob/master/main.tf#L21 has not been created yet
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
but this https://github.com/cloudposse/terraform-aws-alb-ingress/blob/master/main.tf#L6 is already being used in the outputs
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
but
resource "aws_lb_target_group" "default" {
count = "${local.target_group_enabled == "true" ? 1 : 0}"
will only be created if target_group_arn is = “”
base on the locals evaluation
so it should not even be trying to create this resource ?
so yes, you either provide one, or the module will create it
in my case I’m providing one
so you think somehow the evaluation is failing ?
even if I pass the raw arn value, it still fails
ok yes, there is a bug in that flow. we always created target group in the module (did not test when you provide one)
the bug is…
this https://github.com/cloudposse/terraform-aws-alb-ingress/blob/master/main.tf#L21 uses count
so any output from it is a list
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
target_group_arn = "${local.target_group_enabled == "true" ? join("", aws_lb_target_group.default.*.arn) : var.target_group_arn}"
you need to explain this to me a bit more
are you saying that the data resource : aws_lb_target_group.default output is a list ?
any output from it is a list
since it has count
even with count=1, it’s a list with one item
ahhhh yes….exactly
but I thought L3 was referencing :
data "aws_lb_target_group" "default" {
and not
resource "aws_lb_target_group" "default" {
this is where I’m confused
is there an order of preference?
your error was referencing resource "aws_lb_target_group" "default"
which itself is used in the locals
which itself is used in data "aws_lb_target_group" "default"
but local.target_group_enabled is false since I’m passing the target group arn
so it should have taken the value of var.target_group_arn
TF always evaluates both sides of the ternary operator
maybe this is some basic terraform I’m missing
thanks a lot @jose.amengual
this is my first PR where I do not understand the fix lol
haha
I'm reading the docs again, that evaluation thing is screwing with my head
TF parses both sides of the ? operator whether the condition is true or false, it does not matter
I don’t get the reasoning behind evaluating when the if-statement is already false
since the target group resource is disabled by providing your own target group, it does not have any outputs
when TF tries to get an output on a non-existing resource, it fails
join("", xxx.*.yyy) works because even with a non-existing resource, it returns an empty string
I don’t get the reasoning behind evaluating when the if-statement is already false
blame TF
that’s how they did it 20 years ago in the mainstream languages
ok so since the resource has count the output instead of being a string is a list and since we are evaluating strings it fails
something like that ?
when you start a new parser/compiler, you have to go through all of that again, and it’s not easy
I always blame TF lol
ohhh wait, the join is basically acting like a try/catch, so even if the output is empty it returns a sane value
it returns an empty string even if the list is empty or NULL
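to make the fix concrete, side by side (same locals as above):
locals {
  # fails when the resource has count = 0, since there is no instance whose .arn can be read:
  # target_group_arn = "${local.target_group_enabled == "true" ? aws_lb_target_group.default.arn : var.target_group_arn}"

  # safe: the splat yields a (possibly empty) list, and join("", ...) collapses it to "":
  target_group_arn = "${local.target_group_enabled == "true" ? join("", aws_lb_target_group.default.*.arn) : var.target_group_arn}"
}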
please tell me this is better in 0.12
please…..
you are lucky
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The post this week is on the improvements to conditional operator…
by the way, everything is working perfectly now
want to open a PR?
for sure
thanks for finding it
this thing has been killing me, I thought I was doing something wrong
I know I have to run a few commands before I create the PR
are those documented somewhere ?
you mean to rebuild README?
make init
make readme/deps
make readme
you need that if you change any variable or outputs or README.yaml (not in this case where you just fix the code)
but run it anyway
I see ok
Has anyone here used terraform as the CD portion of the CI/CD pipeline? Currently, I'm deploying docker images to ECR with Gitlab, but I'm running into an issue where I need to somehow tell terraform to update its workspaces once a new image comes out. Was curious if anyone else had run into this / figured this out
@Mike Nock you mean that terraform is somehow constantly running and waiting for commands? Or it’s a manual process?
if manual, you can use https://www.terraform.io/docs/providers/aws/d/ecr_image.html to retrieve information about images
Provides details about an ECR Image
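a minimal sketch of that data source (repository name and tag are placeholders):
data "aws_ecr_image" "service" {
  repository_name = "my-service" # placeholder repository
  image_tag       = "production"
}

output "image_digest" {
  value = "${data.aws_ecr_image.service.image_digest}"
}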
Yea, currently the pipeline is Gitlab > Docker > ECR for CI, and then manually deploying the images by going into TFE and updating the workspace so terraform sees the new image (the CI pipeline retags the image with :production and removes that tag from the old image when deploying to ECR, and terraform is set to only use the image with that tag). I'd like to automate that. So, when the developers push a new feature to master, it builds, deposits the image, and then either sends an API call to terraform to update (prefer not doing it this way), or preferably some way of having terraform monitor the tags, and deploy once the tag is removed?
Also, for backstory, the reason I prefer not doing the api calls to update, is we are doing self-service terraform where client environments are being created regularly and dynamically, so I wouldn’t have a list of all the workspaces to send individual API calls to, and you can’t send 1 api call to update all workspaces, same as you can’t send 1 api call with a map of variables for the workspace but have to list out each one by one in separate calls.
that’s interesting (but we at CloudPosse did not use TFE)
Understandable, just figured it was worth asking if anyone else had run into it. Thanks!
1 error occurred:
* module.s3cdn-dev.aws_route53_record.cert_validation: At column 19, line 1: list "local.dvo" does not have any elements so cannot determine type. in:
${lookup(local.dvo[count.index], "resource_record_value")}
I was looking at : https://github.com/cloudposse/terraform-aws-rds-cluster/blob/0.11/master/examples/basic/main.tf#L6 what is all that witchcraft ?
I’m guessing that is only available in providers and should not be used with other terraform code except rds ?
those are attributes on the provider which you can use to disable some checks if you want to run it faster
has nothing to do with RDS
used in some tests
but not necessary at all
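for reference, a sketch of those provider attributes (all optional; note some, like skip_requesting_account_id, disable features that need the account ID):
provider "aws" {
  region = "${var.region}"

  # skip some provider startup checks for faster runs
  skip_get_ec2_platforms      = true
  skip_metadata_api_check     = true
  skip_region_validation      = true
  skip_credentials_validation = true
  skip_requesting_account_id  = true
}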
RDS takes so long that I was thinking of adding them to my big TF, but I was not sure if they could break something
2019-08-10
Do you have a module which covers an ACM Certificate for CloudFront if the Route 53 HostedZone is not in us-east-1?
it’s a provider thing https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm-cloudfront/main.tf#L12
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
Thanks @Andriy Knysh (Cloud Posse). Of course, it's a provider thing. I may be missing something, but I don't see how this reference addresses my use case
If I see correctly, it presumes that the Hosted Zone is in the same region
or i take that back
sorry
Hosted Zones are global
Yes hosted zones are global
You create a certificate in us-east-1 using a different provider
Then reference the cert ARN when you create a distribution
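a minimal sketch of that pattern (0.11 syntax; the domain is a placeholder):
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "cloudfront" {
  provider          = "aws.us_east_1"
  domain_name       = "www.example.com" # placeholder
  validation_method = "DNS"
}

# then on the CloudFront distribution:
# acm_certificate_arn = "${aws_acm_certificate.cloudfront.arn}"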
@Andriy Knysh (Cloud Posse) Thanks, I did that, but i messed up something else which caused me a problem
provisioning atm, everything looks good so far
regarding the HCL deprecation on github action
it's very likely due to the recent news that github actions will have built-in CI/CD
and given github is owned by microsoft
it's very likely the backend is azuredevops
and until next tuesday, automation for azuredevops pipelines is in yaml
@Andriy Knysh (Cloud Posse) after much time, the upcoming tuesday is the decided day to release the azuredevops terraform provider
2019-08-11
Are there any known bugs with the ec2 module, release 0.11?
deployed an ec2 instance using the ec2 module, and had associate_public_ip_address set to true. Changed it from true to false, which then prompted a redeploy; now it fails with this error:
value = coalesce(
aws_eip.default is empty tuple
aws_instance.default is tuple with 1 element
Call to function "coalesce" failed: no non-null, non-empty-string arguments.
I can't destroy or apply any updates to my entire deployment
looks like there is a pull request for this specific issue: https://github.com/cloudposse/terraform-aws-ec2-instance/pull/45
Terraform changed handling of coalesce function to error out when there are no non-null non-empty elements on the list. This results in an error while configuring an instance with no EIP assigned t…
does anyone know if there is a temp workaround for this?
@LeoGmad you can fork the branch of the PR and use that one ?
I will try, thank you @maarten
I was able to successfully fork the PR but the issue still persists. I would be interested if anyone has been successful with this PR or in finding a workaround
the only way to resolve it is to set all instances to "associate_public_ip_address = true"
which I guess isn't a big deal as long as they're deployed behind a NAT or limited ACL
@Leonard Wood ok let me run the example with associate_public_ip_address set to false, and see what can be done.
@Leonard Wood see if it works with the new pr
I tried with the new PR but no luck. I am deploying 2 ec2 instances, one set to true and one to set false - and thats when the issue occurs. All ec2 instance deployments have to be set to ‘true’ for it to deploy.
@Leonard Wood make sure to clean your cache. I’m running ‘examples/complete’ with instance_enabled = false
and that works
Interesting - why the instance_enabled = false flag though?
I did delete the .terraform directory and re init so that should have cleared the cache
thanks again @maarten for looking into this too
sure, ah I've tried different options, including setting instance_enabled to false, which was the problem with the original PR.
but if that flag is set to false, will the instance deploy?
"Flag to control the instance creation. Set to false if it is necessary to skip instance creation"
2019-08-12
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Aug 21, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Hello! I want to share with you the resource for_each and dynamic for_each constructs for terraform 0.12.6. Hope this helps you work with large arrays of resources. https://github.com/devops-best-practices/terraform-best-practice/blob/master/s3.tf
Contribute to devops-best-practices/terraform-best-practice development by creating an account on GitHub.
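for example, resource-level for_each (new in 0.12.6) creates one instance per element; bucket names here are hypothetical:
resource "aws_s3_bucket" "this" {
  for_each = toset(["logs", "assets"])

  bucket = "example-${each.key}"
  acl    = "private"
}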
Hey people, looking for terraform template on vpc peering ( syntax 0.12) any help plz
@Sharanya’s question was answered by <@Foqal>
Hola fellas….
Quick question here… We have a Terraform RDS module (typical base build format) for the build deployments to use to set up RDS instances in our AWS setups. Now I am trying to enable alerting (SNS topics with CloudWatch) within the existing RDS module, but I'm not sure how to enable the alerting within an existing RDS module. I found the cloud posse GitHub repo (https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms) which will give the ability to create the sns topics (please do correct me if I am wrong here), but what I need is to enable the alerting within the RDS module so that the users will create the required sns topics based on their needs. Anyone worked on this kind of typical setup before? Any input helps me with my cause here
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
or even anyone been through this kind of requirement before?
you can use the alarms from https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms/blob/master/alarms.tf (update them and add new ones if needed)
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
then you can create an SNS topic in diff module (or manually, or however you need it), and subscribe the RDS instance to the topic https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms/blob/master/main.tf#L20
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
(the module currently always creates an SNS topic)
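a rough sketch of doing it outside the module (names and the var are hypothetical; aws_db_event_subscription is what ties the RDS instance to the topic):
resource "aws_sns_topic" "rds_alarms" {
  name = "rds-alarms" # hypothetical
}

resource "aws_db_event_subscription" "default" {
  name        = "rds-event-sub" # hypothetical
  sns_topic   = "${aws_sns_topic.rds_alarms.arn}"
  source_type = "db-instance"
  source_ids  = ["${var.db_instance_id}"] # hypothetical var
}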
and since we are talking about SNS alarms :
I'm using "git::https://github.com/cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms.git?ref=tags/0.6.1" and I notice when using the newer alb module that target_group_name = "${module.alb.target_group_name}" and target_group_arn_suffix = "${module.alb.target_group_arn_suffix}" are not valid outputs anymore, so it can't be used with this cloudwatch-sns module. are you guys deprecating the use of the cloudwatch-sns-alarms or recommend something else?
@Andriy Knysh (Cloud Posse) I think that maybe this one has not been updated to reflect changes in the alb module
@jose.amengual https://github.com/cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms does not have outputs at all for some reason
Terraform module to create CloudWatch Alarms on ALB Target level metrics. - cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms
what alb module are you using that uses terraform-aws-alb-target-group-cloudwatch-sns-alarms?
I took that from ECS web app example
it uses alb_ingress, not terraform-aws-alb-target-group-cloudwatch-sns-alarms, at that line
this uses alarms, but does not use any outputs from it https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L170
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
HAHAHAHAH well so you know I’m dyslexic
for example
Thanks again
maybe I’m too old for this
haha, it happens feel free to ask any questions
Sure @Andriy Knysh (Cloud Posse)… Let me give it a try and will update here in the group with the progress..
Interesting!
@Erik Osterman (Cloud Posse) just implemented this.. works really well
can you zoom?
I want to see what you did
The part I’m still miffed about is that I can’t use it to terraform init -from-module
Basically, I want to be dry across repositories
I don’t want to be dry just in a single repository
I want to define my root modules once
I want to use them all over the place.
ya, so ugh, i see now that I look closer.
`tfworkspacesettings = yamldecode(local.tfsettingsfilecontent)` is the operative line. this is nice. i get what they are doing. we'll probably use something like this.
but still this really assumes a monorepo infrastructure strategy and depends on workspaces.
it’s nice though. pretty elegant. very easy to understand.
sorry missed these messages… looks like you got it though… yamldecode from 0.12.x made this a possibility. i can zoom tomorrow if you still need it
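for reference, the yamldecode pattern being discussed, as a minimal sketch (the file layout and local names are assumptions):
locals {
  tfsettingsfile        = "settings/${terraform.workspace}.yaml" # hypothetical per-workspace file
  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "{}"
  tfworkspacesettings   = yamldecode(local.tfsettingsfilecontent)
}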
Also @Andriy Knysh (Cloud Posse), just a quick fyi.. we don't use the IAM policy in the main.tf for the SNS topics creation as we use the user access at higher levels in our build deployments. Is there another way to comment that part out of the module?
if you don't use aws_sns_topic, then you don't need the policy as well
"comment out of the module": you can fork it and comment out aws_sns_topic and the policy, or you can open a PR and add a new var sns_topic_enabled (set to true by default for backwards compatibility). Then use count = "${var.sns_topic_enabled == "true" ? 1 : 0}" for aws_sns_topic, aws_db_event_subscription and aws_sns_topic_policy
Without using the aws_sns_topic, how can I subscribe the RDS instance to the topic https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms/blob/master/main.tf#L20 (from your above statement dude)?
Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms
you need an SNS topic to subscribe the instance to a topic
ok ok
let me comment out the section from the module and will try it out
2019-08-13
Hi Everyone, I'm having the issue that when I attach a private EIP to an instance in a private subnet, associate_public_ip_address gets set to true. This is with a subnet whose public ip mapping is set to false. Maybe someone else stumbled upon the same issue?
@maarten what module is this?
“terraform-aws-modules/ec2-instance/aws” but it’s irrelevant, it’s not a module problem
No but I just want to have a look
and what is the actual problem? You don't want associate_public_ip_address: true?
# Grafana
module "grafana" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 2.0"

  name           = "grafana-xlt"
  instance_count = 1

  ami                    = var.grafana_ami
  instance_type          = "m4.xlarge"
  key_name               = ""
  monitoring             = true
  vpc_security_group_ids = [module.ec2_sg.this_security_group_id]
  subnet_id              = module.vpc.private_subnets[0]

  #private_ip = "10.0.1.200"
  associate_public_ip_address = false

  user_data = "{\"auth\": [ {\"name\": \"admin\", \"pass\": \"${var.password}\"}]}"

  tags = {
    Name        = "grafana"
    Terraform   = "true"
    Environment = "dev"
  }
}

#resource "aws_eip" "grafana" {
#  vpc                       = true
#  associate_with_private_ip = "10.0.1.200"
#}
#
#resource "aws_eip_association" "grafana" {
#  instance_id   = module.grafana.id[0]
#  allocation_id = aws_eip.grafana.id
#}
OK, makes more sense seeing that. That said, I haven't come across this
Does the instance then get a routable public IP that you don’t want it to have?
yep
Can you deny outside world access via SG rule?
It's human error I think. I thought the EIP would be private, but it's actually a public EIP associated with a private address
ah i don’t think you can have a private EIP - you’d need ENI for that … assuming I understood correctly what you want (a private static ip?)
Don’t think that actually works does it?
EIPs are always public
Hello there, do you have any fargate terraform module to analyze and implement in a production environment?
@Hugo Lesta take a look at https://github.com/cloudposse/terraform-aws-ecs-web-app
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
it uses Fargate
used by https://github.com/cloudposse/terraform-aws-ecs-atlantis (which deploys atlantis on ECS Fargate)
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
the complete atlantis solution is here https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecs (uses the two modules above and more)
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
@Andriy Knysh (Cloud Posse) thankss
2019-08-14
Hi, is there any possibility to set up azurerm app service deployment from bitbucket automatically? The scm_type block doesn't work, as described in this issue: https://github.com/terraform-providers/terraform-provider-azurerm/issues/3696
Do you see any alternatives?
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Hey guys, anyone know how to get terraform-aws-elastic-beanstalk-environment to attach security groups directly to the launch config this module creates? Currently, whatever SGs you list under var.security_groups, they all get added as security group rules of a new SG that this module creates, instead of actually associating the SGs themselves directly to the Launch Config… https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L318
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L489 I guess the best way is to fork this module and customize it here?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
in all Cloud Posse modules we usually create a new SG and then add existing SGs and CIDR blocks to it to allow ingress
But how would you allow ingress from external IPs, etc? as nested security groups DO NOT work like that….
we use external SGs and external CIDRs
not sure if https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment uses both (we used the module more than a year ago)
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L486 This setting should be refactored to support a list of parameterized strings
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Hi, I have a general Terraform question. Is anybody using it for deployment of and to HyperV? I found a community provider, however I'm a little bit hesitant about using it. Which other gitops-based tools would be available for HyperV deployments?
@github140 saw @chrism talking about hyperv in #geodesic last month
Nah it was just that Docker for Windows creates its VM in hyper-v on windows. We use vsphere/aws/azure
Our public weekly #office-hours is starting now! Join https://zoom.us/j/508587304
Hi, I’m trying to create a aws_vpc_peering_connection between 2 accounts, cross region using assume-role and specifying a aws provider for the aws_vpc_peering_connection_accepter with corresponding region. This fails because the requester tries to look for the accepter vpc in the same region and fails to find the vpc. I’ve used this code to successfully deploy cross account, but on the same region. Using TF version 0.11.13
This is what the module I’m using looks like:
data "aws_vpc" "accepter" {
provider = "aws.accepter"
id = "${var.accepter_vpc_id}"
}
locals {
accepter_account_id = "${element(split(":", data.aws_vpc.accepter.arn), 4)}"
}
resource "aws_vpc_peering_connection" "requester" {
vpc_id = "${var.requester_vpc_id}"
peer_vpc_id = "${data.aws_vpc.accepter.id}"
peer_owner_id = "${local.accepter_account_id}"
tags {
Name = "peer_to_${var.accepter_tag}"
}
}
resource "aws_vpc_peering_connection_accepter" "accepter" {
provider = "aws.accepter"
vpc_peering_connection_id = "${aws_vpc_peering_connection.requester.id}"
auto_accept = true
tags {
Name = "peer_to_${var.requester_tag}"
}
}
#######################
# ROUTE TABLE UPDATES #
#######################
data "aws_vpc" "requester" {
id = "${var.requester_vpc_id}"
}
data "aws_route_tables" "requester" {
vpc_id = "${var.requester_vpc_id}"
}
data "aws_route_tables" "accepter" {
provider = "aws.accepter"
vpc_id = "${data.aws_vpc.accepter.id}"
}
resource "aws_route" "requester" {
count = "${length(data.aws_route_tables.requester.ids)}"
route_table_id = "${data.aws_route_tables.requester.ids[count.index]}"
destination_cidr_block = "${data.aws_vpc.accepter.cidr_block}"
vpc_peering_connection_id = "${aws_vpc_peering_connection.requester.id}"
}
resource "aws_route" "accepter" {
provider = "aws.accepter"
count = "${length(data.aws_route_tables.accepter.ids)}"
route_table_id = "${data.aws_route_tables.accepter.ids[count.index]}"
destination_cidr_block = "${data.aws_vpc.requester.cidr_block}"
vpc_peering_connection_id = "${aws_vpc_peering_connection.requester.id}"
}
and this is how I defined the provider:
provider "aws" {
max_retries = "5"
profile = "${var.aws_profile_name}"
region = "${var.accepter_region}"
skip_get_ec2_platforms = true
skip_region_validation = true
alias = "accepter"
assume_role {
role_arn = "${var.accepter_role_arn}"
}
}
So this works perfectly if both VPCs are in the same region, but when one of the VPCs is in another region, the peering connection regions (both accepter and requester) show up in the AWS console as the same as the requester's, thus failing to find the VPC.
Is there a way to specify the accepter’s region?
Hi @Alejandro Rivera I haven’t done this specifically with peering connections, but I have done this with a TGW
I had to set up a Resource Share that contained my VPC ID and shared it with the requester VPC
alternatively, you can share the resource within an OU in your AWS organisation
Manages a Resource Access Manager (RAM) Resource Share.
On the peering connection resource, set the argument peer_region?
https://www.terraform.io/docs/providers/aws/r/vpc_peering.html#peer_region
Provides a resource to manage a VPC peering connection.
@loren Since the peering connection is the one from the account I'm creating this in, it does get the region correctly set; the problem comes with the accepting peering connection, which doesn't take in peer_region but takes in a provider which has the correct region set, yet won't recognize it.
@Callum Robertson Thanks!, will try that out and let you know if that helps in this case also.
What you describe is exactly what peer_region exists for
In the requesting account, it creates the peer request, the request must set the region in which the vpc peering connection will be accepted. you then accept it exactly as you are
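so on the module above, the requester side would look roughly like this; peer_region is added, and auto_accept stays on the separate accepter resource, since a cross-region request can't be auto-accepted by the requester:
resource "aws_vpc_peering_connection" "requester" {
  vpc_id        = "${var.requester_vpc_id}"
  peer_vpc_id   = "${data.aws_vpc.accepter.id}"
  peer_owner_id = "${local.accepter_account_id}"
  peer_region   = "${var.accepter_region}" # region where the peering will be accepted

  tags {
    Name = "peer_to_${var.accepter_tag}"
  }
}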
@loren omg, trying that out right now and I’ll come back with results
@Callum Robertson Didn’t get to try your approach since @loren’s solution worked out, I had misunderstood that value and thought it referred to the requester vpc, thank you both again for the help!
@Sharanya You can see the code I’m using at the top, hope it helps
I’ve been tinkering with this module today. https://github.com/cloudposse/terraform-aws-ec2-instance-group Is there a way to get instances spread across multiple AZ’s?
Terraform Module for provisioning multiple general purpose EC2 hosts for stateful applications. - cloudposse/terraform-aws-ec2-instance-group
One strategy is to provision the module once per AZ
that gives you the most guarantee of even distribution
incidentally, this is the strategy that kops
takes when provisioning ASGs for master nodes.
Thanks for the response. I like the kops
approach as you do get guaranteed provisioning across AZ’s. It just generates a lot more code but it’s a fair tradeoff.
yea, it’s a trade off but as you say, probably fair and easy to understand what’s going on.
I think it would be nice to have both options. I’ve forked the repo and time permitting i’ll try and add that feature. I do agree though its easier to read. I find this module far less magical
than some of the others I have looked at. Thanks so much for open sourcing and sharing all of these modules.
@Patrick Beam you could provide a list of availability zones for that region e.g. ["a", "b", "c"] and then in your availability_zone input use this:
I think that will work. The problem i’m seeing is with subnet
which is required. I pass the list of subnets created into this module like this.
variable "public_subnet_ids" {}
subnet = var.public_subnet_ids[0]
without that index position terraform throws an error. When I try and create a new variable subnets
with the following.
variable "subnets" {
description = "A list of VPC Subnet IDs to launch in"
type = list(string)
default = []
}
#In the instance resource I changed this.
subnet_id = element(distinct(compact(concat([var.subnet], var.subnets))), count.index)
#inside the module I set subnet to subnets
subnets = "${var.public_subnet_ids}"
When I plan I get the following error.
Error: Missing required argument
on instances/instance.tf line 18, in module "versio":
18: module "versio" {
The argument "subnet" is required, but no definition was found.
I’m curious how that argument subnet
is required. I don’t understand how that is defined in the module and can’t seem to track that down in the repo.
availability_zone = "${element(var.availability_zones, count.index)}"
You would just have to change that variable to a type = list(string)
hope that helps
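Putting those two suggestions together, a rough sketch (variable names assumed from the snippets above; note that when you pass subnet_id, the subnet already pins the AZ, so spreading the subnets round-robin is usually enough):
variable "availability_zones" {
  description = "AZs to spread instances across"
  type        = list(string)
  default     = []
}

# in the instance resource, round-robin across the lists
availability_zone = element(var.availability_zones, count.index)
subnet_id         = element(var.public_subnet_ids, count.index)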
2019-08-15
Anyone with experience in
resource "aws_ssm_document
and after destroy the document does not seem to be deleted
Having this error on Tf Plan - Error: Missing resource instance key
on .terraform\modules\vpc_peering_cross_account\accepter.tf line 96, in locals: 96: accepter_aws_route_table_ids = "${distinct(sort(data.aws_route_tables.accepter.ids))}"
@Sharanya I just shared a solution I’m using for vpc peering connection x accounts and x regions that might help a couple of posts up, I’ll tag you
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
How do I set order precedence in terraform? I have main.tf where it destroys an IAM role and deploys it again on terraform apply. I am hitting an error ` aws_iam_role.service_role: Error creating IAM Role DEFAULT-TestingService-ecs-service-role: EntityAlreadyExists: Role with name DEFAULT-TestingService-ecs-service-role already exists`
This is after destroying the role, pretty sure that AWS needs some time to update the cache.
I want to first destroy the services and then create it. Is this possible?
hmmm…. terraform is used to create resource definitions - how to create them
terraform knows from its state file what was created
how are you doing destroy
from TF files?
Hi, do you guys have a preference on using the aws KMS managed key or creating a CMK ?
the reason I ask is that after removing kms_key_id from:
module "ssm_tls_ssh_key_pair" {
source = "git::<https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair.git?ref=0.2.0>"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = "${var.attributes}"
ssm_path_prefix = "${var.stage}/${var.name}/infrastructure/ssh_keys"
ssh_key_algorithm = "RSA"
ssh_private_key_name = "${module.default_label.id}-private"
ssh_public_key_name = "${module.default_label.id}-public"
#kms_key_id = "${module.kms_key.key_id}"
}
I got
Error refreshing state: 1 error occurred:
* module.ssm_tls_ssh_key_pair.data.aws_kms_key.kms_key: 1 error occurred:
* module.ssm_tls_ssh_key_pair.data.aws_kms_key.kms_key: data.aws_kms_key.kms_key: error while describing key [alias/test-demo-chamber]: NotFoundException: Alias arn:aws:kms:us-east-1:046894643055:alias/test-demo-chamber is not found.
status code: 400, request id: 7e387954-2256-4ef2-a40e-b48269259e9c
Hey all, I updated the release of terraform-aws-dynamic-subnets that I’m pulling down and now I’m getting this error:
Error downloading modules: Error loading modules: module dynamic_subnets: Error parsing .terraform/modules/e972fa1c1c4c2e3a44d52f7491016697/label.tf: At 3:25: Unknown token: 3:25 IDENT var.attributes
Any idea what’s going on?
for TF 0.12, use release https://github.com/cloudposse/terraform-aws-dynamic-subnets/releases/tag/0.13.0 and newer
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
oh ok, I’m on TF 0.11.13 and trying to use release 0.16.0 of the module. That must be why then.
I’ll try version 0.12.0
thanks!
I’m looking to move terraform code for a customer from regular terraform to terraform enterprise. I googled but couldn’t find if there are any quirks or things I should know about TFE. My goal is to create a terraform module that can be used for both the open source version as well as TFE.
I have an existing aws ecs cluster (made by hand) and would like to update it by using terraform. Is there any documentation for updating existing aws services (alb, ecr, ecs)?
2019-08-16
Hi, I am using terraform workspace to create azure windows server’s in more than one environment using one terraform code. For passing hostname and IP address am using the following lookup’s in local
locals {
location = "${terraform.workspace}"
image_id = "${module.image.image_id}"
environment = "${module.locals.environment}"
subnets = {
eastus2_prod = "${data.terraform_remote_state.shared_networking.eastus2_api_tier_subnet.id}"
centralus_prod = "${data.terraform_remote_state.shared_networking.centralus_api_tier_subnet.id}"
}
lb_ips = {
eastus2_prod = "10.244.160.164"
centralus_prod = "10.245.160.164"
}
system = {
eastus2_prod = [
{
hostname = "wqilpeap101"
ip = "10.244.160.165"
},
{
hostname = "wqilpeap102"
ip = "10.244.160.166"
},
]
centralus_prod = [
{
hostname = "wqilpcap101"
ip = "10.245.160.165"
},
{
hostname = "wqilpcap102"
ip = "10.245.160.166"
},
]
}
subnet_id = "${lookup(local.subnets, format("%s_%s", local.location, var.environment))}"
lb_ip = "${lookup(local.lb_ips, format("%s_%s", local.location, var.environment))}"
systems = "${lookup(local.system, format("%s_%s", local.location, var.environment))}"
}
it fails with following error message
Error: Error asking for user input: 1 error occurred: * local.systems: local.systems: lookup: lookup() may only be used with flat maps, this map contains elements of type list in:
${lookup(local.system, format("%s_%s", local.location, var.environment))}
can you help me fix this
@praveen you can try: subnet_id = "${local.subnets[format("%s_%s", local.location, var.environment)]}"
subnet is working fine
issue is with systems
should I try it for systems?
trying now
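For reference, the same bracket-index trick should work for systems too, since TF 0.11’s lookup() only supports flat maps but [...] indexing handles maps whose values are lists:
systems = "${local.system[format("%s_%s", local.location, var.environment)]}"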
So, I’m trying to be a bit clever. I have a need to conditionally add statements to an IAM policy document
I did try to do a join on the data.aws_iam_policy_document.<stuff>.json
property of multiple data sources, and mixed data sources and already rendered json documents coming in as variables in strings.
I think the root problem with this approach is that both things will render a FULL json document, so it will confuse stuff, such as:
{
my policy doc
}
{
my next policy doc
}
Is there a way to conditionally add statement blocks to a data.aws_iam_policy_document
?
The root cause here is that the cloudposse/terraform-aws-s3-module
has some built in document handling to set an “encrypted-only” policy, so if I need to do something like add a separate cross-account access principal policy, I can’t, because an s3 bucket can only have one bucket policy attached.
I’ve forked it and attempted my above described fix here: https://github.com/asiegman/terraform-aws-s3-bucket/blob/moar-bucket-policy-0.11/main.tf#L94
But alas, that didn’t work due to the multiple json documents being joined to form invalid json.
I can always not use cloudposse’s module and just build my own resources, but if I could add a clever bit to add arbitrary statements in for stuff like this, I was going to deliver it back to the community
@Alex Siegman in 0.12 you can do this with “dynamic” loops. What you could do with 0.11 is using source_json
with another aws_iam_policy_document
. I don’t really like it so much but it’s a funny hack: https://github.com/doingcloudright/terraform-aws-ecr-cross-account/blob/ab55861e4de158d3bf490976c16a2bebb6661c28/main.tf#L43
Terraform module to create an ECR repo with cross-account-access - doingcloudright/terraform-aws-ecr-cross-account
Oh interesting. What happens if source_json is just a blank string then? I’ll play with it, great lead. Thanks!
That won’t work, but you can have one policy statement which would be valid for all your policies, and use that one to start ‘sourcing’ from.
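A rough TF 0.11 sketch of that source_json chaining (statement contents and var names here are made up for illustration):
data "aws_iam_policy_document" "base" {
  statement {
    sid       = "DenyUnEncryptedUploads"
    effect    = "Deny"
    actions   = ["s3:PutObject"]
    resources = ["${var.bucket_arn}/*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    condition {
      test     = "StringNotEquals"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["AES256", "aws:kms"]
    }
  }
}

data "aws_iam_policy_document" "combined" {
  # start from the base document; statements with new sids are appended
  source_json = "${data.aws_iam_policy_document.base.json}"

  statement {
    sid       = "CrossAccountRead"
    actions   = ["s3:GetObject"]
    resources = ["${var.bucket_arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${var.cross_account_principal_arn}"]
    }
  }
}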
also take a look at : https://github.com/cloudposse/terraform-aws-iam-policy-document-aggregator
Terraform module to aggregate multiple IAM policy documents into single policy document. - cloudposse/terraform-aws-iam-policy-document-aggregator
heck, i can probably just use that, i already have multiple valid documents, i just need to aggregate all their statements in to one
2019-08-17
Why did the release cadence change for this module? https://github.com/cloudposse/terraform-aws-vpc/releases
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
0.4.2 is a patch release against the last version of the module for terraform 0.11
0.7.0 is the latest release for 0.12
Thanks @Erik Osterman (Cloud Posse)
Hi, I do a bit of TF (0.12) + AWS. I have created a few modules: VPC, Subnets, IGW, etc. Each module exports some variables via outputs.tf; I use the S3 backend and I can access exported variables from different modules. All smooth and easy, but when I create an application load balancer (aws_lb) I can NOT make terraform output any variable. My output.tf (root module dir) looks like this:
output "alb_id" {
value = aws_lb.alb.id
}
output "alb-security-group_id" {
value = aws_security_group.alb-security-group.id
}
output "alb-target-group_arn" {
value = aws_lb_target_group.alb-target-group.arn
}
The ALB, security group and target group get created, I can see them in the console, but the output is empty and there are no errors during terraform apply. Why?
Hi, anybody using a local persistent volume for a kubernetes_stateful_set? I am trying to use affinity with node_selector_term but failing. Kindly guide me if anyone is using it this way.
2019-08-19
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Aug 28, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
anybody working with terraform integration with Jenkins cicd pipeline
we use atlantis
and Codefresh to deploy terraform, but here are a few articles on how to do it from Jenkins
This extensive article shows you how to create an immutable CI/CD infrastructure with Terraform and Jenkins that will make managing your infrastructure easier.
In theory, deploying a dockerised .NET Core app is easy (because Docker simplifies everything, right?). Just trigger your CI/CD pipeline on…
If you’ve followed my last few posts you have now used Terraform to create a Google Kubernetes Engine cluster, and you’ve deployed Jenkins…
Hello, I was curious to see if anyone has ever tried to create a cross account aws codepipeline with terraform. If not, It would be great to get some feedback on my current approach
Does anyone use software to detect state drift on a recurring basis?
I’d like to start performing infrastructure wide Terraform state drift identification, and work towards making sure I see that excellent “No changes.” messaging after issuing a terraform plan
across the board more frequently.
We just have our CI run terraform plan -detailed-exitcode
, and alert on job failures…
I was looking at the detailed-exitcode
option, seems good as I can get an explicit list of terraform projects that are having issues. I’d like to go a step further so I can reduce operator work around identification and fixing of the state drift manually.
Only saw a few project on GitHub, none of them maintained.
Check for drift between Terraform definitions and deployed state. - digirati-labs/drifter
This one seemed to be the most relevant for my use case.
Cheap and easy and works was my thinking, can always optimize later
Could save off the plan, and analyze it separately. Maybe whitelist some resources/diffs to run apply automatically
The Foqual bot was able to find some information regarding GitHub Actions to perform plans, could be a good starting point for me.
Yeah, very much MVP for this drift detector - iterate later
Thanks for the ideas, I’ll think about this some more and see how to approach this
Hi, should this module, https://github.com/cloudposse/terraform-aws-cloudwatch-logs, create a policy to be able to use a CMK when one gets passed?
2019-08-20
Hi All, I’m creating a bucket policy data resource (https://www.terraform.io/docs/providers/aws/r/s3_bucket_policy.html)
I’m running into an issue where I’m trying to reference the ‘json’ attribute of the data source in a policy account resource, can anyone help me with the below?
Attaches a policy to an S3 bucket resource.
I’m getting the error that it’s an empty tuple, not sure what I’m doing wrong here..
@Callum Robertson what if you do at line 16 the same as line 2, so the counts are in sync
or what is the idea there, that you only want to apply the policy when var.upload_bucket_objects is set to true, correct? What happens now is that you refer to a policy you are not creating, hence it fails.
encountered the following error when running terraform apply?
"policy" contains an invalid JSON: invalid character 'a' looking for beginning of value
You’d have to paste your policy here for further debugging. Seems like a syntax or formatting issue.
Also not sure what provider this is for?
AWS, Google Cloud, Sentinel?
If you are using AWS, this tool has helped me in the past for creating sane policy document templates:
Thanks That helped
Thanks @maarten, I think it’s a case of staring at a problem for too long
2019-08-21
Can’t recall but when you create an AWS SG rule with terraform and you do something like count index vs creating separate rules, which one doesn’t delete the whole sg and create a new one? Instead of creating a new sg every time I’d like to just keep adding/removing ports if needed.
from_port = "80"
vs
from_port = var.allowed_ports[count.index]
I know in your tf you do the second one. let me know! Thank you!
@pericdaniel when you create a separate aws_security_group
w/o rules, you can add as many aws_security_group_rule
as you need w/o recreating the whole SG
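i.e. something along these lines (resource names and var.allowed_ports are assumptions; removing a port from the middle of the list will shuffle the rules after it, but the SG itself is never recreated):
resource "aws_security_group" "default" {
  name   = "example"
  vpc_id = "${var.vpc_id}"
}

resource "aws_security_group_rule" "ingress" {
  count             = "${length(var.allowed_ports)}"
  type              = "ingress"
  protocol          = "tcp"
  from_port         = "${element(var.allowed_ports, count.index)}"
  to_port           = "${element(var.allowed_ports, count.index)}"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.default.id}"
}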
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
#office-hours starting now! https://zoom.us/j/508587304
Curious, are you all using or looked into dependabot for terraform module dependencies - https://dependabot.com/terraform/
Automated dependency updates for your Ruby, Python, JavaScript, PHP, .NET, Go, Elixir, Rust, Java and Elm.
for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20
I forgot to update the version for CI to 0.12, will try and push that out
This moves us to terraform 0.12, it is working with our usages of this module, but it hasn't been tested completely with all options, but does appear valid. note that the examples aren't po…
Note, this depends on cloudposse/terraform-aws-ec2-autoscale-group#14 getting merged and then making a change here to reference that new tag. This does the upgrade and also copies the new arguments…
This moves this module to terraform 0.12, the example isn't ported, as some of those modules aren't 0.12 compliant yet, but this is working with our EKS clusters. I notice there are also te…
I am using them and they are working
but could use some help to get the rest of the work done (porting examples and adding the new CI stuff)
except… looking around, I have no idea how to get 0.12 to run as part of the CI
2019-08-22
Error: The role “arniam:role/gc-invoicedataimport-function-role” cannot be assumed. There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
IMPORTANT: Upcoming change to AWS Cost and Usage Report Access Control Policies on August 19th
2019-08-23
Do you need to use version Terraform 0.11 when bootstrapping with the reference-architectures?
Hey folks. Is there a way to use a merge or use a splat-type operator in a terraform child block (not identifier values) ? i.e.
data "aws_ami" "potato" {
filter {
...local.default_filters
}
}
I’m effectively trying to filter merge(local.thing, {})
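There’s no splat for nested blocks, but in TF 0.12 a dynamic block gets close; a sketch, assuming local.default_filters is a map of filter name => list of values:
data "aws_ami" "potato" {
  most_recent = true
  owners      = ["amazon"]

  dynamic "filter" {
    for_each = local.default_filters

    content {
      name   = filter.key
      values = filter.value
    }
  }
}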
So, when you’re creating an ECS service, you either have the choice between using an ALB or not.
If you’re using an ALB, you need to pass in additional objects for the load_balancer
parameter, example here: https://www.terraform.io/docs/providers/aws/r/ecs_service.html#example-usage
Provides an ECS service.
How would you structure a module to make this object parameterized and optional?
I guess this kind of answers my question: https://github.com/blinkist/terraform-aws-airship-ecs-service/blob/master/modules/ecs_service/main.tf
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service
just create a ton of these different ecs_service resources based on how they’re configured.
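Alternatively, with TF 0.12 a dynamic block can make the load_balancer block itself optional, avoiding duplicated resources; a sketch where var.load_balancers is an assumed list of objects (empty list = no ALB):
resource "aws_ecs_service" "default" {
  name            = var.name
  cluster         = var.cluster_arn
  task_definition = var.task_definition_arn
  desired_count   = var.desired_count

  dynamic "load_balancer" {
    for_each = var.load_balancers

    content {
      target_group_arn = load_balancer.value.target_group_arn
      container_name   = load_balancer.value.container_name
      container_port   = load_balancer.value.container_port
    }
  }
}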
The variable "lambda_settings" is required, so Terraform cannot proceed without a defined value for it. - any idea about this?
2019-08-26
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Sep 04, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-08-27
Hey all! New to terraform, I’m trying to do some acm cert validation using : https://www.terraform.io/docs/providers/aws/r/acm_certificate_validation.html
In the link, they show the following r53 record for cert validation being created:
resource "aws_route53_record" "cert_validation" {
name = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_name}"
type = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_type}"
zone_id = "${data.aws_route53_zone.zone.id}"
records = ["${aws_acm_certificate.cert.domain_validation_options.0.resource_record_value}"]
ttl = 60
}
Specifically, aws_acm_certificate.cert.domain_validation_options.0.resource_record_name
, I create my acm certs(Specifically 2) like this:
resource "aws_acm_certificate" "cert" {
provider = "aws.acm"
domain_name = "*.${var.domain}.${element(var.certs, count.index)}.${var.aws_env == "prod" ? "com." : "test."}"
validation_method = "DNS"
tags = "${local.required_tags}"
lifecycle {
create_before_destroy = true
}
count = "${length(var.certs)}"
}
How would I be able to reference each cert for validation?
Waits for and checks successful validation of an ACM certificate.
Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate
This might serve as a good reference point.
oh this looks like a good idea, thank you
This looks correct
what looks correct?
I dont know how to reference each cert for validation
@Brij S you are missing a resource - let me find it
ah, I see what you mean … you will need to somehow loop over all of your certs and specify a aws_acm_certificate_validation
resource for each one…
yeah.. thats the tricky part
not sure how to facilitate that
@Brij S are you using terraform 0.12+?
no, TF 0.11
it might be impossible to do this, since the count
for resource "aws_route53_record" "cert_validation"
will not be able to be <computed>
, i.e. can’t dynamically set to the cert count iirc
What are your thoughts on Terraform Enterprise?
@johncblandii
I like it. I think there are some corners for sure where they could improve, specifically around integrating with other systems without creating custom CLI solutions
PR integration is legit
private module registry is almost a requirement if you plan on having modules live in their own repos, the dependency management between all of them using just git tagging (only option non-enterprise) is such a pita
yeah, it definitely can be
i can demo any parts of it to anyone who wants to check it out
we terraformed all of our workspaces so all projects are basically just reusable TF modules
I’d be interested in that demo for sure
@johncblandii so you aren’t using workspaces for separation between environments? or did i read that wrong
we are
dev, uat, support, prod, training, etc
@jose.amengual email me: [email protected]
i read somewhere (i think official docs) that workspaces shouldn’t be used for this but they just work so perfectly for DRY
TFE is workspaces
literally
yea my guess is outdated docs or it wasn’t official and im mistaken
Terraform by HashiCorp
oh and the remote execution is legit
such a timesaver for devs who don’t have write access to prod to test fixes with prod secrets
write locally, use vars configured on Cloud
is it worth it ?
@sarkis right! that makes sense, any thoughts on how to go forward with this then
@Brij S i’d say if you want to continue to go with this dynamically, TF 0.12 upgrade may be the only option… otherwise only way I can think of solving this in 0.11 is to have static cert_validations - hopefully someone can prove me wrong here
ok, if I was on TF 0.12, how would you go about it
you just count over the variable input again, length(var.certs)
but how would i reference it?
aws_acm_certificate.cert.domain_validation_options.0.resource_record_name
refers to..one cert?
you want to create a route53 record for each certificate?
yep
because for me aws_acm_certificate.cert
contains two certs (corresponding to two zones)
resource "aws_route53_record" "cert_validation" {
count = "${length(var.certs)}"
name = "${aws_acm_certificate.cert.domain_validation_options.*.resource_record_name[count.index]}"
type = "${aws_acm_certificate.cert.domain_validation_options.*.resource_record_type[count.index]}"
zone_id = "${data.aws_route53_zone.zone.id}"
records = ["${aws_acm_certificate.cert.domain_validation_options.*.resource_record_value[count.index]}"]
ttl = 60
}
I had a problem with domain_validation_options
when using multiple names from aws_acm_certificate
resource - the order of domain_validation_options
was undefined. Not sure if it’s still a problem.
@loren this is awesome, would the following be the same too then?
resource "aws_acm_certificate_validation" "cert" {
certificate_arn = "${aws_acm_certificate.cert.arn}"
validation_record_fqdns = ["${aws_route53_record.cert_validation.fqdn}"]
}
something like that, can also use element(...)
interpolation
need the wildcard to reference all the resources…
resource "aws_acm_certificate_validation" "cert" {
certificate_arn = "${aws_acm_certificate.cert.arn}"
validation_record_fqdns = ["${aws_route53_record.cert_validation.*.fqdn}"]
}
there are also more syntax options in terraform 0.12
"${aws_acm_certificate.cert.arn}"
would be "${aws_acm_certificate.cert.*.arn[count.index]}"
?
validation_record_fqdns
is a list, so i think you want a single resource there, not multiple with count. just pass the list of all fqdns to the parameter, rather than a single one (using count.index
)
if you do want a aws_acm_certificate_validation
resource per cert though, then yes, same setup
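so per-cert it would look roughly like this (a sketch, reusing the resources from the earlier snippets):
resource "aws_acm_certificate_validation" "cert" {
  count                   = "${length(var.certs)}"
  certificate_arn         = "${element(aws_acm_certificate.cert.*.arn, count.index)}"
  validation_record_fqdns = ["${element(aws_route53_record.cert_validation.*.fqdn, count.index)}"]
}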
@loren mind if I DM you for more questions? Just want to confirm some things to make sure I understand
of course, i may not be online to respond quickly though. kinda doing this in between work tasks
Hi Guys
I have an issue when running terraform validate on this module, terraform-null-label
hi @Phuc
Has anyone experienced this before?
terraform-null-label git:(0.11) ✗ terraform validate
Error: local.generated_tags: local.generated_tags: zipmap: count of keys (1) does not match count of values (0) in:
${zipmap(
compact(list("Name", local.namespace != "" ? "Namespace" : "", local.environment != "" ? "Environment" : "", local.stage != "" ? "Stage" : "")),
compact(list(local.id, local.namespace, local.environment, local.stage))
)}
I didn’t adjust anything
just running simple command to validate at first
what TF version?
for 0.11 and below
I clone the repo on branch 0.11/master
how do you use it? show the example
actually I didn’t used it yet
I just try to validate the code first to see if there is error
just simple terraform init then terraform validate
and that error coming up
you need to validate module invocation with all vars provided, similar to https://github.com/cloudposse/terraform-null-label/blob/0.11/master/examples/complete/main.tf
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
thank @Andriy Knysh (Cloud Posse), I will try to follow that
Hi aknysh
Just a smal question need you to answer
I see there is 2 modules: terraform-label and terraform-null-label on github of cloudposse
It seems they are both for naming conventions.
so what is the difference between those modules ?
null-label
and terraform-label
are mostly the same in terms of naming convention and generating globally unique IDs for AWS resources
null-label
has much more features, e.g. context, additional tags as list of maps outputs, etc.
but with TF 0.11, all that complex logic in null-label
was sometimes throwing the count can't be computed errors
in top-level modules
that’s why we created a simplified version of it and named it terraform-label
so if you just need a naming convention and globally unique IDs, both could be used
but both were converted to TF 0.12 now, so null-label
should be OK to use (far fewer count can't be computed errors)
so try null-label
with TF 0.12, it has more features
(but both modules are supported)
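fwiw, basic usage looks like this (the version pin is illustrative; pick whichever release matches your TF version):
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace  = "eg"
  stage      = "prod"
  name       = "app"
  attributes = ["public"]
}

# module.label.id => "eg-prod-app-public"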
thanks @Andriy Knysh (Cloud Posse), due to current TF 0.11, I think I will test with null-label
@Andriy Knysh (Cloud Posse) Is there any possibility to support TF12 for https://github.com/cloudposse/terraform-aws-ec2-bastion-server module ? I can’t find any issue regarding this?
Terraform Module to define a generic Bastion host with parameterized user_data - cloudposse/terraform-aws-ec2-bastion-server
yes, we’ll convert it. Did not have time yet, we have more than 100 modules, converted 40+ so far
Thanks for the prompt response
2019-08-28
Can I know the estimated time for this PR to be merged - https://github.com/cloudposse/terraform-aws-multi-az-subnets/pull/16 ? Also TravisCI is failing because it checks this TF 0.12 upgrade PR with the TF 0.11 binary.
we will review it ASAP. We started converting the module to 0.12, but did not have time to finish it. We also adding tests for the module and for the example (bats and terratest) and Codefresh Ci/CD pipelines to deploy the example on AWS account
Sounds good Thanks buddy.
Hello all, I’m trying to do acm cert validation(multiple at a time) and I’m running into some issues.
resource "aws_route53_record" "cert_validation" {
count = length(var.certs)
name = aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_name
type = aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_type
zone_id = aws_route53_zone.zones[count.index].id
records = ["${aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_value}"]
ttl = 60
}
the above code..kinda works? when I apply this, I get the following error:
Error: Invalid index
on ../modules/bootstrap/acm_validation.tf line 4, in resource "aws_route53_record" "cert_validation":
4: type = aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_type
|----------------
| aws_acm_certificate.cert is tuple with 2 elements
| count.index is 0
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/bootstrap/acm_validation.tf line 6, in resource "aws_route53_record" "cert_validation":
6: records = ["${aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_value}"]
|----------------
| aws_acm_certificate.cert is tuple with 2 elements
| count.index is 0
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/bootstrap/acm_validation.tf line 6, in resource "aws_route53_record" "cert_validation":
6: records = ["${aws_acm_certificate.cert[count.index].domain_validation_options.0.resource_record_value}"]
|----------------
| aws_acm_certificate.cert is tuple with 2 elements
| count.index is 0
The given key does not identify an element in this collection value.
But when I do a destroy, it seems it created at least one..
module.nonprod.aws_route53_record.cert_validation[1]: Still destroying... [id=Z3FZBH8XNPPPYT__3aedbf37656ebde46d6db19a4f38212c.test-api.dev._CNAME, 30s elapsed]
module.prod.aws_route53_record.cert_validation[1]: Still destroying... [id=Z2IAS5UODUXPHA__73016316fbab15ae3d5db4d2b9b240c8.test-api.com._CNAME, 30s elapsed]
any ideas?
what version of TF? what’s type of var.certs
? If it’s a list of string, how many items are in it?
Terraform v0.12.5
variable "certs" {
default = ["apps", "api"]
type = "list"
}
you are mixing TF 0.11 code with TF 0.12
that’s why it’s not working
(yes, TF is not so smart)
oh, what do you mean
like my var is setup wrong?
var types and interpolations in TF code
im not sure I follow, what would I need to change?
not sure if that’s the reason of the errors, but first convert everything to TF 0.12 syntax
type = list(string)
0.12 ^
thank you both! indeed wizards. Would’ve never noticed the 11 vs 12 syntax. D’oh
just went to the top of my TF debug playbook as well - thanks to @Andriy Knysh (Cloud Posse)
Saw on Reddit: https://www.reddit.com/r/Terraform/comments/cwgy7r/i_created_a_visualizer_for_terraform_project_cc/
10 votes and 2 comments so far on Reddit
#office-hours starting now! join us here https://zoom.us/s/508587304
how would this line be turned into TF 0.12 syntax?
"*.${var.domain}-${element(var.certs, count.index)}.${var.aws_env == "prod" ? "com." : "dev."}"
it’s already TF 0.12 syntax since you are using string concatenation with interpolation
yes, I ran the terraform 0.12upgrade command and it didn’t change
also you could use this: format("*.%s-%s.%s", var.domain, var.certs[count.index], var.aws_env == "prod" ? "com." : "dev.")
whatever looks better for you
oh cool, didnt know I could do that
thank you again !!
did not see it before
you can try to run terraform taint
Hi everyone! We are currently using the terraform-null-label module for labels in Terraform, but we are running into an issue when updating our code to v0.12. We pass "context" between modules, so we have a variable defined in the module called "tags_context" with type: map. This was upgraded to type: map(string) by Terraform, but then the plan doesn’t work. What type should the context variable be in v0.12?
@Joshua Snider see #announcements
nvm, it was answered in #announcements
Did anyone come across NPM memory issues?
Hey folks! I see that there’s a Cloud Posse container definition module, but I’m wondering if there’s an easy way to make a container_definition a reusable variable (or similar) that still supports interpolation. Use case: I have an ECS service that I run as both Fargate and EC2 using two different modules and don’t want to duplicate the container definition to keep it dry.
the container definition for Fargate and ECS are slightly different
in Fargate the Task have to set memory and CPU and the container definition too
but in ECS EC2 that is not required
and there is some other differences on the network setup etc
I think is sane to have them separated
Hmmm, with 15+ envvars/secrets and 4 environments it feels very anti-dry to repeat it so many times.
you have them in variables ?
what is so not DRY about that ?
if you had them hardcoded I will agree
and you have 4 environments ?
so if you separate those for ENVs in it’s own TF
would you call it DRY ?
In my structure I have a directory per env for an application
we have some cluster with like 20 different task defs
Perhaps I should switch to using tfvars?
but in reality they should be it’s own thing
I use tfvars
Hmmm
I think if I reassessed how I structured this it would help my DRY concerns
Thanks for the feedback
we populate the tfvar from SSM parameter store
when necesary
or jenkins does it
from other secret/parameter store
Does it automatically create the tfvars based on what is in parameter store? I have my secrets namespaced with app/env/secret_name and was thinking I could write something to automatically grab all the params set for an app and put together the secrets block
you can use chamber for that
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
so you can do something like
chamber write test-ec2-helloworld ecs_parameter_secret password333
where
test-ec2-helloworld
is your app/service
Very interesting
chamber export test-ec2-hello -f tfvars
ecs_parameter_secret = "password1"
ecs_parameter_string = "NOTSECRET"
or you can use chamber as ENTRYPOINT in your containers
but I don’t know how useful that is since now you can use SM or SSM parameter store directly in the task def
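for example, in a TF 0.12 task definition you can point a container secret straight at an SSM parameter (the ARN, names, and the execution role’s ssm:GetParameters permission are all assumptions here):
resource "aws_ecs_task_definition" "default" {
  family = "test-ec2-helloworld"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "example/app:latest"
      memory    = 128
      essential = true

      # injected as an env var at runtime, never stored in the task def
      secrets = [
        {
          name      = "ECS_PARAMETER_SECRET"
          valueFrom = "arn:aws:ssm:us-east-1:123456789012:parameter/test-ec2-helloworld/ecs_parameter_secret"
        }
      ]
    }
  ])
}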
I’m trying to use a local variable as the bucket name:
resource "aws_s3_bucket" "remote_state" {
bucket = local.bucket_name
force_destroy = var.force_destroy
acl = "private"
versioning {
enabled = var.versioning_enabled
}
tags = local.required_tags
}
local var is :
locals {
bucket_name = "account-${var.aws_env}-${project_domain}-${var.aws_region}"
}
but when I try to run terraform plan
I get the following error:
50: bucket_name = "account-${var.aws_env}-${project_domain}-${var.aws_region}"
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
what does that mean?
Missing var in project_domain
missing var?
I have project_domain declared as a var
var.project_domain
omg!
Use a good editor with TF error detection, like JetBrains IDEA with TF plugin, or VS Code
I use vscode with the terraform plugin
ive been told its ‘alright’ though
it seems that this example : https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/examples/enhanced_monitoring/main.tf#L17 is not correct
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
does not work on TF 0.12
Maybe, only examples/complete tested in 0.12 and have automatic tests for them
I had to do
assume_role_policy = "${data.aws_iam_policy_document.enhanced_monitoring.json}"
to make it work
which I found it weird
I’m new to 0.12 and I posted this in the 0.11 channel
Weird indeed
@Andriy Knysh (Cloud Posse) one more question if you dont mind! I have a module declaration as follows:
module "s3-remote-state-bucket" {
source = "../../modules/remote_state"
versioning_enabled = var.versioning_enabled
force_destroy = var.force_destroy
aws_env = "nonprod"
aws_region = var.aws_region
aws_account_id_nonprod = var.aws_account_id_nonprod
aws_account_id_prod = var.aws_account_id_prod
providers = {
aws = "aws.nonprod"
}
}
in my variables.tf file in the same folder I have:
variable "aws_env" {
description = "aws account environment"
type = string
}
Note: no default value. but when I run apply I get asked for the aws_env:
terraform apply
var.aws_env
aws account environment
Enter a value:
I don’t want to use a default value for this; this worked as intended with TF 0.11 - anything new with TF 0.12 maybe?
Don’t believe 0.11 didn’t ask you for a missing value :)
Nothing changed in 0.12
im providing it in the module declaration though, strange
But maybe the var value was provided in tfvar file, or on command line, or in ENV var
huh, so if I remove it from the vars file and provide the value it works as intended
wild..
2019-08-29
Hi guys, I have an issue when running a test on a module creating S3
Here is the code in my main file
module "s3_bucket" {
source = "git:xxxxx/terraform-modules/aws-s3.git?ref=terraform_0.11"
enabled = "true"
user_enabled = "false"
allowed_bucket_actions = []
policy = ""
force_destroy = "false"
versioning_enabled = "true"
allow_encrypted_uploads_only = "false"
sse_algorithm = "AES256"
kms_master_key_arn = ""
namespace = "test"
name = "frontend"
stage = ""
attributes = []
delimiter = "-"
tags = {
"BusinessUnit" = "XYZ",
"Snapshot" = "true"
}
}
and here is variables.tf
variable "namespace" {
type = "string"
#default = ""
description = "Namespace (e.g. `eg` or `cp`)"
}
variable "stage" {
type = "string"
#default = ""
description = "Stage (e.g. `prod`, `dev`, `staging`)"
}
variable "name" {
type = "string"
#default = ""
description = "Name (e.g. `app` or `db`)"
}
variable "delimiter" {
type = "string"
default = "-"
description = "Delimiter to be used between `namespace`, `stage`, `name` and `attributes`"
}
variable "attributes" {
type = "list"
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = "map"
default = {}
description = "Additional tags (e.g. `{ BusinessUnit = \"XYZ\" }`"
}
variable "acl" {
type = "string"
default = "private"
description = "The canned ACL to apply. We recommend `private` to avoid exposing sensitive information"
}
variable "policy" {
type = "string"
default = ""
description = "A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy."
}
variable "region" {
type = "string"
default = ""
description = "If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee."
}
variable "force_destroy" {
type = "string"
default = "false"
description = "A boolean string that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable."
}
variable "versioning_enabled" {
type = "string"
default = "false"
description = "A state of versioning. Versioning is a means of keeping multiple variants of an object in the same bucket."
}
variable "sse_algorithm" {
type = "string"
default = "AES256"
description = "The server-side encryption algorithm to use. Valid values are `AES256` and `aws:kms`"
}
variable "kms_master_key_arn" {
type = "string"
default = ""
description = "The AWS KMS master key ARN used for the `SSE-KMS` encryption. This can only be used when you set the value of `sse_algorithm` as `aws:kms`. The default aws/s3 AWS KMS master key is used if this element is absent while the `sse_algorithm` is `aws:kms`"
}
variable "enabled" {
type = "string"
description = "Set to `false` to prevent the module from creating any resources"
default = "true"
}
variable "user_enabled" {
type = "string"
default = "false"
description = "Set to `true` to create an S3 user with permission to access the bucket"
}
variable "allowed_bucket_actions" {
type = "list"
default = ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:GetBucketLocation", "s3:AbortMultipartUpload"]
description = "List of actions the user is permitted to perform on the S3 bucket"
}
variable "allow_encrypted_uploads_only" {
type = "string"
default = "false"
description = "Set to `true` to prevent uploads of unencrypted objects to S3 bucket"
}
the issue is, if I run terraform validate to see the resources which will be created, I come up with this error:
Test_s3_module terraform validate
Error: Required variable not set: namespace
Error: Required variable not set: stage
Error: Required variable not set: name
but I already declared the value for each of those variables in main.tf. This error won’t show up if I put a default value under the variables file like this:
variable "namespace" {
type = "string"
default = "" <------ this line
description = "Namespace (e.g. `eg` or `cp`)"
}
variable "stage" {
type = "string"
default = "" <------ this line
description = "Stage (e.g. `prod`, `dev`, `staging`)"
}
variable "name" {
type = "string"
default = "" <------ this line
description = "Name (e.g. `app` or `db`)"
}
I’m testing on TF v0.11
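(fwiw, TF 0.11’s validate checks that every required variable has a value; if you only want a syntax check, terraform validate -check-variables=false should skip that, otherwise pass the vars or add defaults as you found)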
has anyone tried github actions for terraform? https://www.terraform.io/docs/github-actions/getting-started/index.html
Terraform by HashiCorp
I’m actually looking to do this soon. I get tired of having people re-commit/push due to TF fmt failing on CI. Looking to automate an fmt commit.
yeah, I followed their getting started… but it doesn’t work
(unsure if possible, but i’m going to try it out soon)
it is possible according to their docs
welp…then there is that
but I set it up to be invoked on PR’s
I tried it last night and it didnt even work
their documentation is a bit confusing
I’m a bit flummoxed. I have a 00_remote_state.tf
file that I’ve used all over the place that configures s3 for remote state for various vanilla terraform projects. I’m trying to use it now in a new project. terraform init
downloads the latest aws plugin and says it succeeds, but it’s not creating the key (dns-nonprod/terraform.tfstate
) that I’ve told it to use in the S3 bucket. I swear this used to work. Shouldn’t terraform init create the key up in S3? Running any terraform plan or terraform apply errors with state not found:
data.terraform_remote_state.ops_s3: Refreshing state...
data.aws_route53_zone.qa_example_net: Refreshing state...
data.aws_route53_zone.qa2_example_net: Refreshing state...
data.aws_route53_zone.dev_example_net: Refreshing state...
Error: Unable to find remote state
on 00_remote_state.tf line 1, in data "terraform_remote_state" "ops_s3":
1: data "terraform_remote_state" "ops_s3" {
Does this behavior sound familiar to anybody?
I have tried forcing older aws plugin version that worked fine previously. 2.19 and 2.14 (2.25 is latest). No change in behavior. I can paste the file here if requested. It’s just not making any sense to me.
- terraform versions are the same or different?
Running terraform 0.12.4, same as I have been for weeks.
Thanks for the feedback, BTW.
- Does the user you are using to provision have the permissions to access the remote state?
I just created a file in that S3 bucket, and then deleted it. I do have create access.
I ran the init with TF_LOG=debug
. I see it checks if the file exists up in S3. But then it never tries to create it.
question: I am trying to build a module wrapping a resource and don’t want to provide every single argument for the resource’s blocks.
resource "type" "name" {
some_block {
blah = true
}
}
I want to instead say:
resource "type" "name" {
some_block = var.some_block
}
I did see possibly using a loop, but I’m not seeing a definitive answer or direction.
dynamic blocks (TF 0.12 only)
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
perfect
if the var is not a list of objects (and you don’t want it to be a list), then you could do something like this:
dynamic "bootstrap_action" {
for_each = [var.bootstrap_action]
content {
path = bootstrap_action.value.path
name = bootstrap_action.value.name
args = bootstrap_action.value.args
}
}
if you want conditionally add the block depending on some bool expression:
dynamic "bootstrap_action" {
for_each = var.add_block ? [var.bootstrap_action] : []
content {
path = bootstrap_action.value.path
name = bootstrap_action.value.name
args = bootstrap_action.value.args
}
}
nice
so we still need to define the content
of each individual one, but we do not need to worry w/ defining all vars
that’s lovely
and this works with blocks of blocks?
can you for_each
in a content
block?
yes
good deal
spot check (if you don’t mind):
resource "aws_msk_cluster" "this" {
cluster_name = var.cluster_name
kafka_version = var.kafka_version
number_of_broker_nodes = var.number_of_broker_nodes
tags = var.tags
dynamic "client_authentication" {
for_each = var.client_authentication
content {
dynamic "tls" {
for_each client_authentication.tls
content {
certificate_authority_arns = client_authentication.tls.value.certificate_authority_arns
}
}
}
}
}
https://www.terraform.io/docs/providers/aws/r/msk_cluster.html
var.client_authentication
should be list(object)
tls
inside it should be list(string)
and add this:
dynamic "tls" {
for_each = toset(client_authentication.value.tls)
iterator = item
content {
certificate_authority_arns = item.value.certificate_authority_arns
}
TF 0.12 has difficulties with list(string)
, needs it to be set(string)
so i take it we need to flesh out the object
in the variable declaration as well?
Error: Invalid type specification
on ../variables.tf line 18, in variable "client_authentication":
18: type = list(object)
The object type constructor requires one argument specifying the attribute
types and values as a map.
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
@Andriy Knysh (Cloud Posse) I thought I had found it.
In my s3.conf used for terraform init -backend-config=s3.conf
, I had put dns-nonprod/terraform.tfstate
.
And in my 00_remote_state.tf
, I had put dns_nonprod/terraform.tfstate
.
I fixed the second one, and now it’s giving me the same error. This is perplexing, still working through it. Thanks for the sanity questions previously.
In my s3.conf, I have:
bucket = "foo"
key = "dns-nonprod/terraform.tfstate"
region = "us-east-1"
Is it possible to make the terraform_remote_state
configuration read that in for the config = { ... }
section? I hate having two different sources of truth (one for terraform init - the s3.conf, and one for everything else)
@Robert has joined the channel
@Todd Lyons with the consul backend you can do something like this:
terraform {
backend "consul" {
address = "consul.vault:8500"
scheme = "http"
path = ""
}
}
export BACKEND_KEY=project/environment/name
terraform init -backend-config="path=$BACKEND_KEY" "$TERRAFORM_DIR"
Maybe you could do the same with the key
.
I apologize, but I don’t quite understand the “do the same with the key
” comment. I’m still experimenting though, I may get better results now that I’m no longer dealing with the chicken/egg issue.
No worries
It sounds like I can only do it with something remote, not a local file. I tried reading it in using the local_file resource and then all manner of tomap() and split() and such things. Some things flat out errored, some things acted like they were going to work but then complained about the attempt I was making.
I also found the original issue I was having. It turns out that the key in S3 isn’t actually created until the first apply is run. So when I was doing a terraform plan, the data lookup wasn’t finding it. I wonder how the heck it ever worked.
Hmmm, maybe now that I got past that original issue, I can retry some things.
What is your goal again? And the issue
I want to configure my s3 bucket name, key, and region in one place, that can be used both by terraform init and by terraform plan / apply / refresh / output, etc. I think I just figured it out (now that the previous issue I was working on is resolved).
I have a file, s3.conf
, that has:
$ cat s3.conf
bucket = "foo"
key = "dns-nonprod/terraform.tfstate"
region = "us-east-1"
I initialize like this: terraform init -backend-config=s3.conf
And my remote state is configured like this:
data "local_file" "s3" {
filename = "${path.module}/s3.conf"
}
terraform {
backend "s3" {
config = tomap(data.local_file.s3.content)
}
}
# Default provider works for the various pieces of the terraform initialization
provider "aws" {
region = "us-east-1"
}
this is really cool.
for the longest time, interpolation wasn’t supported in this context.
I thought it still wasn’t
@Andriy Knysh (Cloud Posse) I think we could benefit from this too.
could be useful, agree
So far, it seems to be working.
The simple solution above was just muddied by my initial error: attempting to set a data “terraform_remote_state” for an S3 key dns-nonprod/terraform.tfstate
that didn’t exist yet, because I hadn’t yet run an apply (couldn’t run an apply because init failed, because that file didn’t exist yet because I hadn’t yet run an apply, circular dependencies FTW).
I don’t know why, but I could swear that terraform init used to create that S3 key with a minimal tfstate file. I must have been wrong. This has all been with the latest aws module and terraform 0.12.4.
Sorry for spamming the channel. I’ll exercise restraint from now on.
i would put the backend files into a separate folder, provision it first (w/o specifying remote backend obviously b/c it does not exist yet), then add the remote backend config to the code, then terraform init
will ask you to import the old (local) backend config into the remote one
after that, don’t touch the tf-backend
folder
for all other modules, use diff folders
although outdated, the doc above will give you an idea on what needs to be done to provision the remote backend without having a remote backend to provision it in the first place
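the shape of that bootstrap is roughly this (all names assumed; the first apply runs with local state, then the backend block is added and terraform init offers to migrate the state into the bucket):
resource "aws_s3_bucket" "tfstate" {
  bucket = "example-terraform-state"
  acl    = "private"

  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "tfstate_lock" {
  name           = "example-terraform-state-lock"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# added only after the first apply:
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "tfstate-backend/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-state-lock"
    encrypt        = true
  }
}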
here is the project structure that we usually use https://github.com/cloudposse/testing.cloudposse.co/tree/master/conf
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
tfstate-backend
is in separate folder and gets provisioned separately and only once
the code for tfstate-backend
is here https://github.com/cloudposse/terraform-root-modules/tree/master/aws/tfstate-backend
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
the script to provision the backend locally and then enable remote backend https://github.com/cloudposse/terraform-root-modules/blob/master/aws/tfstate-backend/scripts/init.sh
I’ll surely study them. If I’m having trouble, I’ll bug my co-worker Jon and see if he can fill in the gaps.
Thank you for all you’ve shown me.
I’ve seen some chatter on the pre-commit-terraform working with tf 0.12, but it seems to have a problem with a simple something = var.something
declaration
I’m on 1.19 of pre-commit-terraform so that should support it just fine
never used that version. why not switch to the latest 0.12?
pre-commit-terraform version
TF 0.12 p-c-t 1.19
hookid: terraform_docs
2019/08/30 10:51:44 At 41:5: error while trying to parse object within list: At 42:40: Unknown token: 42:40 IDENT null
^ it seems terraform-docs is not able to properly parse the code
Running outside of pre-commit shows:
~/Work/terraform-aws-kafka (git: feature/DEVOPS-557-kafka-module) (tf: default): terraform-docs md document .
2019/08/30 10:53:08 At 2:28: Unknown token: 2:28 IDENT var.cluster_name
line 2:
resource "aws_msk_cluster" "this" {
cluster_name = var.cluster_name
...
}
that error is when you try to parse 0.12 code with TF 0.11
but i’m on tf 0.12
: tf --version
Terraform v0.12.7
+ provider.aws v2.25.0
maybe you have two of them installed
I am using tfenv
make sure in the Dockerfile :
# Install terraform 0.11 for backwards compatibility
RUN apk add terraform_0.11@cloudposse
# Install terraform 0.12
RUN apk add terraform_0.12@cloudposse terraform@cloudposse==0.12.3-r0
if you are using geodesic for that
nopers. all local