#terraform (2023-07)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2023-07-01

Krish avatar

Hey guys, appreciate any help with this: https://stackoverflow.com/questions/74904283/how-to-pass-variables-between-terragrunt-workspaces

I’ve applied the networks-vpc workspace first and refreshed the state, which shows the outputs as well. However, I’m unable to pass the subnet values into the “ec2-amz-lnx” workspace. Not sure what I’m doing wrong. Happy to provide any further info if needed. Thanks

Hao Wang avatar
Hao Wang

I’ve heard of Terragrunt but never used it. Is it possible not to use it?

Krish avatar

Unfortunately not. The whole point of Terragrunt is to address Terraform’s drawbacks in keeping configuration DRY (ref: https://terragrunt.gruntwork.io/)

Terragrunt | Terraform wrapper

Terragrunt is a thin wrapper for Terraform that provides extra tools for keeping your Terraform configurations DRY, working with multiple Terraform modules, and managing remote state.

Hao Wang avatar
Hao Wang

Can Terraform modules help DRY?

BATeller avatar
BATeller

I’ve used terragrunt quite a bit to pull state from another workspace, typically S3.

Are you pulling state with something like:

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "some-terraform-state-s3-bucket"
    key    = "full/path/${var.env}/${var.env}-vpc/terraform.tfstate"
    region = "us-west-2"
  }
}

Then referencing in your config with something like: data.terraform_remote_state.vpc.outputs.vpc_id?

Hao Wang avatar
Hao Wang

yeah

Hao Wang avatar
Hao Wang

If the wrapper is the problem, you may need to test whether the code works without the wrapper

2023-07-02

2023-07-03

Saichovsky avatar
Saichovsky

Heya terraformers,

I would like to create an external data source, but I see no timeouts… Is there a default timeout for reading this resource type? I have a script that is likely to run for about 10 minutes or so, and I am hoping that my build does not fail due to a shorter timeout period

kallan.gerard avatar
kallan.gerard

Hey mate, sounds like there’s probably some better ways of doing what you’re trying to do, but as far as I know there’s no timeout on the external data source.

Where is the terraform running?

Saichovsky avatar
Saichovsky

Hey,

Terraform is running on a Jenkins worker. I am trying to bake AMIs using Image Builder whenever we commit to master. The AWS provider does not have a resource for starting image builds, so I would like to have an external data source do that and then pass the AMI name back to Terraform so it can be set in a launch template.

I will try it on my local env and see how it goes. The builds take about 40 minutes and I wouldn’t want them failing

kallan.gerard avatar
kallan.gerard

I would strongly advise against doing that. I never recommend wrapping a build process within Terraform.

kallan.gerard avatar
kallan.gerard

There are a few different ways you could tackle it, but as a simplified scenario: can you not just build the AMI in one job and pass the AMI ID to Terraform in a second job?

kallan.gerard avatar
kallan.gerard

I’m not really familiar with Jenkins, but whatever construct they use for stages of execution

Saichovsky avatar
Saichovsky

How would you approach this from whatever CI tool you’d use?

kallan.gerard avatar
kallan.gerard

What are the particular circumstances that kick off this whole process?

Saichovsky avatar
Saichovsky

commit change to master branch

kallan.gerard avatar
kallan.gerard

As in, what is the business reason for building a new ami and updating a template

kallan.gerard avatar
kallan.gerard

Okay so taking a step back what causes a new commit to the default branch

Saichovsky avatar
Saichovsky

So we have AWS instances which serve as VPN peers to partner networks. Whenever we need to establish a new VPN tunnel with a new peer, we add the parameters to our repo and upon merging, CICD should trigger the creation of a new instance. This new instance is supposed to have certain tools installed inside (packages, SSL certs, config files rendered by chef, etc). This whole process used to take a bit of time, so we decided to bake all the dependencies into an AMI, and then update the launch instances with the new AMI and tag it with the commit hash (alongside the other tags that we normally use)

kallan.gerard avatar
kallan.gerard

So are the new parameters used within the AMI image itself?

kallan.gerard avatar
kallan.gerard

As in when you update the parameters what part are you updating specifically

Saichovsky avatar
Saichovsky

IPSEC parameters for example

Saichovsky avatar
Saichovsky

peer IP addresses, etc

Saichovsky avatar
Saichovsky

rendering config files using such parameters

kallan.gerard avatar
kallan.gerard

Okay cool cool. So there’s a few ways you could approach it,

kallan.gerard avatar
kallan.gerard

If I was doing something like that with GitHub and GitHub actions here’s how I’d probably do it.

kallan.gerard avatar
kallan.gerard

On commit to master a github actions job kicks off, which builds the new AMI, and adds the ami details and the release notes to a GitHub release

kallan.gerard avatar
kallan.gerard

Then my Terraform provisioning would be triggered to run off release events.

kallan.gerard avatar
kallan.gerard

You could also consider going more of a pure gitops model and having the job change an actual ami variable in git

Saichovsky avatar
Saichovsky

What do you mean by ami variable? The ami name?

kallan.gerard avatar
kallan.gerard

One problem with your current implementation is that your ami id isn’t actually stored in git is it

kallan.gerard avatar
kallan.gerard

As in if you searched your master branch for your ami id it wouldn’t show up would it

Saichovsky avatar
Saichovsky

We cannot store the ami anywhere as the ami does not exist - it needs to get built first

kallan.gerard avatar
kallan.gerard

The ami name yeah

Saichovsky avatar
Saichovsky

We have a base image that we build upon.

kallan.gerard avatar
kallan.gerard

Whatever the output of the image builder is you’re using

kallan.gerard avatar
kallan.gerard

Yeah but the building of the ami doesn’t have any inherent coupling to terraform

kallan.gerard avatar
kallan.gerard

Like when I build a Dockerfile and publish it to a registry, it’s still built and published. Whether or not I do something with it immediately or later or never

kallan.gerard avatar
kallan.gerard

So there’s no reason your terraform has to build the ami, you could just provide the ami name to a terraform variable

kallan.gerard avatar
kallan.gerard

Whether that’s an input in CI or an update to a .tfvars on the master branch

Saichovsky avatar
Saichovsky

So how does the AMI get built?

Saichovsky avatar
Saichovsky

the AMI whose name you pass to TF

kallan.gerard avatar
kallan.gerard

Whatever you’re doing in the local executor script

kallan.gerard avatar
kallan.gerard

Just do that in your CI environment

Saichovsky avatar
Saichovsky

Oh, I see. So execute the script directly from CI, rather than using the external data resource?

1
kallan.gerard avatar
kallan.gerard

Then you can either pass the ami name or use a data.aws_ami to get the latest version of that ami
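
For the data.aws_ami option, something like this — a minimal sketch, assuming the build job names AMIs with a predictable prefix (the prefix and resource names here are illustrative, not from the thread):

data "aws_ami" "vpn_peer" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["vpn-peer-*"]
  }
}

resource "aws_launch_template" "vpn_peer" {
  name_prefix = "vpn-peer-"
  image_id    = data.aws_ami.vpn_peer.id
  # ...
}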

kallan.gerard avatar
kallan.gerard

Always separate builds from deployments/provisioning

1
Saichovsky avatar
Saichovsky

Alright. Let me see how to go about that. Thanks for the insights

kallan.gerard avatar
kallan.gerard

No problem

Jay avatar

Hi all, I’m using Terraform to deploy an EC2 instance on Windows Server 2022 (using the AWS base image for this). I have a user data script that executes at launch with no problems on Server 2019, but for some reason it doesn’t seem to work at launch on Server 2022. The script runs fine when run locally on the box. Wondering if anyone has come across this issue?

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi, has anyone been able to successfully run userdata (powershell script) at launch of an EC2 instance on Windows Server 2022 using Amazons base image for this?

2023-07-04

Psy-Q avatar

For terraform-aws-mq-broker there seems to be a deprecated argument now with aws 5.x:

│ Warning: Argument is deprecated
│ 
│   with module.mq.module.mq_broker.aws_ssm_parameter.mq_application_username[0],
│   on .terraform/modules/mq.mq_broker/main.tf line 74, in resource "aws_ssm_parameter" "mq_application_username":
│   74:   overwrite   = var.overwrite_ssm_parameter
│ 
│ this attribute has been deprecated

I’ve created an issue if that’s OK: https://github.com/cloudposse/terraform-aws-mq-broker/issues/64

1
BATeller avatar
BATeller

I’ve expanded on your issue and linked an upstream bug with the overwrite argument

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Psy-Q we’ll have a look asap

2023-07-05

Joe Perez avatar
Joe Perez

Hello Terraformers! Here’s a post that I created to show how you can grab a current list of VPC names in your environment https://www.taccoform.com/posts/tfg_p7/

How Do I Retrieve VPC Names Via Terraform?

Overview At some point in your AWS and Terraform journey, you may need the names of the VPCs in a given AWS region. You’d normally look to use a data source lookup, but the name field is not exposed as an attribute for aws_vpcs. Here’s a quick way to get those VPC names. Lesson You will still need to use the aws_vpcs data source to get a list of all the VPC IDs in a given region:

1
1
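
For reference, the approach the article describes is roughly this — a minimal sketch (the output name is illustrative): list the VPC IDs with aws_vpcs, then look up each VPC to read its Name tag:

data "aws_vpcs" "all" {}

data "aws_vpc" "each" {
  for_each = toset(data.aws_vpcs.all.ids)
  id       = each.value
}

output "vpc_names" {
  value = [for v in data.aws_vpc.each : lookup(v.tags, "Name", v.id)]
}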
managedkaos avatar
managedkaos

Thanks for sharing! Can you also share the recipe for the image at the top of the article?

1
1
Joe Perez avatar
Joe Perez

I wish I remembered that recipe; it’s from a couple of years ago

2023-07-06

2023-07-07

Balazs Varga avatar
Balazs Varga

How does the aws_organizations_account Terraform module create an account under an OU?

Does it create the account directly under the OU, or create it in the root and then move it to the OU?

Bruno Lucena avatar
Bruno Lucena

Hi, I don’t know if this is the right channel. Sometimes rolling out a Helm update using cloudposse/helm-release/aws (version=0.8.1) “breaks” my Helm deployment.

Thank you for any help

⎈|arn:aws:eks:us-west-2:605322476540:cluster/notifi-uw2-dev-eks-cluster:default) bruno@t490s  ~/Notifi/notifi-infra   fix/change-trace-id-string  helm ls -n prometheus --debug
NAME                            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
oauth2-proxy-alertmanager       prometheus      1               2023-07-06 20:01:06.47035317 +0000 UTC  deployed        oauth2-proxy-6.13.1     7.4.0      
oauth2-proxy-prometheus         prometheus      1               2023-07-06 20:01:00.401293758 +0000 UTC deployed        oauth2-proxy-6.13.1     7.4.0      
(⎈|arn:aws:eks:us-west-2:605322476540:cluster/notifi-uw2-dev-eks-cluster:default) bruno@t490s  ~/Notifi/notifi-infra   fix/change-trace-id-string  k get secrets -n prometheus
NAME                                                           TYPE                 DATA   AGE
alertmanager-kube-prometheus-stack-alertmanager                Opaque               2      19h
alertmanager-kube-prometheus-stack-alertmanager-generated      Opaque               2      19h
alertmanager-kube-prometheus-stack-alertmanager-tls-assets-0   Opaque               0      19h
alertmanager-kube-prometheus-stack-alertmanager-web-config     Opaque               1      19h
kube-prometheus-stack-admission                                Opaque               3      19h
kube-prometheus-stack-grafana                                  Opaque               3      19h
oauth2proxy-alertmanager                                       Opaque               3      19h
oauth2proxy-prometheus                                         Opaque               3      19h
prometheus-kube-prometheus-stack-prometheus                    Opaque               1      19h
prometheus-kube-prometheus-stack-prometheus-tls-assets-0       Opaque               1      19h
prometheus-kube-prometheus-stack-prometheus-web-config         Opaque               1      19h
sh.helm.release.v1.kube-prometheus-stack.v1                    helm.sh/release.v1   1      19h
sh.helm.release.v1.kube-prometheus-stack.v2                    helm.sh/release.v1   1      37m
sh.helm.release.v1.kube-prometheus-stack.v3                    helm.sh/release.v1   1      22m
sh.helm.release.v1.oauth2-proxy-alertmanager.v1                helm.sh/release.v1   1      19h
sh.helm.release.v1.oauth2-proxy-prometheus.v1                  helm.sh/release.v1   1      19h
Mike Shade avatar
Mike Shade

It’s not really clear what the problem is here?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, agree - don’t see what’s broken.

Bruno Lucena avatar
Bruno Lucena

PS: I have to delete the secret and then the release comes back

Daniel Ade avatar
Daniel Ade

Hey guys, I’ve used Terraform to create an ECS cluster and it works locally. When I try to use it in a GitHub Action, the terraform apply is successful but no resources are created, and when I check the Terraform state from my local machine it says there are no resources. However, when I then run a terraform apply from my local machine, it says some roles already exist that didn’t exist prior to the GitHub Actions apply. They are using the same backend, so it shouldn’t be a state file issue. Does anyone know what’s happening?

msharma24 avatar
msharma24

My best guess: you’re missing the TF state backend.

Add backend.tf locally and run tf apply to migrate the state to S3, and then commit the backend.tf file to GH:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think the cluster does exist but you are looking in the wrong region, or perhaps account

Alex Jurkiewicz avatar
Alex Jurkiewicz

Probably locally you’ve explicitly set the region, while in GitHub Actions it’s defaulting to us-east-1

Daniel Ade avatar
Daniel Ade

My Terraform file has this backend locally and on GitHub Actions, so it should be in the same region:

backend "s3" {
  bucket         = "firstwebsite-tf-state-backend"
  key            = "tf-infra/terraform.tfstate"
  region         = "us-east-1"
  dynamodb_table = "terraform-state-locking"
}

Daniel Ade avatar
Daniel Ade

And everything should be in the same region

Mike Shade avatar
Mike Shade

That is just the region for the state file. Can you share the provider config?

Daniel Ade avatar
Daniel Ade

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  backend "s3" {
    bucket         = "firstwebsite-tf-state-backend"
    key            = "tf-infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locking"
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

resource "aws_ecs_cluster" "dan" {
  name = "diggas"
  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

resource "aws_ecs_cluster_capacity_providers" "dan_capacity_provider" {
  cluster_name       = aws_ecs_cluster.dan.name
  capacity_providers = [aws_ecs_capacity_provider.test.name]
  default_capacity_provider_strategy {
    base              = 1
    weight            = 100
    capacity_provider = aws_ecs_capacity_provider.test.name
  }
}

resource "aws_ecs_capacity_provider" "test" {
  name = "test"
  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.bar.arn
    managed_termination_protection = "ENABLED"
    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 1
      maximum_scaling_step_size = 1
    }
  }
}

resource "aws_iam_role" "ecs_agent" {
  name               = "ecs-agent"
  assume_role_policy = data.aws_iam_policy_document.ecs_agent.json
}

resource "aws_iam_role" "execution_role" {
  name = "execution-ecs-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Sid       = ""
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

data "aws_iam_policy_document" "ecs_agent" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role_policy_attachment" "ecs_agent_permissions" {
  role       = aws_iam_role.ecs_agent.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "instance_profile" {
  name = "comeon-instanceprofile"
  role = aws_iam_role.execution_role.name
}

resource "aws_iam_role_policy_attachment" "ecs_task_permissions" {
  role       = aws_iam_role.execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_agent" {
  name = "ecs-agent"
  role = aws_iam_role.ecs_agent.name
}

resource "aws_launch_template" "test" {
  name_prefix = "test"
  iam_instance_profile {
    name = aws_iam_instance_profile.ecs_agent.name
  }
  image_id      = "ami-0bf5ac026c9b5eb88"
  instance_type = "t3.large"
  user_data = base64encode(<<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=diggas" >> /etc/ecs/ecs.config
  EOF
  )
}

resource "aws_autoscaling_group" "bar" {
  availability_zones = ["us-east-1a"]
  desired_capacity   = 1
  max_size           = 1
  min_size           = 1
  launch_template {
    id      = aws_launch_template.test.id
    version = "$Latest"
  }
  protect_from_scale_in = true
}

resource "aws_default_vpc" "default" {
  tags = { Name = "Default VPC" }
}

resource "aws_ecs_task_definition" "tformtest" {
  family = "tformtest"
  container_definitions = jsonencode([{
    name               = "tform"
    image              = "public.ecr.aws/v8j0g7n1/firstwebapp:latest"
    cpu                = 2048
    memory             = 4096
    essential          = true
    execution_role_arn = "ecsTaskExecutionRole"
    network_mode       = "default"
    portMappings = [{
      containerPort = 8000
      hostPort      = 8000
    }]
  }])
}

resource "aws_ecs_service" "test_service" {
  name            = "test-service"
  cluster         = aws_ecs_cluster.dan.id
  task_definition = aws_ecs_task_definition.tformtest.id

  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200

  desired_count = 1
}

data "aws_vpc" "default" {
  default = true
}

data "aws_route_table" "default" {
  vpc_id = data.aws_vpc.default.id
  filter {
    name   = "association.main"
    values = ["true"]
  }
}
Daniel Ade avatar
Daniel Ade

That’s the whole file.

2023-07-08

Brian Ojeda avatar
Brian Ojeda

I published a new TF module that allows you to use Docker to build artifacts (e.g., a zip file that contains Lambda source code) without polluting the machine running TF and Docker. You may or may not find it useful, but I found it very useful, especially for building Lambda@Edge functions that have deployment-specific configuration.

https://registry.terraform.io/modules/sgtoj/artifact-packager/docker/latest

joshmyers avatar
joshmyers

Any Hashi folks in here these days? https://github.com/hashicorp/terraform-provider-aws/pull/31284 has been sitting there for 2 months.

#31284 DDB update replica Amazon Owned SSE

Description

Fixes update DDB table action to only try and update replicas CMK if actually using a CMK.

Relations

Closes #31153

References

Built and tested this locally. Ran what was failing on the linked bug report which applied clean.

Output from Acceptance Testing

Before change:

❯ TF_ACC=1 go test ./internal/service/dynamodb/... -v -count 1 -parallel 20 -run='TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned'  -timeout 180m
=== RUN   TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== PAUSE TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== CONT  TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
    table_test.go:1771: Step 2/2 error: Error running apply: exit status 1

        Error: updating DynamoDB Table (tf-acc-test-9185934778554862359) SSE: ValidationException: 1 validation error detected: Value '' at 'replicaUpdates.1.member.update.kMSMasterKeyId' failed to satisfy constraint: Member must have length greater than or equal to 1
        	status code: 400, request id: S56IOFM90QM4O98R5OIFJNG5IFVV4KQNSO5AEMVJF66Q9ASUAAJG

          with aws_dynamodb_table.test,
          on terraform_plugin_test.tf line 14, in resource "aws_dynamodb_table" "test":
          14: resource "aws_dynamodb_table" "test" {

--- FAIL: TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned (154.64s)
FAIL
FAIL	github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb	154.709s
FAIL

After change:

❯ TF_ACC=1 go test ./internal/service/dynamodb/... -v -count 1 -parallel 20 -run='TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned'  -timeout 180m
=== RUN   TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== PAUSE TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== CONT  TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
--- PASS: TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned (331.96s)
PASS
ok  	github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb	332.037s
Alex Jurkiewicz avatar
Alex Jurkiewicz

IMO, one reason Hashi folk are in here is that they don’t get context-free “+1” requests constantly

2023-07-10

Rajat Verma avatar
Rajat Verma

Hello #terraform! Has anyone set up a Google Binary Authorization policy to sign the images in Google’s Artifact Registry?

Hao Wang avatar
Hao Wang

ChatGPT to the rescue lol

1
Rajat Verma avatar
Rajat Verma

Hahah indeed

Hao Wang avatar
Hao Wang

Forwarded to ai as well

mike avatar

Hey all. Is there a way to do string expressions in Terraform? For example, I have this:

resource "aws_ssm_parameter" "authz_server_name" {
  name        = "server_name"
  value       = module.authz_server_remote_state.outputs.authz_server_name
  description = "Server name"
  type        = "String"
  overwrite   = true
}

but I would like to do this:

resource "aws_ssm_parameter" "authz_server_name" {
  name        = "server_name"
  value       = eval("module.authz_server_remote_state.outputs." + var.value_key_name)
  description = "Server name"
  type        = "String"
  overwrite   = true
}

Is something like this possible with Terraform?

loren avatar

well you can create strings using interpolation, but there is no way to evaluate references like that

loren avatar

your best bet for this particular use case would be to modify the source to be a map, and then you can index into the map, e.g.

module.authz_server_remote_state.outputs.authz_server_names[var.value_key_name]
3
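
To make that concrete — a rough sketch (the output values are illustrative): the producing workspace exposes a map output, and the consumer indexes into it:

# In the workspace whose state is read by authz_server_remote_state:
output "authz_server_names" {
  value = {
    authz = aws_instance.authz.private_dns
    api   = aws_instance.api.private_dns
  }
}

# In the consuming configuration:
resource "aws_ssm_parameter" "authz_server_name" {
  name  = "server_name"
  value = module.authz_server_remote_state.outputs.authz_server_names[var.value_key_name]
  type  = "String"
}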

2023-07-11

2023-07-12

Release notes from terraform avatar
Release notes from terraform
02:43:30 PM

v1.5.3 1.5.3 (July 12, 2023) BUG FIXES: core: Terraform could fail to evaluate module outputs when they are used in a provider configuration during a destroy operation (#33462) backend/consul: When failing to save state, consul CAS failed with transaction errors no longer shows an error instance memory address, but an actual error message….

always evaluate module outputs during destroy by jbardin · Pull Request #33462 · hashicorp/terraform

A module output is generally not used during destroy, however it must be evaluated when its value is used by a provider for configuration, because that configuration is not stored between walks. Th…

Ralf Pieper avatar
Ralf Pieper

What is the best book to learn Terraform?

loren avatar

The Book of Hard Knocks

Ralf Pieper avatar
Ralf Pieper

I read Terraform: Up and Running (Writing Infrastructure as Code) a few years ago, and it looks like it has been updated.

loren avatar

Yeah that’s still a good one IMO

loren avatar

There’s also a number of tutorials now, https://developer.hashicorp.com/terraform/tutorials

Tutorials | Terraform | HashiCorp Developer

Explore Terraform product documentation, tutorials, and examples.

Ralf Pieper avatar
Ralf Pieper

They seem more focused on being an onramp for Terraform Cloud. I think I will reread Terraform: Up and Running, as I am getting rusty and need to learn the new import block.

loren avatar

Yeah maybe. But the terraform cli still works fine. Just ignore the TFC pieces and you can do pretty much all the same things locally

Joe Perez avatar
Joe Perez

I enjoyed the IaC book by Kief Morris https://infrastructure-as-code.com/book/

Book

Exploring better ways to build and manage cloud infrastructure

2
Joe Perez avatar
Joe Perez

I read it later on, but it puts into words what I’ve learned the hard way

Joe Perez avatar
Joe Perez

There’s also this book by Rosemary Wang, but I haven’t gotten around to it yet: https://www.amazon.com/Patterns-Practices-Infrastructure-Code-Terraform/dp/1617298298

1

2023-07-13

Mahesh avatar

Hi All, I am trying to create a VPC using a module:

module "vpc" {
  source = "cloudposse/vpc/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version   = "2.1.0"
  namespace = "eg"
  stage     = "test"
  name      = "app"

  ipv4_primary_cidr_block = "10.0.0.0/16"

  assign_generated_ipv6_cidr_block = false
}

But it is prompting me to enter vpc_id for the plan.

Mahesh avatar

$ terraform plan
var.vpc_id
  VPC ID where subnets will be created (e.g. vpc-aceb2723)

  Enter a value:

Hao Wang avatar
Hao Wang

Seems you need to use -var-file with a fixture var file

Jay avatar

Hi, has anyone been able to successfully run userdata (powershell script) at launch of an EC2 instance on Windows Server 2022 using Amazons base image for this?

Jay avatar

If so, then some examples would be great. I’ve only noticed this since we started provisioning EC2s on Windows Server 2022. The same user data (PowerShell script) works fine on Server 2019

Paul avatar
Hi Jay, we’ve found in our labs that if the formatting of the instance disk is split across 3 lines of code, it breaks the user data script on Windows Server 2022. You might want to try putting it on one line instead (piped through each command if required)
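
For example — a rough sketch of the one-line approach Paul describes (the AMI variable and the exact disk commands are illustrative, not taken from the thread):

resource "aws_instance" "win2022" {
  ami           = var.windows_2022_ami_id # hypothetical variable
  instance_type = "t3.large"

  # Disk formatting chained with pipes on a single line, rather than split
  # across multiple lines, which reportedly breaks on Server 2022.
  user_data = <<-EOF
    <powershell>
    Get-Disk | Where-Object PartitionStyle -Eq 'RAW' | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false
    </powershell>
  EOF
}
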
Big Ste avatar
Big Ste

That’s awesome! Thanks Paul. I’ve been trying to figure this out for months and it’s worked straight away!

1
Jay avatar

I knew it would be something simple

Jay avatar

Thank you both for the help and guidance!

Jay avatar

I’ll be sure to try this and confirm back :)

Jay avatar

Amazing @Paul @Big Ste that worked!

2
Paul avatar

No problem @Jay, glad I could help. It’s tricky getting to the bottom of these strange Windows/Terraform issues

1
Jay avatar

I agree!

2023-07-14

Matt Gowie avatar
Matt Gowie

Hey @jose.amengual – Since you’re the expert, how do you typically run Atlantis? ECS or just on EC2? Do you use the CP module? Any suggestions for success on that front?

cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task

jose.amengual avatar
jose.amengual

ECS

jose.amengual avatar
jose.amengual

A lot of people use Anton’s Atlantis module

jose.amengual avatar
jose.amengual

I use the cloudposse components in atmos

jose.amengual avatar
jose.amengual

the cloudposse module is severely out of date

jose.amengual avatar
jose.amengual

I just declare an ECS cluster and task def using the ghr image and pass the necessary variables

Matt Gowie avatar
Matt Gowie

Gotcha Thanks for the info!

k.naval76 avatar
k.naval76

Hi all, I’m new to this group and Terraform. Can someone help me with the best way to learn Terraform? Any good GitHub repo to do hands-on work and learn?

Chris avatar

Terraform has some great tutorials on the main website: https://developer.hashicorp.com/terraform/tutorials?product_intent=terraform

Tutorials | Terraform | HashiCorp Developer

Explore Terraform product documentation, tutorials, and examples.

k.naval76 avatar
k.naval76

Thanks @Chris for the help. Does that have any hands-on lab exercises, or where can I look for better hands-on practice?

Chris avatar

The tutorials have videos so you can follow along

1
k.naval76 avatar
k.naval76

Thanks buddy

Chris avatar

No problem at all!

2023-07-17

Chris avatar

Is anyone here familiar with the cloudposse/label/null module?

We’ve just started adopting it in a project for consistency and would like to understand the best way to label some resources in a module.

For example, would you use the same label for a lambda function and a security group for the lambda function? Is there any guidance on best practices so as not to run into any naming issues?

Brian avatar

Personally, I lean towards using the same label for most things unless I know there will be collisions. e.g., When I am deploying multiple fns within the same root module, I will have multiple labels.

Brian avatar

This is a module where I only need one label (yopass_label), which is used for all resources. Here is a different module where I use one label (mw_service_label) for most things but have two additional labels (mw_auth_service_label and mw_urlrewrite_service_label) that are used for their respective lambda-fn/service.

susie-h avatar
susie-h

No, but if you figure out how to reference it as a child module in #terragrunt i’m all ears.

Chris avatar

Thanks, Brian. I’ve made use of attributes for specific cases and pass context between modules. Works a treat

1
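
A rough sketch of that pattern with cloudposse/label/null — deriving a more specific label from a shared context via attributes (the names and version here are illustrative):

module "lambda_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  stage     = "test"
  name      = "app"
}

# Reuse the same context, adding attributes to avoid name collisions
module "lambda_sg_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  context    = module.lambda_label.context
  attributes = ["sg"]
}
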
Brian avatar

If anyone is using Yopass for secret sharing over the web, I have a TF module to deploy it to AWS managed/serverless resources. It uses CP’s naming patterns. https://github.com/sgtoj/terraform-aws-yopass

sgtoj/terraform-aws-yopass

Terraform Module to deploy Yopass on AWS serverless technologies

2
Graham avatar

Hi! I’m relatively new to Terraform, but have read “Up and Running”.

I was looking for good templates on how to deploy a full web app end-to-end with best practices and came across the terraform-aws-ecs-web-app module. I was wondering whether people think it’s generally good practice to use something end-to-end like this, or if it’s better to avoid using a pre-packaged module for something as complex as this. Any opinions?

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.

Joe Perez avatar
Joe Perez

I think it’s OK to check out this module and even deploy it, but if you want to understand how things fit together and how to better troubleshoot this stack, I would recommend building it yourself. After that, you can decide to keep your deployment and abstract it into modules as you see fit, or come back to this original public module. I kinda shy away from public modules, but they are great as reference points

1
Graham avatar

Thanks @Joe Perez! Since I asked that question I realized that the template there is a bit out of date so I had to go through and update all the library dependencies, etc., anyway. So it was a great reference point but I ended up needing to do a lot of stuff myself.

1
Joe Perez avatar
Joe Perez

I also forgot to mention that Jerry Chang did a great series on ECS https://www.jerrychang.ca/writing/introducing-aws-ecs-technical-series

Introducing the AWS ECS technical series

A technical series on AWS ECS

kallan.gerard avatar
kallan.gerard

Personally I find it more work to audit and maintain a public module than it is to use the vendor’s provider directly. But they can be great for reference points like Joe said.

Remember you’re totally at the mercy of anything they put in there across releases, plus you’d have to verify that what is tagged on the release on github is actually what’s uploaded to the registry etc.

kallan.gerard avatar
kallan.gerard

Commenting on that module specifically, I think your gut instinct is on the money, it’s trying to do way too much.

But on another note, I’m really not a fan of putting build and deployment in terraform.

kallan.gerard avatar
kallan.gerard

They belong in the application CI & CD imo. Not infrastructure provisioning.

kallan.gerard avatar
kallan.gerard

I’m referring in particular to the bits to do with Docker building, container definitions, releases, etc.

Graham avatar

Great, thanks for the suggestions @kallan.gerard!

2023-07-18

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-security-group

Terraform module to provision an AWS Security Group

Bart Coddens avatar
Bart Coddens

Can I point to another (already existing) security group as a destination ?

Brian avatar

If you’re trying to attach new rules to an existing group, then use target_security_group_id. https://github.com/cloudposse/terraform-aws-security-group#input_target_security_group_id

Brian avatar

If instead you’re trying to add rules for source security group (ie, allow connections from resources that use X security group), you can define source_security_group_id at the individual rule level.

Brian avatar

And finally, if you want to create rules that allow connections from other resources using the same group, then set self to true at the individual rule level.
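
Putting Brian’s three options together — a rough sketch against cloudposse/terraform-aws-security-group (attribute names should be double-checked against the module’s README; the values are illustrative):

module "app_sg" {
  source = "cloudposse/security-group/aws"
  # version = "x.x.x"  # pin per Cloud Posse's guidance

  vpc_id = var.vpc_id

  # To attach rules to an already-existing group instead of creating one,
  # pass target_security_group_id (and disable group creation) per the README.

  rules = [
    {
      key                      = "https-from-other-sg"
      type                     = "ingress"
      from_port                = 443
      to_port                  = 443
      protocol                 = "tcp"
      cidr_blocks              = []
      source_security_group_id = var.other_security_group_id
      self                     = null
      description              = "Allow HTTPS from an existing security group"
    }
  ]
}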

Bart Coddens avatar
Bart Coddens

like this:

Bart Coddens avatar
Bart Coddens
Alex Atkinson avatar
Alex Atkinson
Alex Atkinson avatar
Alex Atkinson

Oh, the azure provider docs are gone also…. Hmmm.

Nat Williams avatar
Nat Williams

https://registry.terraform.io/ cloudfront misconfig?

Alex Atkinson avatar
Alex Atkinson

They fixed it. :)

Alex Atkinson avatar
Alex Atkinson
Terraform Registry UI Errors

HashiCorp Services’s Status Page - Terraform Registry UI Errors.

1

2023-07-19

Release notes from terraform avatar
Release notes from terraform
05:33:33 PM

v1.6.0-alpha20230719 1.6.0-alpha20230719 (Unreleased) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how Terraform tests are written and executed. Terraform tests are now written within .tftest files, controlled by a series of run blocks. Each run block will execute a Terraform plan or apply command against the Terraform configuration under test and can execute conditions against the resultant plan and…

Release v1.6.0-alpha20230719 · hashicorp/terraform

1.6.0-alpha20230719 (Unreleased) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how T…

2023-07-20

Frank avatar

Hi,

I’m currently reworking our Terraform setups, which currently use the AWS provider with assume_role. Now I want to move this over to use OIDC instead, so the assume_role needs to become assume_role_with_web_identity. This works fine in our pipelines; however, it breaks running Terraform locally (we usually run a plan before committing / creating an MR).

I’m not sure yet as to what the best approach would be to ensure that CI uses OIDC and local uses the “old” method, except for keeping the original assume_role in the provider config and adding a script in the pipeline that replaces that before running Terraform commands. But it feels like a bit of a dirty workaround.

Any ideas how to tackle this issue?

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

terraform init -backend-config rather than hardcoding either auth method

Frank avatar

Isn’t that just for state backends?

1
Imran Hussain avatar
Imran Hussain

would dynamic blocks be of help ?

Frank avatar

@Imran Hussain It doesn’t appear so, atleast not according to https://support.hashicorp.com/hc/en-us/articles/6304194229267-Dynamic-provider-configuration

Dynamic provider configuration

  Current Status While a longtime requested feature in Terraform, it is not possible to use count or for_each in the provider configuration block in Terraform.   Background Much of the reasoning be…

Imran Hussain avatar
Imran Hussain

Maybe I misread this then

Imran Hussain avatar
Imran Hussain

A dynamic block can only generate arguments that belong to the resource type, data source, provider or provisioner being configured. It is not possible to generate meta-argument blocks such as lifecycle and provisioner blocks, since Terraform must process these before it is safe to evaluate expressions.

Imran Hussain avatar
Imran Hussain
Dynamic Blocks - Configuration Language | Terraform | HashiCorp Developer

Dynamic blocks automatically construct multi-level, nested block structures. Learn to configure dynamic blocks and understand their behavior.

Imran Hussain avatar
Imran Hussain

A quick test is if the provider resource accepts dynamic blocks

Imran Hussain avatar
Imran Hussain
provider "aws" {
  dynamic assume_role {
    for_each = var.local ? [1] : []
    content {
    role_arn     = "arn:aws:iam::123456789012:role/ROLE_NAME"
    session_name = "SESSION_NAME"
    external_id  = "EXTERNAL_ID"
   }
  }
}
Imran Hussain avatar
Imran Hussain

did not throw an error when I did an init and a validate

Frank avatar

Wow, and plan doesn’t mind that either

Imran Hussain avatar
Imran Hussain

you have to create a variable by default which is set to local = false

Imran Hussain avatar
Imran Hussain

then have the two dynamic blocks defined

Imran Hussain avatar
Imran Hussain

By default it’s set to false, but when you run locally you can pass in the value or set it via an environment variable, whatever floats your boat

Imran Hussain avatar
Imran Hussain

Let me know how it pans out

Frank avatar

Thanks for your help @Imran Hussain

Frank avatar

This is what I ended up with:

provider "aws" {
  region = var.aws_region

  dynamic "assume_role" {
    for_each = var.oidc_web_identity_token == null ? [1] : []
    content {
      role_arn     = "arn:aws:iam::${local.account_id}:role/${var.role_name}"
      session_name = var.session_name
    }
  }

  dynamic "assume_role_with_web_identity" {
    for_each = var.oidc_web_identity_token != null ? [1] : []
    content {
      role_arn           = "arn:aws:iam::${local.account_id}:role/${var.role_name}"
      session_name       = var.session_name
      web_identity_token = var.oidc_web_identity_token
    }
  }
}
Frank avatar

Seems to work fine, both locally and in CI so problem solved

Imran Hussain avatar
Imran Hussain

cool

Imran Hussain avatar
Imran Hussain

glad I could help

loren avatar

You could also give the CI a role via oidc that only has permissions to assume the execution role

1
kallan.gerard avatar
kallan.gerard

Seems like you should just use environmental variables

kallan.gerard avatar
kallan.gerard

AssumeRole is a nightmare in AWS, IAM Identity Center is much better from a usability perspective

loren avatar

AssumeRole is easy! And not mutually exclusive with identity center. I use both, and also oidc. Depends on the use case

kallan.gerard avatar
kallan.gerard

But I’d just take all that code out of your aws provider entirely

loren avatar

No thanks

kallan.gerard avatar
kallan.gerard

And ensure the environmental variables are set in the process that runs the tf

loren avatar

Also no thanks

kallan.gerard avatar
kallan.gerard

Why?

loren avatar

Credentials in environment variables are an anti-pattern to me, and also do not support configs that require multiple aws providers. To standardize, I use assume_role blocks almost everywhere, and the credential executing terraform only needs permissions to assume those roles. Easy peasy, and works for every use case, local or CI

aws1
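
A minimal sketch of the pattern loren describes (account IDs and role names are made up): each provider block assumes an execution role, so the outer credential only needs sts:AssumeRole on those roles:

provider "aws" {
  alias  = "network"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform-execution"
  }
}

provider "aws" {
  alias  = "workloads"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform-execution"
  }
}
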
kallan.gerard avatar
kallan.gerard

Sure if you’ve got cross account requirements that’s a whole other kettle of fish

2
kallan.gerard avatar
kallan.gerard

Credentials in environmental variables is exactly how aws-sso exec passes them down to the command

aws1
kallan.gerard avatar
kallan.gerard

Or you pass the aws profile env var

loren avatar

Yeah I’ll never use aws-sso lol

kallan.gerard avatar
kallan.gerard


Configuration for the AWS Provider can be derived from several sources, which are applied in the following order:

Parameters in the provider configuration
Environment variables
Shared credentials files
Shared configuration files
Container credentials
Instance profile credentials and region
It’s standardising aws identity across your entire job vs standardising aws identity within tf but not outside.

It’s a bit rich to call something an antipattern

loren avatar

Feel free to do you.

kallan.gerard avatar
kallan.gerard

You might want to work on your soft skills.

1
loren avatar

You asked why, I answered. That is my reason. Not sure why it’s upset you

kallan.gerard avatar
kallan.gerard

Feel free to not drop dismissive “no thanks” on people who aren’t even talking to you.

1
loren avatar

I’m not making you do it my way.

kallan.gerard avatar
kallan.gerard

Like I said, soft skills, you don’t snipe in on other peoples responses to someone else in a group conversation in that sort of tone

1
kallan.gerard avatar
kallan.gerard

If it makes you feel any better I don’t think your way of doing it is a bad solution at all

1
kallan.gerard avatar
kallan.gerard

But I definitely wouldn’t want to work with someone with that sort of way of getting their input across

1
loren avatar

likewise

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think your approach is clearly a bit of a hot take Loren. Using environment variables with the AWS SDK is very common, and assume role for same account access is uncommon. You can do it your way but don’t act surprised when people question it

Mahesh avatar

hi ,

Mahesh avatar

Hi All, I am trying to create a VPC along with the dynamic-subnets module:

locals {
  vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-2a", "us-east-2b", "us-east-2c"]
  use_az_ids             = true
  az_name_map = {
    "us-east-1a" = "AZ-1",
    "us-east-1b" = "AZ-2",
    "us-east-1c" = "AZ-3",
    "us-east-2a" = "AZ-4",
    "us-east-2b" = "AZ-5",
    "us-east-2c" = "AZ-6"
    # Add more mappings for your availability zones
  }
}

module "vpc" {
  source = "cloudposse/vpc/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version   = "2.1.0"
  namespace = "eg"
  stage     = "test"
  name      = "app"

  ipv4_primary_cidr_block = "10.0.0.0/16"

  assign_generated_ipv6_cidr_block = false
}

module "dynamic_subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  namespace          = "eg"
  stage              = "test"
  name               = "app"
  availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = ["10.0.0.0/16"]
}

Mahesh avatar

But I am getting this error (repeated three times):

│ Error: Invalid index
│
│   on .terraform\modules\dynamic_subnets\outputs.tf line 9, in output "availability_zone_ids":
│    9:     for az in local.vpc_availability_zones : local.az_name_map[az]
│     ├────────────────
│     │ local.az_name_map is map of string with 6 elements
│
│ The given key does not identify an element in this collection value.

Brian avatar

I think there is missing context in what you provided because this works.

locals {
  vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-2a", "us-east-2b", "us-east-2c"]
  az_name_map = {
    "us-east-1a" = "AZ-1",
    "us-east-1b" = "AZ-2",
    "us-east-1c" = "AZ-3",
    "us-east-2a" = "AZ-4",
    "us-east-2b" = "AZ-5",
    "us-east-2c" = "AZ-6"
  }
}

output "test" {
  value = [
    for az in local.vpc_availability_zones : local.az_name_map[az]
  ]
}

The results of the terraform apply

Changes to Outputs:
  + test = [
      + "AZ-1",
      + "AZ-2",
      + "AZ-3",
      + "AZ-4",
      + "AZ-5",
      + "AZ-6",
    ]

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Mahesh avatar

Adding the dynamic_subnets module creates the error

Mahesh avatar

module "dynamic_subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  namespace          = "eg"
  stage              = "test"
  name               = "app"
  availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = ["10.0.0.0/16"]
}

Brian avatar

My only conclusion is the aws provider is not scoped to us-east-2, but you’re intending to use that region. The map is built via aws_availability_zones in the cloudposse/dynamic-subnets/aws module. This is what it says…
The Availability Zones data source allows access to the list of AWS Availability Zones which can be accessed by an AWS account within the region configured in the provider.
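
If that is the cause, the fix is just to scope the provider (or an aliased provider passed into the module) to the intended region — a minimal sketch:

provider "aws" {
  region = "us-east-2"
}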

Mahesh avatar

Thanks a lot Brian, this works

1
Mahesh avatar

In spite of the locals being defined, it was throwing an error

2023-07-21

Juan Soto avatar
Juan Soto

Hi, is anybody using https://www.winglang.io/?

Wing Programming Language for the Cloud

Wing is a cloud-oriented programming language. Most programming languages think about computers as individual machines. In Wing, the cloud is the computer.

Danny avatar

wow this is super interesting. Seems similar to pulumi but with the addition of targeting multiple clouds

jonjitsu avatar
jonjitsu

looks exactly like pulumi especially when you consider pulumi-cloud which has cloud agnostic components

Danny avatar

The more i look into it the more unique it is from pulumi

Danny avatar

• looks like it tries to bridge the app code to be right next to the infra

• more opinionated in that you dont need to provision IAM and it handles least privileges out of the box to name a few

Danny avatar

i could see it being really cool, but would be very hesitant to use this for any production application

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep in mind that Winglang is a net-new language. It’s not an SDK like Pulumi that lets you bring your own language.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s like HCL (in that HashiCorp invented it), and like Pulumi (in that it feels more like a real programming language than HCL).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

At least, this was as of a few months ago when we got a demo.

1

2023-07-22

jonjitsu avatar
jonjitsu

Anyone have any articles, books or videos discussing approaches and tooling for dealing with large amounts of infrastructure code. I’m looking for experiential opinions on:

  • dealing with a lot of terraform workspaces and their potential interdependencies
  • module versioning approaches (use a proper registry or just use git sources)
  • keeping workspaces up to date
    • terraform version: often something is built and then activity on that thing stops; someone goes in a few years later to do something and realizes it was built with terraform 0.0.1, which doesn’t even exist for your new M1 Mac
    • provider versions: upgrading providers causes a property to become a resource
  • apply approaches: apply from workstation or use something like atlantis to have somewhat a log of applies along with some group review
  • monorepo vs 1 repo per module vs 1 repo per module group
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do all of that (or almost all) with https://atmos.tools (but we don’t have a book written on the topic yet ). If you are interested in trying Atmos, let us know, we will try to answer the questions one by one (and if you are, we have atmos channel)

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

jonjitsu avatar
jonjitsu

Thanks, I’m actually currently going through the docs for atmos now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, #office-hours is a great resource to ask questions live. Every wednesday.

1
Chris Dobbyn avatar
Chris Dobbyn

We use a monorepo for our IaC (modules are managed separately). That monorepo contains some yaml files defining stuff we share across workspaces. We try to not share outputs across workspaces as this quickly gets unmanageable. It’s easier to use descriptive data blocks.

The more infrastructure you have, the more you will realize it’s easier to maintain smaller logical subsections of infrastructure. For example if we deploy Postgres on RDS that would have the KMS key, log group, aws backup, proxy, maybe some extra region we transfer data to, etc.

While it’s easier to start to have one big state for everything, it quickly will cost you a lot in API calls and wasted time.

jonjitsu avatar
jonjitsu

Thanks for the response Chris. So your monorepo has only iac that creates “live” concrete infrastructure and that infrastructure is made up of modules that are separated out into their own respective repositories? Are the module repositories 1 repo per module or do you have multi-module repos too? Do you use a private registry or git source references in your live monorepo? What do you mean by share outputs across workspaces? Is a descriptive data block a well documented tfvars file?

Chris Dobbyn avatar
Chris Dobbyn

We started with one monorepo for all modules but found that applying GitHub actions was more and more complicated as time went on. Eventually we separated out individual modules, and are still separating things today.

We use terraform to manage GitHub to make the setup of this much less complicated on ourselves.

Chris Dobbyn avatar
Chris Dobbyn

We used to go get output from independent states, and while this is a thing you can do it gets complicated over time.

Instead we define things as much as we can in static yaml files that both states can see (in our monorepo), then we set up a resource in one, and go and get it using data blocks in another.

Either way adds complexity but we’re using terraform cloud and we found this to be more complex than we were willing to manage.
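
A rough sketch of that pattern (the file path, key, and tag values are illustrative): both workspaces read the same YAML from the monorepo, one creates the resource, and the other looks it up with a data block instead of reading remote state:

locals {
  shared = yamldecode(file("${path.module}/../shared/networking.yaml"))
  # networking.yaml might contain, e.g.:  vpc_name: core-vpc
}

# In the consuming workspace, look the VPC up by tag rather than by
# reading the producing workspace's outputs.
data "aws_vpc" "core" {
  tags = {
    Name = local.shared.vpc_name
  }
}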

Muhammad Taqi avatar
Muhammad Taqi

Hey folks, I’m using the RDS module and getting Error: creating RDS DB Instance (dev-devdb): InvalidParameterValue: Invalid master user name. Is this related to the database_user variable?

Hao Wang avatar
Hao Wang

Is it admin that you used?

Hao Wang avatar
Hao Wang

It is reserved by AWS

1
Chris avatar

Has anyone had any good/bad experiences with Terragrunt? We’re considering adopting it to be more DRY.

loren avatar

I enjoy using it. I find a lot of cases where its features are valuable to me. Especially managing a lot of similar deployments to a lot of different accounts over a long time. Being able to generate terraform dynamically becomes pretty powerful. And it gives you simple ways to provide common inputs to many different root modules.
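
For instance — a minimal sketch of Terragrunt’s generate block (the provider contents are illustrative), which is one way the “generate terraform dynamically” part shows up:

# terragrunt.hcl
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<-EOF
    provider "aws" {
      region = "us-east-1"
    }
  EOF
}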

loren avatar

I haven’t had anything I’d call a bad experience yet. Though I suppose, the abstraction can make it easy to overcomplicate things.

Chris avatar

Thanks @loren

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Shameless plug, also consider atmos. See what resonates with you. https://atmos.tools (it’s by Cloud Posse and what we use).

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

2
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, we have a #terragrunt channel. Might get more responses in there re: terragrunt.

Chris avatar

Thanks @Erik Osterman (Cloud Posse). Will investigate Atmos too

2023-07-23

2023-07-24

Brian avatar

Should the name field for each account be the same value as stage in the account component module (aws-terraform-components)?

Full context in the

Brian avatar

I am provisioning a new aws org with latest core (cold start) components: tfstate-backend, account, account-map, aws-sso, aws-teams, and aws-team-roles. I believe there is some conflict with the documentation between a subset of those components.

Specifically, the account readme has the name field for the accounts be {tenant}-{stage}. However, when deploying aws-sso, it looks for the name of the root account via the stage field instead of the name field configured by account-map (which says its root_account_account_name should be the stage of the root account).

   on main.tf line 44, in locals:
│   44:   root_account = local.account_map[module.account_map.outputs.root_account_account_name]
│     ├────────────────
│     │ local.account_map is object with 5 attributes
│     │ module.account_map.outputs.root_account_account_name is "root"
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

1
Brian avatar

Thank you @Gabriela Campana (Cloud Posse). I already resolved it. There was a combination of problems, one of which was an incorrect description for a variable in the account-map component. I submitted and merged a PR that updated the description.

Fixing the *_account_account_name variable descriptions, setting values for those variables accordingly, and explicitly defining stack in descriptor_formats (see below) in the stack configurations resolved all the problems I had when running a cold start with the latest components.

  descriptor_formats:
    stack:
      labels: ["environment", "stage"]
      format: "%v-%v"

I didn’t submit any PRs for the second fix because I am not sure where it needs to be documented. And I assume it’s already documented in the private Reference Architecture documentation (I don’t have access).

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

All the defaults should work if you are not using tenant. If you are using tenant, our standard settings are

descriptor_formats:
  account_name:
    format: "%v-%v"
    labels:
    - tenant
    - stage
  stack:
    format: "%v-%v-%v"
    labels:
    - tenant
    - environment
    - stage
Brian avatar

Thank you @Jeremy G (Cloud Posse). That is also what I learned a few weeks ago.

jose.amengual avatar
jose.amengual

Terraform cloud question :

jose.amengual avatar
jose.amengual

I used TF open source with workspaces all the time, but TFC workspaces are a bit different. I read the docs; from what I understand you can use the remote backend to use TFC, which is recommended for CLI-driven deployments, but there is also the terraform cloud block config that is used more for TFC UI-driven deployments

is that a correct assumption?

jose.amengual avatar
jose.amengual

so you can do this :

terraform {
  cloud {
    organization = "example_corp"
    ## Required for Terraform Enterprise; Defaults to app.terraform.io for Terraform Cloud
    hostname = "app.terraform.io"

    workspaces {
      tags = ["app"]
    }
  }
}

OR this :

{
  "terraform": {
    "backend": {
      "remote": {
        "organization": "pepeinc",
        "workspaces": {
          "prefix": "tagging"
        }
      }
    }
  }
}

that is a backend.tf file in JSON format. I think it is not possible to do both, but AFAIK if you do cloud {} the apply/plan will have to happen in the UI

Chris Dobbyn avatar
Chris Dobbyn

The terraform cloud block lets you connect your workspace (in Terraform Cloud) with your local configuration so that the backend will be cloud. It works very differently from workspaces in the CLI, which is unfortunate naming on their part.
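
A small sketch of the CLI-driven setup being discussed (organization and workspace names are made up): with a cloud block pinned to a named workspace, terraform plan and terraform apply run from the CLI still execute as remote runs in Terraform Cloud rather than requiring the UI.

terraform {
  cloud {
    organization = "pepeinc"
    hostname     = "app.terraform.io"

    workspaces {
      # A single named workspace behaves closest to the old "remote" backend
      # with a prefix; CLI-driven plan/apply work and stream the remote run.
      name = "tagging-prod"
    }
  }
}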

Chris Dobbyn avatar
Chris Dobbyn

Remind me in the morning and I can explain more if you want.

jose.amengual avatar
jose.amengual

no problem

jose.amengual avatar
jose.amengual

thanks

2023-07-25

2023-07-26

Charles Rey avatar
Charles Rey

Hello all, quick question on Terraform deployed via GitHub Actions to AWS. I'm looking for a complete guide on how to do it correctly and what to include/exclude. Would anyone have any suggestions please? Some documents mention Route 53 config, others don't, so it's all very confusing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There are a lot of solutions these days for how to use GHA. No two will be equal. We also have our solution for atmos with terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our solution requires an S3 bucket and a DynamoDB table used for storing the planfile (not to be confused with the state file).
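
Roughly what that implies on the storage side (names are placeholders and this is not the actual Cloud Posse component, which defines its own naming and schema): an S3 bucket to hold rendered planfiles and a DynamoDB table for metadata about them.

resource "aws_s3_bucket" "planfiles" {
  bucket = "example-gha-terraform-planfiles"
}

resource "aws_dynamodb_table" "planfile_metadata" {
  name         = "example-gha-terraform-planfile-metadata"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}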

Josh Pollara avatar
Josh Pollara

Building the solution out yourself is dangerous. I wrote a blog post on this exactly. It even references an original cloud posse post https://terrateam.io/blog/cloud-posse-what-about-github-actions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Except that now we've done it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So it's no longer dangerous to use GHA natively with Terraform

2
Charles Rey avatar
Charles Rey

Thank you Erik and Team, appreciate the help

Sergei avatar

So the answer to the question ‘what is the complete guide on how to do it correctly’ is ‘use atmos’…

Josh Pollara avatar
Josh Pollara

Could be. Depends if you need enterprise-level features and support.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Sergei the complete guide is what we offer as our service. We haven’t yet had a chance to release a simple (free) quick start for the GitHub Actions, but it is our plan. For now, like everything else we publish, it’s free to use and do what you will. Or not.

Sergei avatar

Thank you @Erik Osterman (Cloud Posse), appreciate that

Justin Picar avatar
Justin Picar

Hey all! Had a question about Terraform and AWS IAM, specifically using the aws_iam_policy resource.

Context: When I run terraform apply to update a policy, AWS performs 2 operations in the background: CreatePolicyVersion and DeletePolicyVersion (since I’m always at the customer managed-policy limit of 5). According to the AWS doc https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicyVersion.html, the API call has the request parameter “SetAsDefault”.

Question: is it possible to configure Terraform so that running terraform apply on the modified aws_iam_policy resource has AWS run CreatePolicyVersion without setting the new policy as default?

CreatePolicyVersion - AWS Identity and Access Management

Creates a new version of the specified managed policy. To update a managed policy, you create a new policy version. A managed policy can have up to five versions. If the policy has five versions, you must delete an existing version using DeletePolicyVersion before you create a new version.
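
For reference, a minimal example of the resource being discussed (name and policy document are illustrative). As far as I can tell the aws_iam_policy resource exposes no argument mapping to SetAsDefault, so any edit to policy produces a new version that becomes the default, with the CreatePolicyVersion/DeletePolicyVersion calls described above once the five-version limit is reached.

resource "aws_iam_policy" "example" {
  name = "example-app-policy"

  # Changing this document and running `terraform apply` triggers a new
  # policy version, which the provider sets as the default.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "arn:aws:s3:::example-bucket/*"
    }]
  })
}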

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Release notes from terraform avatar
Release notes from terraform
06:53:33 PM

v1.5.4 1.5.4 (July 26, 2023) BUG FIXES: check blocks: Fixes crash when nested data sources are within configuration targeted by the terraform import command. (#33578) check blocks: Check blocks now operate in line with other checkable objects by also executing during import operations. (…

Release v1.5.4 · hashicorp/terraformattachment image

1.5.4 (July 26, 2023) BUG FIXES:

check blocks: Fixes crash when nested data sources are within configuration targeted by the terraform import command. (#33578) check blocks: Check blocks now opera…

Fix crash when nested data blocks are mixed with the import command by liamcervante · Pull Request #33578 · hashicorp/terraformattachment image

This PR fixes a crash that occurs when the terraform import command is used against configuration that has a nested data block defined within a check block. Essentially, the nested data block still…

tommy.walker avatar
tommy.walker

Hi folks,

Anyone have the bandwidth to opine a little bit for someone just starting out with Terraform? We're a shop that has lots of AWS stuff built out on CloudFormation, and we are just starting to build out stuff in Terraform. We're getting to the point where we need to design a platform for running our Terraform, figuring out where we'll save state, etc. More in the thread!

tommy.walker avatar
tommy.walker

First we built a (brand new to us) OpenSearch module in AWS with Terraform (after our hosted Elastic went belly-up with Qbox. Oy!). That went well and the OpenSearch is a nicely modular piece of infra, so that was easy to do.

Next, we are working to build a big module that will spin up a new EKS cluster for us. We currently run 3 or 4 clusters and we do upgrades in place… which is hard. We're excited to start using immutable infra.

We did a trial of Spacelift and definitely liked it - but - we aren't ALL in Terraform (we have a fair amount of CloudFormation, bash, Python, etc. scripts and workflows that we won't migrate out of completely anytime soon), so Spacelift was a bit less flexible than we wanted.

I’m tempted to just set up build pipelines in (self hosted) Bamboo, store our state in S3 and run from there. At least, until we have more experience and things get more complex.

Q1: Any reasons you would recommend against our starting off with pipelines in Bamboo & state in S3?

Q2: If you were starting over, what would you do now to set yourselves up well for the future? Or what do you wish you knew then that you know now?

Q3: I'm thinking about running pipelines that build logical pieces of our infra - for instance "OpenSearch", "EKS-cluster", and then "IAM roles, users and policies" - and each of these would end up with its own separate state file (see the sketch below). Is that how it's done?
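
What Q3 describes is simply each root module pointing at its own key in a shared state bucket; a minimal sketch, assuming an S3 backend with placeholder bucket and table names:

# opensearch/backend.tf
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "opensearch/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"
    encrypt        = true
  }
}

# eks-cluster/backend.tf would use the same bucket with a different key,
# e.g. key = "eks-cluster/terraform.tfstate", so each logical piece keeps
# its own independent state file.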

Alex Jurkiewicz avatar
Alex Jurkiewicz

The question can sorta be rephrased as: how complex should our solution be? And the answer depends heavily on the size of your business and the number of engineers you have working on infra. If you're a small consultancy, and you have one engineer writing terraform, and this won't change for a few years, just do whatever fits in with your existing workflows the best

1
tommy.walker avatar
tommy.walker

thx @Alex Jurkiewicz - yeah, we’re a 200 person nonprofit - 16 eng total, 2 of those are infra. That makes sense - start off simple.

Alex Jurkiewicz avatar
Alex Jurkiewicz

My experience of nonprofits is that turnover is relatively high. So simplicity and clarity of solution are key

Joaquin Menchaca avatar
Joaquin Menchaca

How could I apply indent to a multiline string in a Terraform template? The following only indents the first line.

grafana:
  dashboards:
    default:
      dgraph-control-plane-dashboard:
        json: |
           ${indent(8, "${dashboard_dgraph_control_plane}")}
Alex Jurkiewicz avatar
Alex Jurkiewicz

Don’t do string templating for json or yaml. Use yamlencode instead

Joaquin Menchaca avatar
Joaquin Menchaca

Grafana can load dashboards in yaml now?

Alex Jurkiewicz avatar
Alex Jurkiewicz

no, template the entire snippet you’ve pasted

Alex Jurkiewicz avatar
Alex Jurkiewicz
locals {
  dashboard = yamlencode({
    grafana = {
      dashboards = {
        default = {
          "dgraph-control-plane-dashboard" = {
            json = jsonencode(local.dashboard_dgraph_control_plane)
          }
        }
      }
    }
  })
}
1
this1

2023-07-27

Mahesh avatar

Team, I am wondering if we can assign a custom endpoint (manually created) to RDS cluster writer instances via terraform

Mahesh avatar

hi Alex, I had referred to the URL. However, I would like to assign a custom endpoint which was created manually.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Are you wanting to import existing infra into your configuration? https://developer.hashicorp.com/terraform/cli/import

Import | Terraform | HashiCorp Developerattachment image

Terraform can import and manage existing infrastructure. This can help you transition your infrastructure to Terraform.

Mahesh avatar

Let me check if we can assign the imported custom endpoint. Thank you..
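
If this is an Aurora cluster, a manually created custom endpoint maps to the aws_rds_cluster_endpoint resource and can be imported by its endpoint identifier; a sketch with made-up identifiers:

resource "aws_rds_cluster_endpoint" "writer_custom" {
  cluster_identifier          = "my-aurora-cluster"
  cluster_endpoint_identifier = "writer-custom"
  custom_endpoint_type        = "ANY"

  # Pin the custom endpoint to specific (e.g. writer) instances
  static_members = ["my-aurora-cluster-instance-1"]
}

Adopting the existing endpoint instead of recreating it would then look like terraform import aws_rds_cluster_endpoint.writer_custom writer-custom.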

2023-07-28

Graham avatar

Hi! I have a conceptual question about deploying multiple lightly-dependent services through terraform. Specifically I’d like to know whether I should 1. run terraform apply separately on each service, or 2. create one monolithic terraform file (with modules) and run terraform apply once on this. Details in the thread.

Graham avatar

Let’s say we have an org that has two web apps, which results in 6 pieces of infrastructure:

  • iam
  • db
  • backend_a (relies on security groups from iam and db)
  • frontend_a (relies on backend_a)
  • backend_b (relies on security groups from iam and db)
  • frontend_b (relies on backend_b)

All of these live in separate directories with their own main.tf, outputs.tf, and variables.tf.

My question is, should I:

  1. run terraform apply separately for each service, and pass variables to the dependent services with terraform_remote_state
  2. create a super-directory that treats each of these as modules and only run terraform apply once on this super-directory

I think if everything goes right, “2.” seems easier because if I do something that requires a change up-stream (e.g. change the db configuration), it will automatically propagate downstream without me having to remember to re-run everything. However, I’m worried about both speed (terraform plan/apply may take forever) and fragility (if I break something it has the potential to break all our infra at once).

I was wondering if anyone had any wisdom about this? Thanks a lot in advance!

Dominique Dumont avatar
Dominique Dumont

I would try option 2:

• remote_state is flaky: its structure depends on how the other stack is organised. This creates coupling.

• your stack does not look that big

2
Dominique Dumont avatar
Dominique Dumont

If option 1 becomes necessary, I would use terragrunt to pass information between stacks. HTH

Graham avatar

Thanks a bunch!

Alex Jurkiewicz avatar
Alex Jurkiewicz

I would go with the first option. There are a few principles why:

  1. Terraformed resources should be split up into different configurations by lifecycle. The lifecycle of your base infra is very different to that of the services living on top
  2. Smaller configurations are better. Smaller blast radius, faster plan/apply, clearer boundaries
this1
Alex Jurkiewicz avatar
Alex Jurkiewicz

If you look at the example project layouts recommended in Terraform docs, you’ll find an example very similar to your own, with a vpc, db, and application. They are divided into three separate configurations

Graham avatar

Thank you! I guess it sounds like both are reasonable, and we’ll just have to choose which one seems most appropriate for our situation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think @Alex Jurkiewicz sums it up perfectly. By lifecycle is the most scalable way to organize it. While the other suggestions are reasonable, they set a project up for long-term failure, because while most projects start small, they end up growing. It's true that depending on terraform remote state makes modules more flaky; the alternative is to use a k/v store like parameter store.
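
A minimal sketch of the parameter store approach (paths and the aws_db_instance.main reference are illustrative): the upstream root module publishes a value to SSM, and downstream root modules read it back without depending on the upstream state file's structure.

# In the "db" root module: publish the value
resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/infra/db/endpoint"
  type  = "String"
  value = aws_db_instance.main.address # assumed upstream resource
}

# In a consuming root module: read it back
data "aws_ssm_parameter" "db_endpoint" {
  name = "/infra/db/endpoint"
}

locals {
  db_endpoint = data.aws_ssm_parameter.db_endpoint.value
}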

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We organize components by lifecycle in our strategy. https://atmos.tools/core-concepts/components/library/#terraform-conventions

Component Library | atmos

A component library is a collection of reusable building blocks.

Graham avatar

Great, thanks a bunch @Erik Osterman (Cloud Posse)

kallan.gerard avatar
kallan.gerard

While I agree cross-configuration dependencies are a real problem with no great way to solve them in vanilla Terraform, I think it's important to identify when something is a problem with TF and when it's a symptom of incorrect boundaries. Misaligned boundaries are a major problem for any system, not just TF.

For example, there are concepts like vertically sliced, horizontally sliced, and lifecycle sliced, amongst others. These styles can also be nested.

Vertical slicing would be where everything for a service is contained within one boundary, so service A's database, IAM, frontend and backend would be contained within one configuration.

Horizontally sliced would be where you would have 4 configurations, one for IAM, one for dbs, one for frontends and one for backends.

Lifecycle sliced would depend, and could be nested inside either horizontally or vertically sliced.

kallan.gerard avatar
kallan.gerard

I'd also be wary of thinking of it as a Terraform-scoped problem, as these sorts of dependency issues aren't limited to TF, and it's where things like inversion-of-control patterns really become essential.

kallan.gerard avatar
kallan.gerard

Generally I'd say that if it's showing itself as a significant problem, that leads me to believe there are some design problems that need to be fixed

kallan.gerard avatar
kallan.gerard

To give you a more concrete answer to your original example, and given some assumptions, I would have:

• Service A and service B as separate vertical slices

• Service A's IaC contains its IAM, DB, and whatever is in the backend and frontend config (what would you actually put in here, though?)

• And within each vertical slice, likely separated by lifecycle, which may be ( IAM ) > ( DB ) > ( frontend and backend )

• Each vertical slice (service) would have its own pipeline where TF apply would be run every time, but each stage would only run if its tf code or a parent's tf code changed.

• These vertical slices would likely live with the service's application repo, or at least within their own repo. So service A's tf wouldn't be in the same repo as service B's tf unless they were already monorepo'd.

• I'd keep application-level configuration out of the tf entirely. I wouldn't use TF to deploy docker images, builds, bundling, set deployment config or image tags, etc.

2023-07-29

2023-07-30

2023-07-31
