#terraform (2023-07)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-07-01
Hey guys, appreciate any help with this: https://stackoverflow.com/questions/74904283/how-to-pass-variables-between-terragrunt-workspaces
I've applied the networks-vpc workspace first and refreshed state, which shows the outputs as well. However, I'm unable to pass the subnet values into the "ec2-amz-lnx" workspace. Not sure what I'm doing wrong. Happy to provide any further info if needed. Thanks
I've heard of Terragrunt but never used it, is it possible not to use it?
Unfortunately not. The whole reason Terragrunt was created was to address the drawbacks of Terraform in keeping configuration DRY (ref: https://terragrunt.gruntwork.io/)
Terragrunt is a thin wrapper for Terraform that provides extra tools for keeping your Terraform configurations DRY, working with multiple Terraform modules, and managing remote state.
Can Terraform modules help DRY?
I’ve used terragrunt quite a bit to pull state from another workspace, typically S3.
Are you pulling state with something like:
data "terraform_remote_state" "vpc" {
backend = "s3"
config = {
bucket = "some-terraform-state-s3-bucket"
key = "full/path/${var.env}/${var.env}-vpc/terraform.tfstate"
region = "us-west-2"
}
}
Then referencing in your config with something like:
data.terraform_remote_state.vpc.outputs.vpc_id
?
yeah
If the wrapper is the problem, you may need to test whether the code works without the wrapper
2023-07-02
2023-07-03
Heya terraformers,
I would like to create an external data source but I see no timeouts… Is there a default timeout for reading this data source type? I have a script that is likely to run for about 10 minutes or so, and I am hoping that my build does not fail due to a shorter timeout period
Hey mate, sounds like there’s probably some better ways of doing what you’re trying to do, but as far as I know there’s no timeout on the external data source.
Where is the terraform running?
Hey,
Terraform is running on a Jenkins worker. I am trying to bake AMIs using Image Builder whenever we commit to master. The AWS provider does not have a resource for starting image builds, so I would like to have an external data source do that and then pass the AMI name back to Terraform so it can be set in a launch template.
I will try it on my local env and see how it goes. The builds take about 40 minutes and I wouldn’t want them failing
I would strongly advise against doing that. I never recommend wrapping a build process within Terraform.
There’s a few different ways you could tackle it, but as a simplified scenario can you not just build the AMI in one job and pass the ami id to Terraform in a second job.
I’m not really familiar with Jenkins, but whatever construct they use for stages of execution
How would you approach this from whatever CI tool you’d use?
What’s the particular circumstances that kick off this whole process
commit change to master branch
As in, what is the business reason for building a new ami and updating a template
Okay so taking a step back what causes a new commit to the default branch
So we have AWS instances which serve as VPN peers to partner networks. Whenever we need to establish a new VPN tunnel with a new peer, we add the parameters to our repo and upon merging, CICD should trigger the creation of a new instance. This new instance is supposed to have certain tools installed inside (packages, SSL certs, config files rendered by chef, etc). This whole process used to take a bit of time, so we decided to bake all the dependencies into an AMI, and then update the launch instances with the new AMI and tag it with the commit hash (alongside the other tags that we normally use)
So are the new parameters used within the ami image itself,
As in when you update the parameters what part are you updating specifically
IPSEC parameters for example
peer IP addresses, etc
rendering config files using such parameters
Okay cool cool. So there’s a few ways you could approach it,
If I was doing something like that with GitHub and GitHub actions here’s how I’d probably do it.
On commit to master a github actions job kicks off, which builds the new AMI, and adds the ami details and the release notes to a GitHub release
Then my Terraform provisioning would be triggered to run off release events.
You could also consider going more of a pure gitops model and having the job change an actual ami variable in git
What do you mean by ami variable? The ami name?
One problem with your current implementation is that your ami id isn’t actually stored in git is it
As in if you searched your master branch for your ami id it wouldn’t show up would it
We cannot store the ami anywhere as the ami does not exist - it needs to get built first
The ami name yeah
We have a base image that we build upon.
Whatever the output of the image builder is you’re using
Yeah but the building of the ami doesn’t have any inherent coupling to terraform
Like when I build a Dockerfile and publish it to a registry, it’s still built and published. Whether or not I do something with it immediately or later or never
So there’s no reason your terraform has to build the ami, you could just provide the ami name to a terraform variable
Whether that’s an input in CI or an update to a .tfvars on the master branch
So how does the AMI get built?
the AMI whose name you pass to TF
Whatever you’re doing in the local executor script
Just do that in your CI environment
Oh, I see. So execute the script directly from CI, rather than using the external data resource?
Then you can either pass the ami name or use a data.aws_ami to get the latest version of that ami
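Something along these lines should work for the lookup, assuming your image builds are named predictably (the owner and name filter here are made up):
data "aws_ami" "vpn_peer" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["vpn-peer-*"]
  }
}

# then in the launch template: image_id = data.aws_ami.vpn_peer.id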
Alright. Let me see how to go about that. Thanks for the insights
No problem
Hi all, I'm using Terraform to deploy an EC2 instance on Windows Server 2022 (using the AWS base image for this). I have a user data script that is executed at launch with no problems on Server 2019, but for some reason it doesn't seem to work at launch on Server 2022. The script runs fine when running it locally on the box. Wondering if anyone has come across this issue?
Solved on another thread https://sweetops.slack.com/archives/CB6GHNLG0/p1689251766670899
Hi, has anyone been able to successfully run userdata (powershell script) at launch of an EC2 instance on Windows Server 2022 using Amazons base image for this?
2023-07-04
For terraform-aws-mq-broker there seems to be a deprecated argument now with aws 5.x:
│ Warning: Argument is deprecated
│
│ with module.mq.module.mq_broker.aws_ssm_parameter.mq_application_username[0],
│ on .terraform/modules/mq.mq_broker/main.tf line 74, in resource "aws_ssm_parameter" "mq_application_username":
│ 74: overwrite = var.overwrite_ssm_parameter
│
│ this attribute has been deprecated
I’ve created an issue if that’s OK: https://github.com/cloudposse/terraform-aws-mq-broker/issues/64
I've expanded on your issue and linked an upstream bug with the overwrite argument
@Psy-Q we’ll have a look asap
2023-07-05
Hello Terraformers! Here’s a post that I created to show how you can grab a current list of VPC names in your environment https://www.taccoform.com/posts/tfg_p7/
Overview At some point in your AWS and Terraform journey, you may need the names of the VPCs in a given AWS region. You’d normally look to use a data source lookup, but the name field is not exposed as an attribute for aws_vpcs. Here’s a quick way to get those VPC names. Lesson You will still need to use the aws_vpcs data source to get a list of all the VPC IDs in a given region:
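The gist of the approach (a rough sketch, not the article's exact code) is to feed those VPC IDs into an aws_vpc data source and read the Name tag off each one:
data "aws_vpcs" "current" {}

data "aws_vpc" "each" {
  for_each = toset(data.aws_vpcs.current.ids)
  id       = each.value
}

output "vpc_names" {
  value = [for vpc in data.aws_vpc.each : lookup(vpc.tags, "Name", vpc.id)]
}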
Thanks for sharing! Can you also share the recipe for the image at the top of the article?
I wish I remembered that recipe, it's from a couple of years ago
2023-07-06
2023-07-07
With aws_organizations_account in Terraform, how do you create an account under an OU?
Does it create it directly under the OU, or create it in the root and then move it to the OU?
Hi, I don't know if this is the right channel. Sometimes rolling out a Helm update using cloudposse/helm-release/aws (version = 0.8.1) "breaks" my Helm deployment.
Thank you for any help
⎈|arn:aws:eks:us-west-2:605322476540:cluster/notifi-uw2-dev-eks-cluster:default) bruno@t490s ~/Notifi/notifi-infra fix/change-trace-id-string helm ls -n prometheus --debug
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
oauth2-proxy-alertmanager prometheus 1 2023-07-06 20:01:06.47035317 +0000 UTC deployed oauth2-proxy-6.13.1 7.4.0
oauth2-proxy-prometheus prometheus 1 2023-07-06 20:01:00.401293758 +0000 UTC deployed oauth2-proxy-6.13.1 7.4.0
(⎈|arn:aws:eks:us-west-2:605322476540:cluster/notifi-uw2-dev-eks-cluster:default) bruno@t490s ~/Notifi/notifi-infra fix/change-trace-id-string k get secrets -n prometheus
NAME TYPE DATA AGE
alertmanager-kube-prometheus-stack-alertmanager Opaque 2 19h
alertmanager-kube-prometheus-stack-alertmanager-generated Opaque 2 19h
alertmanager-kube-prometheus-stack-alertmanager-tls-assets-0 Opaque 0 19h
alertmanager-kube-prometheus-stack-alertmanager-web-config Opaque 1 19h
kube-prometheus-stack-admission Opaque 3 19h
kube-prometheus-stack-grafana Opaque 3 19h
oauth2proxy-alertmanager Opaque 3 19h
oauth2proxy-prometheus Opaque 3 19h
prometheus-kube-prometheus-stack-prometheus Opaque 1 19h
prometheus-kube-prometheus-stack-prometheus-tls-assets-0 Opaque 1 19h
prometheus-kube-prometheus-stack-prometheus-web-config Opaque 1 19h
sh.helm.release.v1.kube-prometheus-stack.v1 helm.sh/release.v1 1 19h
sh.helm.release.v1.kube-prometheus-stack.v2 helm.sh/release.v1 1 37m
sh.helm.release.v1.kube-prometheus-stack.v3 helm.sh/release.v1 1 22m
sh.helm.release.v1.oauth2-proxy-alertmanager.v1 helm.sh/release.v1 1 19h
sh.helm.release.v1.oauth2-proxy-prometheus.v1 helm.sh/release.v1 1 19h
It’s not really clear what the problem is here?
Yes, agree - don’t see what’s broken.
PS: i have to delete the secret and the release comes back
Hey guys, I've used Terraform to create an ECS cluster and it works locally. When I try to use it in a GitHub Action, the terraform apply is successful but no resources are created, and when I check the Terraform state from my local machine it says there are no resources. However, when I then run a terraform apply from my local machine, it says some roles already exist that didn't exist prior to the GitHub Actions apply. They are using the same backend, so it shouldn't be a state file issue. Does anyone know what's happening?
My best guess - you're missing the TF state backend
Add a backend.tf locally and run tf apply to migrate the state to S3, then commit the backend.tf file to GH:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
I think the cluster does exist but you are looking in the wrong region, or perhaps account
Probably locally you've explicitly set the region, while in GitHub Actions it's defaulting to us-east-1
My terraform file has this backend locally and on GitHub Actions, so it should be in the same region:
backend "s3" {
  bucket         = "firstwebsite-tf-state-backend"
  key            = "tf-infra/terraform.tfstate"
  region         = "us-east-1"
  dynamodb_table = "terraform-state-locking"
}
And everything should be in same region
That is just the region for the state file. Can you share the provider config?
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  backend "s3" {
    bucket         = "firstwebsite-tf-state-backend"
    key            = "tf-infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locking"
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}
resource "aws_ecs_cluster" "dan" {
  name = "diggas"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

resource "aws_ecs_cluster_capacity_providers" "dan_capacity_provider" {
  cluster_name       = aws_ecs_cluster.dan.name
  capacity_providers = ["${aws_ecs_capacity_provider.test.name}"]

  default_capacity_provider_strategy {
    base              = 1
    weight            = 100
    capacity_provider = aws_ecs_capacity_provider.test.name
  }
}

resource "aws_ecs_capacity_provider" "test" {
  name = "test"

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.bar.arn
    managed_termination_protection = "ENABLED"

    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 1
      maximum_scaling_step_size = 1
    }
  }
}
resource "aws_iam_role" "ecs_agent" {
  name               = "ecs-agent"
  assume_role_policy = data.aws_iam_policy_document.ecs_agent.json
}

resource "aws_iam_role" "execution_role" {
  name = "execution-ecs-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

data "aws_iam_policy_document" "ecs_agent" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role_policy_attachment" "ecs_agent_permissions" {
  role       = aws_iam_role.ecs_agent.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "instance_profile" {
  name = "comeon-instanceprofile"
  role = aws_iam_role.execution_role.name
}

resource "aws_iam_role_policy_attachment" "ecs_task_permissions" {
  role       = aws_iam_role.execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_agent" {
  name = "ecs-agent"
  role = aws_iam_role.ecs_agent.name
}

resource "aws_launch_template" "test" {
  name_prefix = "test"

  iam_instance_profile {
    name = aws_iam_instance_profile.ecs_agent.name
  }

  image_id      = "ami-0bf5ac026c9b5eb88"
  instance_type = "t3.large"

  user_data = base64encode(<<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=diggas" >> /etc/ecs/ecs.config
  EOF
  )
}
resource "aws_autoscaling_group" "bar" {
  availability_zones = ["us-east-1a"]
  desired_capacity   = 1
  max_size           = 1
  min_size           = 1

  launch_template {
    id      = aws_launch_template.test.id
    version = "$Latest"
  }

  protect_from_scale_in = true
}

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}

resource "aws_ecs_task_definition" "tformtest" {
  family = "tformtest"
  container_definitions = jsonencode([
    {
      name               = "tform"
      image              = "public.ecr.aws/v8j0g7n1/firstwebapp:latest"
      cpu                = 2048
      memory             = 4096
      essential          = true
      execution_role_arn = "ecsTaskExecutionRole"
      network_mode       = "default"
      portMappings = [
        {
          containerPort = 8000
          hostPort      = 8000
        }
      ]
    },
  ])
}

resource "aws_ecs_service" "test_service" {
  name            = "test-service"
  cluster         = aws_ecs_cluster.dan.id
  task_definition = aws_ecs_task_definition.tformtest.id

  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200

  desired_count = 1
}

data "aws_vpc" "default" {
  default = true
}

data "aws_route_table" "default" {
  vpc_id = data.aws_vpc.default.id

  filter {
    name   = "association.main"
    values = ["true"]
  }
}
that's the whole file
2023-07-08
I published a new TF module that allows you to use Docker to build artifacts (e.g. a zip file that contains Lambda source code) without polluting the machine running TF and Docker. You may or may not find it useful, but I found it very useful, especially for building Lambda@Edge functions that have deployment-specific configuration.
https://registry.terraform.io/modules/sgtoj/artifact-packager/docker/latest
Any Hashi folks in here these days? https://github.com/hashicorp/terraform-provider-aws/pull/31284 has been sitting there for 2 months.
Description
Fixes update DDB table action to only try and update replicas CMK if actually using a CMK.
Relations
Closes #31153
References
Built and tested this locally. Ran what was failing on the linked bug report which applied clean.
Output from Acceptance Testing
Before change:
❯ TF_ACC=1 go test ./internal/service/dynamodb/... -v -count 1 -parallel 20 -run='TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned' -timeout 180m
=== RUN TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== PAUSE TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== CONT TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
table_test.go:1771: Step 2/2 error: Error running apply: exit status 1
Error: updating DynamoDB Table (tf-acc-test-9185934778554862359) SSE: ValidationException: 1 validation error detected: Value '' at 'replicaUpdates.1.member.update.kMSMasterKeyId' failed to satisfy constraint: Member must have length greater than or equal to 1
status code: 400, request id: S56IOFM90QM4O98R5OIFJNG5IFVV4KQNSO5AEMVJF66Q9ASUAAJG
with aws_dynamodb_table.test,
on terraform_plugin_test.tf line 14, in resource "aws_dynamodb_table" "test":
14: resource "aws_dynamodb_table" "test" {
--- FAIL: TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned (154.64s)
FAIL
FAIL github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb 154.709s
FAIL
After change:
❯ TF_ACC=1 go test ./internal/service/dynamodb/... -v -count 1 -parallel 20 -run='TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned' -timeout 180m
=== RUN TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== PAUSE TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
=== CONT TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned
--- PASS: TestAccDynamoDBTable_Replica_singleDefaultKeyEncryptedAmazonOwned (331.96s)
PASS
ok github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb 332.037s
IMO, one reason Hashi folk are in here is that they don’t get context-free “+1” requests constantly
2023-07-10
Hello #terraform, has anyone set up a Google Binary Authorization policy to sign the images in Google's AR?
Hey all. Is there a way to do string expressions
in Terraform? For example, I have this:
resource "aws_ssm_parameter" "authz_server_name" {
name = "server_name"
value = module.authz_server_remote_state.outputs.authz_server_name
description = "Server name"
type = "String"
overwrite = true
}
but I would like to do this:
resource "aws_ssm_parameter" "authz_server_name" {
name = "server_name"
value = eval("module.authz_server_remote_state.outputs." + var.value_key_name)
description = "Server name"
type = "String"
overwrite = true
}
Is something like this possible with Terraform?
well you can create strings using interpolation, but there is no way to evaluate references like that
your best bet for this particular use case would be to modify the source to be a map, and then you can index into the map, e.g.
module.authz_server_remote_state.outputs.authz_server_names[var.value_key_name]
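i.e. something like this on the producing side (the output name and keys are illustrative), which you then index from the consumer as above:
output "authz_server_names" {
  value = {
    primary   = aws_instance.authz_primary.private_dns
    secondary = aws_instance.authz_secondary.private_dns
  }
}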
2023-07-11
2023-07-12
v1.5.3 1.5.3 (July 12, 2023) BUG FIXES: core: Terraform could fail to evaluate module outputs when they are used in a provider configuration during a destroy operation (#33462) backend/consul: When failing to save state, consul CAS failed with transaction errors no longer shows an error instance memory address, but an actual error message….
A module output is generally not used during destroy, however it must be evaluated when its value is used by a provider for configuration, because that configuration is not stored between walks. Th…
What is the best book to learn Terraform?
The Book of Hard Knocks
I read Terraform: Up and Running: Writing Infrastructure as Code a few years ago, and it looks like it has been updated.
Yeah that’s still a good one IMO
There’s also a number of tutorials now, https://developer.hashicorp.com/terraform/tutorials
Explore Terraform product documentation, tutorials, and examples.
They seem more focused on being an onramp for Terraform Cloud. I think I will reread Terraform: Up and Running: Writing Infrastructure as Code; I am getting rusty and need to learn the new import block.
Yeah maybe. But the terraform cli still works fine. Just ignore the TFC pieces and you can do pretty much all the same things locally
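Re the new import block, it looks roughly like this (the resource and bucket name are made up):
import {
  to = aws_s3_bucket.logs
  id = "my-existing-logs-bucket"
}

resource "aws_s3_bucket" "logs" {
  bucket = "my-existing-logs-bucket"
}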
I enjoyed the IaC book by Kief Morris https://infrastructure-as-code.com/book/
Exploring better ways to build and manage cloud infrastructure
I read it later on, but it puts into words what I’ve learned the hard way
There’s also this book by Rosemary Wang, but I haven’t gotten around to it yet: https://www.amazon.com/Patterns-Practices-Infrastructure-Code-Terraform/dp/1617298298
2023-07-13
Hi all, I am trying to create a VPC using the module:
module "vpc" {
  source = "cloudposse/vpc/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version   = "2.1.0"
  namespace = "eg"
  stage     = "test"
  name      = "app"

  ipv4_primary_cidr_block          = "10.0.0.0/16"
  assign_generated_ipv6_cidr_block = false
}
But it is prompting me to enter vpc_id when running terraform plan
please check the example https://github.com/cloudposse/terraform-aws-vpc/tree/main/examples/complete
vpc_id = module.vpc.vpc_id
$ terraform plan
var.vpc_id
VPC ID where subnets will be created (e.g. vpc-aceb2723
)
Enter a value:
seems it is needed to use -var-file with the fixture var file
Hi, has anyone been able to successfully run userdata (powershell script) at launch of an EC2 instance on Windows Server 2022 using Amazons base image for this?
If so, then some examples would be great. I’ve only noticed this when we started to provision EC2’s on Windows Server 2022. The same userdata (powershell script) works fine on Server 2019
Hi Jay, we've found in our labs that if the formatting of the instance disk is on 3 lines of code, it breaks the user data script on Windows Server 2022. You might want to try putting it on one line instead (piped through | where required).
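For example, something roughly like this (the disk-init commands are illustrative, not the exact lab script):
user_data = <<-EOT
  <powershell>
  Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false
  </powershell>
EOT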
That’s awesome! Thanks Paul. I’ve been trying to figure this out for months and it’s worked straight away!
I knew it would be something simple
Thank you both for the help and guidance!
I’ll be sure to try this and confirm back :)
No problem @Jay, glad I could help. It’s tricky getting to the bottom of these strange Windows/Terraform issues
I agree!
2023-07-14
Hey @jose.amengual – Since you’re the expert, how do you typically run Atlantis? ECS or just on EC2? Do you use the CP module? Any suggestions for success on that front?
Terraform module for deploying Atlantis as an ECS Task
ECS
a lot of people use Anton's Atlantis module
I use the cloudposse components in atmos
the cloudposse module is severely out of date
I just declare a ECS cluster and task def using the ghr image and pass the necessary variables
Gotcha Thanks for the info!
Hi all, I'm new to this group and Terraform. Can someone help me with the best way to learn Terraform? Any good GitHub repo to do hands-on exercises and learn?
Terraform has some great tutorials on the main website: https://developer.hashicorp.com/terraform/tutorials?product_intent=terraform
Explore Terraform product documentation, tutorials, and examples.
Thanks @Chris for the help. Does that have any hands-on lab exercises, or where can I look for better hands-on practice?
Thanks buddy
No problem at all!
2023-07-17
Is anyone here familiar with the cloudposse/label/null module?
We’ve just started adopting it in a project for consistency and would like to understand the best way to label some resources in a module.
For example, would you use the same label for a lambda function and a security group for the lambda function? Is there any guidance on best practices so as not to run into any naming issues?
Personally, I lean towards using the same label for most things unless I know there will be collisions. e.g., When I am deploying multiple fns within the same root module, I will have multiple labels.
This is a module where I only need one label (yopass_label), which is used for all resources. Here is a different module where I use one label (mw_service_label) for most things but have two additional labels (mw_auth_service_label and mw_urlrewrite_service_label) that are used for their respective lambda-fn/service.
No, but if you figure out how to reference it as a child module in #terragrunt i’m all ears.
Thanks, Brian. I've made use of attributes for specific cases and pass context between modules. Works a treat.
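Roughly what that looks like (module names here are made up):
module "lambda_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "eg"
  stage     = "dev"
  name      = "orders"
}

# derive a second label for the security group by passing the context plus an attribute
module "lambda_sg_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  attributes = ["sg"]
  context    = module.lambda_label.context
}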
If anyone is using Yopass for secret sharing over the web, I have a TF module to deploy it to AWS managed/serverless resources. It uses CP’s naming patterns. https://github.com/sgtoj/terraform-aws-yopass
Terraform Module to deploy Yopass on AWS serverless technologies
Hi! I’m relatively new to Terraform, but have read “Up and Running”.
I was looking for good templates on how to deploy a full web app end-to-end with best practices and came across the terraform-aws-ecs-web-app module. I was wondering whether people think it's generally good practice to use something end-to-end like this, or if it's better to avoid using a pre-packaged module for something as complex as this. Any opinions?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.
I think it's ok to check out this module and even deploy it, but if you want to understand how things fit together and how to better troubleshoot this stack, I would recommend building it yourself. After that, you can decide to keep your deployment and abstract it into modules that you see fit, or come back to this original public module. I kinda shy away from public modules, but they are great as reference points
Thanks @Joe Perez! Since I asked that question I realized that the template there is a bit out of date so I had to go through and update all the library dependencies, etc., anyway. So it was a great reference point but I ended up needing to do a lot of stuff myself.
I also forgot to mention that Jerry Chang did a great series on ECS https://www.jerrychang.ca/writing/introducing-aws-ecs-technical-series
A technical series on AWS ECS
Personally I find it more work to audit and maintain a public module than it is to use the vendors provider directly. But they can be great for reference points like Joe said.
Remember you’re totally at the mercy of anything they put in there across releases, plus you’d have to verify that what is tagged on the release on github is actually what’s uploaded to the registry etc.
Commenting on that module specifically, I think your gut instinct is on the money, it’s trying to do way too much.
But on another note, I’m really not a fan of putting build and deployment in terraform.
They belong in the application CI & CD imo. Not infrastructure provisioning.
I’m referring in particular the bits to do with docker building, container definitions and releases etc
Great, thanks for the suggestions @kallan.gerard!
2023-07-18
Hi all, I am using this module: https://github.com/cloudposse/terraform-aws-security-group
Terraform module to provision an AWS Security Group
Can I point to another (already existing) security group as a destination ?
If you're trying to attach new rules to an existing group, then use target_security_group_id: https://github.com/cloudposse/terraform-aws-security-group#input_target_security_group_id
If instead you're trying to add rules for a source security group (i.e., allow connections from resources that use X security group), you can define source_security_group_id at the individual rule level.
And finally, if you want to create rules that allow connections from other resources using the same group, then set self to true at the individual rule level.
like this:
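e.g. a rough sketch (the module version, names, and exact rule object shape are illustrative; check the module docs for your pinned version):
module "lambda_sg" {
  source  = "cloudposse/security-group/aws"
  version = "2.2.0"

  vpc_id = var.vpc_id

  rules = [
    {
      key                      = "postgres-from-app"
      type                     = "ingress"
      from_port                = 5432
      to_port                  = 5432
      protocol                 = "tcp"
      cidr_blocks              = []
      source_security_group_id = data.aws_security_group.existing.id
      self                     = null
      description              = "Allow Postgres from an existing security group"
    }
  ]
}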
Have the aws provider docs gone? https://registry.terraform.io/providers/hashicorp/aws/latest/docs
Oh, the azure provider docs are gone also…. Hmmm.
https://registry.terraform.io/ cloudfront misconfig?
They fixed it. :)
HashiCorp Services’s Status Page - Terraform Registry UI Errors.
2023-07-19
v1.6.0-alpha20230719 1.6.0-alpha20230719 (Unreleased) NEW FEATURES:
terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how Terraform tests are written and executed. Terraform tests are now written within .tftest files, controlled by a series of run blocks. Each run block will execute a Terraform plan or apply command against the Terraform configuration under test and can execute conditions against the resultant plan and…
2023-07-20
Hi,
I'm currently reworking our Terraform setups, which currently use the AWS provider with assume_role. Now I want to move this over to use OIDC instead, so the assume_role needs to become assume_role_with_web_identity. This works fine in our pipelines, however it breaks running Terraform locally (we usually run a plan before committing it / creating an MR).
I'm not sure yet what the best approach would be to ensure that CI uses OIDC and local uses the "old" method, except for keeping the original assume_role in the provider config and adding a script in the pipeline that replaces it before running Terraform commands. But that feels like a bit of a dirty workaround.
Any ideas how to tackle this issue?
terraform init -backend-config rather than hardcoding either auth method
would dynamic blocks be of help ?
@Imran Hussain It doesn't appear so, at least not according to https://support.hashicorp.com/hc/en-us/articles/6304194229267-Dynamic-provider-configuration
Current Status While a longtime requested feature in Terraform, it is not possible to use count or for_each in the provider configuration block in Terraform. Background Much of the reasoning be…
Maybe I misread this then
A dynamic block can only generate arguments that belong to the resource type, data source, provider or provisioner being configured. It is not possible to generate meta-argument blocks such as lifecycle and provisioner blocks, since Terraform must process these before it is safe to evaluate expressions.
Dynamic blocks automatically construct multi-level, nested block structures. Learn to configure dynamic blocks and understand their behavior.
A quick test is if the provider resource accepts dynamic blocks
provider "aws" {
dynamic "assume_role" {
for_each = var.local ? [1] : []
content {
role_arn = "arn:aws:iam::123456789012:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
}
did not throw an error when I did an init and a validate
Wow, and plan doesn't mind that either
you have to create a variable, local, which defaults to false
then have the two dynamic blocks defined
by default it's set to false, but when you run locally you can pass in the value or set it via an environment variable, whatever floats your boat
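e.g. a minimal sketch of that variable (matching the var.local reference above):
variable "local" {
  description = "Set to true when running Terraform locally instead of in CI"
  type        = bool
  default     = false
}
Locally you'd set it with -var="local=true" or by exporting TF_VAR_local=true.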
Let me know how it pans out
Thanks for your help @Imran Hussain
This is what I ended up with:
provider "aws" {
region = var.aws_region
dynamic "assume_role" {
for_each = var.oidc_web_identity_token == null ? [1] : []
content {
role_arn = "arn:aws:iam::${local.account_id}:role/${var.role_name}"
session_name = var.session_name
}
}
dynamic "assume_role_with_web_identity" {
for_each = var.oidc_web_identity_token != null ? [1] : []
content {
role_arn = "arn:aws:iam::${local.account_id}:role/${var.role_name}"
session_name = var.session_name
web_identity_token = var.oidc_web_identity_token
}
}
}
Seems to work fine, both locally and in CI so problem solved
cool
glad I could help
You could also give the CI a role via oidc that only has permissions to assume the execution role
Seems like you should just use environmental variables
AssumeRole is a nightmare in AWS, IAM Identity Center is much better from a usability perspective
AssumeRole is easy! And not mutually exclusive with identity center. I use both, and also oidc. Depends on the use case
But I’d just take all that code out of your aws provider entirely
No thanks
And ensure the environmental variables are set in the process that runs the tf
Also no thanks
Why?
Credentials in environment variables are an anti-pattern to me, and also do not support configs that require multiple aws providers. To standardize, I use assume_role blocks almost everywhere, and the credential executing terraform only needs permissions to assume those roles. Easy peasy, and works for every use case, local or CI
Sure if you’ve got cross account requirements that’s a whole other kettle of fish
Credentials in environment variables is exactly how aws-sso exec passes them down to the command
Or you pass the aws profile env var
Yeah I’ll never use aws-sso lol
Configuration for the AWS Provider can be derived from several sources, which are applied in the following order:
Parameters in the provider configuration
Environment variables
Shared credentials files
Shared configuration files
Container credentials
Instance profile credentials and region
It’s standardising aws identity across your entire job vs standardising aws identity within tf but not outside.
It’s a bit rich to call something an antipattern
Feel free to do you.
You asked why, I answered. That is my reason. Not sure why it’s upset you
Feel free to not drop dismissive “no thanks” on people who aren’t even talking to you.
I’m not making you do it my way.
Like I said, soft skills, you don’t snipe in on other peoples responses to someone else in a group conversation in that sort of tone
If it makes you feel any better I don’t think your way of doing it is a bad solution at all
But I definitely wouldn’t want to work with someone with that sort of way of getting their input across
likewise
I think your approach is clearly a bit of a hot take Loren. Using environment variables with the AWS SDK is very common, and assume role for same account access is uncommon. You can do it your way but don’t act surprised when people question it
hi ,
Hi all, I am trying to create a VPC along with the dynamic subnets module:
locals {
  vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-2a", "us-east-2b", "us-east-2c"]
  use_az_ids             = true
  az_name_map = {
    "us-east-1a" = "AZ-1",
    "us-east-1b" = "AZ-2",
    "us-east-1c" = "AZ-3",
    "us-east-2a" = "AZ-4",
    "us-east-2b" = "AZ-5",
    "us-east-2c" = "AZ-6"
    # Add more mappings for your availability zones
  }
}

module "vpc" {
  source = "cloudposse/vpc/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version   = "2.1.0"
  namespace = "eg"
  stage     = "test"
  name      = "app"

  ipv4_primary_cidr_block          = "10.0.0.0/16"
  assign_generated_ipv6_cidr_block = false
}

module "dynamic_subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  namespace          = "eg"
  stage              = "test"
  name               = "app"
  availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = ["10.0.0.0/16"]
}
But I am getting the error (the same error is repeated three times):

Error: Invalid index

  on .terraform\modules\dynamic_subnets\outputs.tf line 9, in output "availability_zone_ids":
   9:     for az in local.vpc_availability_zones : local.az_name_map[az]
    ├────────────────
    │ local.az_name_map is map of string with 6 elements

The given key does not identify an element in this collection value.
I think there is missing context in what you provided because this works.
locals {
vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-2a", "us-east-2b", "us-east-2c"]
az_name_map = {
"us-east-1a" = "AZ-1",
"us-east-1b" = "AZ-2",
"us-east-1c" = "AZ-3",
"us-east-2a" = "AZ-4",
"us-east-2b" = "AZ-5",
"us-east-2c" = "AZ-6"
}
}
output "test" {
value = [
for az in local.vpc_availability_zones : local.az_name_map[az]
]
}
The results of the terraform apply
Changes to Outputs:
+ test = [
+ "AZ-1",
+ "AZ-2",
+ "AZ-3",
+ "AZ-4",
+ "AZ-5",
+ "AZ-6",
]
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Adding the dynamic_subnets module block creates the error:
module "dynamic_subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  namespace          = "eg"
  stage              = "test"
  name               = "app"
  availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = ["10.0.0.0/16"]
}
My only conclusion is the aws provider is not scoped to us-east-2, but you intended to use that region. The map is built via aws_availability_zones in the cloudposse/dynamic-subnets/aws module. This is what it says…
The Availability Zones data source allows access to the list of AWS Availability Zones which can be accessed by an AWS account within the region configured in the provider.
In spite of the locals being defined, it's throwing an error
2023-07-21
Hi, anybody is using https://www.winglang.io/ ?
Wing is a cloud-oriented programming language. Most programming languages think about computers as individual machines. In Wing, the cloud is the computer.
wow this is super interesting. Seems similar to pulumi but with the addition of targeting multiple clouds
Wing is a cloud-oriented programming language. Most programming languages think about computers as individual machines. In Wing, the cloud is the computer.
looks exactly like pulumi especially when you consider pulumi-cloud which has cloud agnostic components
The more i look into it the more unique it is from pulumi
• looks like it tries to bridge the app code to be right next to the infra
• more opinionated in that you dont need to provision IAM and it handles least privileges out of the box to name a few
i could see it being really cool, but would be very hesitant to use this for any production application
Keep in mind that Winglang is a net-new language. It's not an SDK like Pulumi that lets you bring your own language.
It’s like HCL (in that HashiCorp invented it), and like Pulumi (in that it feels more like a real programming language than HCL).
At least, this was as of a few months ago when we got a demo.
2023-07-22
Anyone have any articles, books or videos discussing approaches and tooling for dealing with large amounts of infrastructure code. I’m looking for experiential opinions on:
- dealing with a lot of terraform workspaces and their potential interdependencies
- module versioning approaches (use a proper registry or just use git sources)
- keeping workspaces up to date
- terraform version: often something is built and then activity on that thing stops; someone goes in a few years later to do something and realizes it was built with terraform 0.0.1, which doesn't even exist for your new M1 Mac
- provider versions: upgrading providers causes a property to become a resource
- apply approaches: apply from workstation or use something like atlantis to have somewhat a log of applies along with some group review
- monorepo vs 1 repo per module vs 1 repo per module group
we do all of that (or almost all) with https://atmos.tools (but we don’t have a book written on the topic yet ). If you are interested in trying Atmos, let us know, we will try to answer the questions one by one (and if you are, we have atmos channel)
Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.
Thanks, I’m actually currently going through the docs for atmos now.
Also, #office-hours is a great resource to ask questions live. Every wednesday.
We use a monorepo for our IaC (modules are managed separately). That monorepo contains some yaml files defining stuff we share across workspaces. We try to not share outputs across workspaces as this quickly gets unmanageable. It’s easier to use descriptive data blocks.
The more infrastructure you have, the more you will realize it’s easier to maintain smaller logical subsections of infrastructure. For example if we deploy Postgres on RDS that would have the KMS key, log group, aws backup, proxy, maybe some extra region we transfer data to, etc.
While it’s easier to start to have one big state for everything, it quickly will cost you a lot in API calls and wasted time.
Thanks for the response Chris. So your monorepo has only iac that creates “live” concrete infrastructure and that infrastructure is made up of modules that are separated out into their own respective repositories? Are the module repositories 1 repo per module or do you have multi-module repos too? Do you use a private registry or git source references in your live monorepo? What do you mean by share outputs across workspaces? Is a descriptive data block a well documented tfvars file?
We started with one monorepo for all modules but found that applying GitHub actions was more and more complicated as time went on. Eventually we separated out individual modules, and are still separating things today.
We use terraform to manage GitHub to make the setup of this much less complicated on ourselves.
We used to go get output from independent states, and while this is a thing you can do it gets complicated over time.
Instead we define things as much as we can in static yaml files that both states can see (in our monorepo), then we setup a resource in one, and go and get it using data blocks in another.
Either way adds complexity but we’re using terraform cloud and we found this to be more complex than we were willing to manage.
Hey folks, I'm using the rds module and getting Error: creating RDS DB Instance (dev-devdb): InvalidParameterValue: Invalid master user name
Is this related to the database_user variable?
Is admin the username being used?
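For what it's worth, RDS rejects master usernames that are reserved words for the chosen engine, which is a common cause of that error. The input in question looks roughly like this (other module inputs omitted; the username shown is just an example):
module "rds_instance" {
  source = "cloudposse/rds/aws"
  # ...other inputs omitted

  database_user     = "dbadmin" # avoid reserved words such as "admin" on some engines
  database_password = var.database_password
}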
Has anyone had any good/bad experiences with Terragrunt? We’re considering adopting it to be more DRY.
I enjoy using it. I find a lot of cases where its features are valuable to me. Especially managing a lot of similar deployments to a lot of different accounts over a long time. Being able to generate terraform dynamically becomes pretty powerful. And it gives you simple ways to provide common inputs to many different root modules.
I haven’t had anything I’d call a bad experience yet. Though I suppose, the abstraction can make it easy to overcomplicate things.
Thanks @loren
Shameless plug, also consider atmos. See what resonates with you. https://atmos.tools (it’s by Cloud Posse and what we use).
Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.
Also, we have a #terragrunt channel. Might get more responses in there re: terragrunt.
Thanks @Erik Osterman (Cloud Posse). Will investigate Atmos too
2023-07-23
2023-07-24
Should the name field for each account be the same value as stage in the account component module (aws-terraform-components)?
Full context in the
I am provisioning a new aws org with the latest core (cold start) components: tfstate-backend, account, account-map, aws-sso, aws-teams, and aws-team-roles. I believe there is some conflict with the documentation between a subset of those components.
Specifically, the account readme had the name field for the accounts be {tenant}-{stage}. However, when deploying aws-sso, it looks for the name of the root account via the stage field instead of the name field configured by account-map (which says the root_account_account_name should be the stage of the root account).
on main.tf line 44, in locals:
│ 44: root_account = local.account_map[module.account_map.outputs.root_account_account_name]
│ ├────────────────
│ │ local.account_map is object with 5 attributes
│ │ module.account_map.outputs.root_account_account_name is "root"
Thank you @Gabriela Campana (Cloud Posse). I already resolved it. There was a combination of problems, one of which was an incorrect description for a variable in the account-map component. I submitted a PR, which was merged, that updated the description.
Fixing the *_account_account_name variable descriptions, setting values for those variables accordingly, and explicitly defining the stack descriptor format under descriptor_formats (see below) in the stack configurations resolved all the problems I had when running a cold start with the latest components.
descriptor_formats:
  stack:
    labels: ["environment", "stage"]
    format: "%v-%v"
I didn't submit any PRs for the second fix because I am not sure where it needs to be documented. And I assume it is already documented in the private Reference Architecture documentation (I don't have access).
All the defaults should work if you are not using tenant. If you are using tenant, our standard settings are:
descriptor_formats:
  account_name:
    format: "%v-%v"
    labels:
      - tenant
      - stage
  stack:
    format: "%v-%v-%v"
    labels:
      - tenant
      - environment
      - stage
Thank you @Jeremy G (Cloud Posse). That is also what I learned a few weeks ago.
Terraform cloud question :
I've used TF open source with workspaces all the time, but TFC workspaces are a bit different. I read the docs, and from what I understand you can use the remote backend to use TFC, which is recommended for CLI-driven deployments, but there is also the terraform block config that is used more for TFC UI-driven deployments.
is that a correct assumption?
so you can do this :
terraform {
cloud {
organization = "example_corp"
## Required for Terraform Enterprise; Defaults to app.terraform.io for Terraform Cloud
hostname = "app.terraform.io"
workspaces {
tags = ["app"]
}
}
}
OR this :
{
"terraform": {
"backend": {
"remote": {
"organization": "pepeinc",
"workspaces": {
"prefix": "tagging"
}
}
}
}
}
that is a backend.tf file in json format
I think it is not possible to do both
but AFAIK if you do cloud {} the apply/plan will have to happen in the UI
The terraform cloud block lets you connect your workspace (in Terraform Cloud) with your local setup so that the backend will be cloud. It works very differently from workspaces in the CLI, which is unfortunate naming on their part.
Remind me in the morning and I can explain more if you want.
no problem
thanks
2023-07-25
2023-07-26
Hello all, quick question on Terraform deployed via Github actions to AWS. I’m looking for a complete guide on how to do it correctly and what to include/exclude. Would anyone have any suggestions please? Some documents mention Route 53 config others don’t so it’s all very confusing
There are a lot of solutions these days for how to use GHA. No two will be equal. We also have our solution for atmos with terraform.
GitHub Actions
Our solution requires a bucket and dynamodb table used for storing the planfile (not to be confused with the state file).
Building the solution out yourself is dangerous. I wrote a blog post on this exactly. It even references an original cloud posse post https://terrateam.io/blog/cloud-posse-what-about-github-actions
Except for now we’ve done it.
So it's no longer dangerous to use GHA natively with Terraform
Thank you Erik and Team, appreciate the help
So the answer to the question 'what is the complete guide on how to do it correctly' is 'use atmos' …
Could be. Depends if you need enterprise-level features and support.
@Sergei the complete guide is what we offer as our service. We haven’t yet had a chance to release a simple (free) quick start for the GitHub Actions, but it is our plan. For now, like everything else we publish, it’s free to use and do what you will. Or not.
Thank you @Erik Osterman (Cloud Posse), appreciate that
Hey all! Had a question about Terraform and AWS IAM, specifically using the aws_iam_policy resource.
Context: When I run terraform apply
to update a policy, AWS performs 2 operations in the background: CreatePolicyVersion and DeletePolicyVersion (since I’m always at the customer managed-policy limit of 5). According to the AWS doc https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicyVersion.html, the API call has the request parameter “SetAsDefault”.
Question: is it possible to configure Terraform so that running terraform apply
on the modified aws_iam_policy resource has AWS run CreatePolicyVersion without setting the new policy as default?
Creates a new version of the specified managed policy. To update a managed policy, you create a new policy version. A managed policy can have up to five versions. If the policy has five versions, you must delete an existing version using before you create a new version.
@Jeremy White (Cloud Posse)
v1.5.4 1.5.4 (July 26, 2023) BUG FIXES: check blocks: Fixes crash when nested data sources are within configuration targeted by the terraform import command. (#33578) check blocks: Check blocks now operate in line with other checkable objects by also executing during import operations. (<a…
1.5.4 (July 26, 2023) BUG FIXES:
check blocks: Fixes crash when nested data sources are within configuration targeted by the terraform import command. (#33578) check blocks: Check blocks now opera…
This PR fixes a crash that occurs when the terraform import command is used against configuration that has a nested data block defined within a check block. Essentially, the nested data block still…
Hi folks,
Anyone have the bandwidth to opine a little for someone just starting out with Terraform? We're a shop that has lots of AWS stuff built out on CloudFormation, and we are just starting to build things out in Terraform. We're getting to the point where we need to design a platform for running our Terraform, figuring out where we'll save state, etc. More in the thread!
First we built a (brand new to us) OpenSearch module in AWS with Terraform (after our hosted Elastic went belly-up with Qbox. Oy!). That went well and the OpenSearch is a nicely modular piece of infra, so that was easy to do.
Next, we are working to build a big module that will spin up a new EKS cluster for us. We currently run 3 or 4 clusters and we do upgrade in place.. which is hard. We’re excited to start using immutable infra.
We did a trial of Spacelift and definitely liked it - but - we aren’t ALL in Terraform (we have a fair amount of cloudformation, bash, python, etc etc scripts and workflows that we won’t migrate out of completely anytime soon) so Spacelift was a bit less flexible than we wanted.
I’m tempted to just set up build pipelines in (self hosted) Bamboo, store our state in S3 and run from there. At least, until we have more experience and things get more complex.
Q1: Any reasons you would recommend against our starting off with pipelines in Bamboo & state in S3?
Q2: If you were starting over, what would you do now to set yourselves up well for the future? Or what do you wish you knew then that you know now?
Q3: I’m thinking about running pipelines that build logical pieces of our Infra - for instance “openSearch” and “EKS-cluster” and then “IAM roles, users and policies” - and each of these would end up with their own separate state file. Is that how it’s done?
The question can sorta be rephrased as, how complex should our solution be? And the answer depends heavily on the size of your business and the number of engineers working on infra you have. If you’re a small consultancy, and you have one engineer writing terraform, and this won’t change for a few years, just do whatever fits in with your existing workflows the best
thx @Alex Jurkiewicz - yeah, we’re a 200 person nonprofit - 16 eng total, 2 of those are infra. That makes sense - start off simple.
My experience of non profit is that turnover is relatively high. So simplicity and clarity of solution are key
How could I apply indent to a multiline string in a Terraform template? The following only indents the first line.
grafana:
  dashboards:
    default:
      dgraph-control-plane-dashboard:
        json: |
          ${indent(8, "${dashboard_dgraph_control_plane}")}
Don't do string templating for json or yaml. Use yamlencode instead
Grafana can load dashboards in yaml now?
no, template the entire snippet you’ve pasted
locals {
  dashboard = yamlencode({
    grafana = {
      dashboards = {
        default = {
          "dgraph-control-plane-dashboard" = {
            json = jsonencode(local.dashboard_dgraph_control_plane)
          }
        }
      }
    }
  })
}
2023-07-27
Team, I am wondering if we can assign custom endpoint (manually created) to RDS cluster writer instances via terraform
hi Alex, I had referred to the URL. However, I would like to assign a custom endpoint that was created manually.
Are you wanting to import existing infra into your configuration? https://developer.hashicorp.com/terraform/cli/import
Terraform can import and manage existing infrastructure. This can help you transition your infrastructure to Terraform.
Let me check if we can assign the imported custom endpoint. Thank you..
2023-07-28
Hi! I have a conceptual question about deploying multiple lightly-dependent services through terraform. Specifically I’d like to know whether I should 1. run terraform apply separately on each service, or 2. create one monolithic terraform file (with modules) and run terraform apply once on this. Details in the thread.
Let’s say we have an org that has two web apps, which results in 6 pieces of infrastructure:
- iam
- db
- backend_a (relies on security groups from iam and db)
- frontend_a (relies on backend_a)
- backend_b (relies on security groups from iam and db)
- frontend_b (relies on backend_b)
All of these live in separate directories with their own main.tf, outputs.tf, and variables.tf.
My question is, should I:
- run terraform apply separately for each service, and pass variables to the dependent services with terraform_remote_state
- create a super-directory that treats each of these as modules and only run terraform apply once on this super-directory
I think if everything goes right, “2.” seems easier because if I do something that requires a change up-stream (e.g. change the db configuration), it will automatically propagate downstream without me having to remember to re-run everything. However, I’m worried about both speed (terraform plan/apply may take forever) and fragility (if I break something it has the potential to break all our infra at once).
I was wondering if anyone had any wisdom about this? Thanks a lot in advance!
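For option 2, the super-directory wiring would look roughly like this (module paths and output names are made up):
module "iam" {
  source = "./iam"
}

module "db" {
  source              = "./db"
  app_security_groups = module.iam.app_security_group_ids
}

module "backend_a" {
  source               = "./backend_a"
  db_endpoint          = module.db.endpoint
  db_security_group_id = module.db.security_group_id
}

module "frontend_a" {
  source      = "./frontend_a"
  backend_url = module.backend_a.url
}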
I would try option 2:
• remote_state is flaky: its structure depends on how the other stack is organised. This creates coupling.
• your stack does not look that big
If option 1 becomes necessary, I would use terragrunt to pass information between stacks. HTH
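e.g. a minimal terragrunt.hcl sketch for one of the dependent stacks (paths and output names are made up):
dependency "db" {
  config_path = "../db"
}

inputs = {
  db_endpoint          = dependency.db.outputs.endpoint
  db_security_group_id = dependency.db.outputs.security_group_id
}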
Thanks a bunch!
I would go with the first option. There are a few principles why:
- Terraformed resources should be split up into different configurations by lifecycle. The lifecycle of your base infra is very different to that of the services living on top
- Smaller configurations are better. Smaller blast radius, faster plan/apply, clearer boundaries
If you look at the example project layouts recommended in Terraform docs, you’ll find an example very similar to your own, with a vpc, db, and application. They are divided into three separate configurations
Thank you! I guess it sounds like both are reasonable, and we’ll just have to choose which one seems most appropriate for our situation.
I think @Alex Jurkiewicz sums it up perfectly. By lifecycle is the most scalable way to organize it. While the other suggestions are reasonable, they set a project up for long-term failure, because while most projects start small they end up growing. It's true that depending on terraform remote state makes modules more flaky; the alternative is to use a k/v store like parameter store.
We organize components by lifecycle in our strategy. https://atmos.tools/core-concepts/components/library/#terraform-conventions
A component library is a collection of reusable building blocks.
Great, thanks a bunch @Erik Osterman (Cloud Posse)
While I agree cross-configuration dependencies are a real problem with no great way to solve it with vanilla Terraform, I think it's important to identify when something is a problem with TF and when it's a symptom of incorrect boundaries. Misaligned boundaries are a major problem for any system, not just TF.
For example, there’s concepts like vertically sliced, horizontally sliced, lifecycle sliced amongst others. These styles can also be nested.
Vertical slicing would be where everything for a service is contained within one boundary. so that would be service A’s database and IAM, frontend and backend would be contained within one configuration.
Horizontally sliced would be where you would have 4 configurations, one for IAM, one for dbs, one for frontends and one for backends.
Lifecycle sliced would depend, and could be nested inside either horizontally or vertically sliced.
I’d also be wary of thinking of it as a terraform scoped problem, as these sort of dependency issues aren’t limited to TF and it’s where things like inversion of control patterns really become essential.
Generally I say if it’s showing itself as a significant problem that leads me to believe there’s some design problems that need to be fixed
To give you a more concrete answer to your original example, and given some assumptions, I would have:
• Service A and service B as separate vertical slices
• Service A IaC contains its IAM, DB and whatever is in the backend and frontend config (What would you actually put in here though?)
• And within each vertical slice, likely separated by lifecycle, which may be ( IAM ) > ( DB ) > ( frontend and backend )
• Each service vertical slice would have its own pipeline where TF apply would be run every time, but each stage would only run if its tf code or a parent's tf code changed.
• These vertical slices would likely live with the service's application repo, or at least within their own repo. So service A tf wouldn't be in the same repo as service B tf unless they were already monorepo'd
• I'd keep as much application-level configuration out of the tf as possible. I wouldn't use TF to deploy docker images, builds, bundling, or to set deployment config or image tags, etc.