#terraform (2024-06)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-06-04

Taylor Turner avatar
Taylor Turner

What are some of the newer age IaC tools that you’ve shown before on the podcast? There was one that was dependency aware and had a visual drag-n-drop UI.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Referring to Cloud Posse’s office hours?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey everyone 👋 I’m George, the founder of Stakpak. We’re a VC-backed startup working on a specialized copilot for Terraform that attributes recommendations back to real people’s work.

I’d love to learn more about your workflows, and get your feedback, please DM me if you’re interested in helping us out.

https://www.youtube.com/watch?v=sNA1wC02pa8

Taylor Turner avatar
Taylor Turner

Hey Erik, good to see you are still going at it!

I’ve been bouncing around DevOps jobs the last 6 months so I haven’t been attending office hours but I plan on changing that starting this week.

That looks like what I was thinking. Thanks for the link!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@george.m.sedky

Taylor Turner avatar
Taylor Turner

Some people on my team haven’t used Terraform so I was doing a little demo today. We started talking about pain points, one of them being managing really large Terraform deployments for all of prod.

The example was “I want to deploy an EC2”, the tool should ideally know that I need

• VPC

• Subnets

• IGW/NAT

• AMI

• EBS w/ KMS key for encryption

• And so on…

Terraform is great, but it’s pretty “dumb” compared to where we’ll be in a few years with AI and tools like Stakpak. It may not be a popular idea, since managing Terraform, and bringing the knowledge of how to do it well, accounts for a big chunk of what many DevOps Engineers are being paid to do.

The tool I’m thinking of had a different name, and it would’ve been something you shared at least a year ago. If I can dig it up I’ll share it. Same idea as Stakpak, though I don’t think it was using AI.

george.m.sedky avatar
george.m.sedky

Hey @Taylor Turner, this is exactly what we’re working on at Stakpak. I’d love to arrange a call sometime to learn about your use case; we’re doing a lot of product discovery interviews this week

sheldonh avatar
sheldonh

remind me… if I’m building a reusable module… does it declare an empty provider or not? the docs got me confused. I see no examples of you doing this outside the examples/ directory in cloudposse modules.

Assuming I should gut any providers.tf in the reusable module and only declare required providers?

So would leave this out?

provider "azurerm" {
  features {}
}

or any other provider mapping for the username/password etc?

RB avatar

It’s not good practice to bake the provider instantiation into the consumable module itself

1
this1
Nate McCurdy avatar
Nate McCurdy

https://developer.hashicorp.com/terraform/language/modules/develop/providers#provider-version-constraints-in-modules

It should declare a required provider with a minimum version. But not a provider block.

5
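For example, a reusable module would declare only the version constraint, not the provider configuration — a minimal sketch using the azurerm provider from the question above:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0" # minimum version constraint only
    }
  }
}

The provider "azurerm" { features {} } block (and any credentials) then lives only in the root module that calls this one.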
RB avatar

The provider should be instantiated in the root dir where the reusable module is called

sheldonh avatar
sheldonh

cool, that’s what I thought but couldn’t find confirmation. thank you for this quick help. cheers!

1

2024-06-05

Release notes from terraform avatar
Release notes from terraform
09:43:31 AM

v1.8.5 1.8.5 (June 5, 2024) BUG FIXES:

terraform test: Remove duplicate warning diagnostic when providing values for unknown variables in run blocks. (#35172)

Remove invalid warning during cleanup phase by MicahKimel · Pull Request #35172 · hashicorp/terraform

Fixes #35061 Target Release

1.8.x Draft CHANGELOG entry


This update takes out display warnings during cleanup phase and by…

Zing avatar

what’s the best way to do the following?

• convert from CDKTF (failed experiment) cloudposse eks module to vanilla terraform cloudposse eks module

• convert from terraform-aws-modules/eks to cloudposse eks module? (some of our old clusters)

We’re trying to standardize and need to perform module migrations for both of the above. I think the first one might be simpler in terms of state-file massaging, but not certain.

is terraform state mv the right call? or something fancy / clever with the import blocks?

Destroying / recreating the sensitive resources (eks cluster) is not an option

theherk avatar
theherk

Some combination of import blocks and moved blocks is probably your best bet. The moved block is one of the most handy tools added, in my view.

Refactoring | Terraform | HashiCorp Developer

How to make backward-compatible changes to modules already in use.
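A minimal sketch of the two blocks, with hypothetical addresses; the real ones have to come from comparing the old and new modules’ state and plan output:

# Adopt an existing cluster into the new module's state without recreating it
import {
  to = module.eks_cluster.aws_eks_cluster.default
  id = "my-cluster" # for aws_eks_cluster, the import ID is the cluster name
}

# Tell Terraform a resource merely changed address, so it is moved, not destroyed
moved {
  from = module.old_eks.aws_eks_cluster.this[0]
  to   = module.eks_cluster.aws_eks_cluster.default
}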

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, just to clarify - you cannot destroy/rebuild the cluster?

Zing avatar

yeah cannot destroy rebuild :(

Zing avatar

I haven’t messed with moved blocks yet

Zing avatar

just brainstorming best possible options atm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect it will be very difficult to use any off-the-shelf Terraform module, because the shape of the infrastructure will always be different.

2024-06-06

Release notes from terraform avatar
Release notes from terraform
05:43:30 PM

v1.10.0-alpha20240606 1.10.0-alpha20240606 (June 6, 2024) EXPERIMENTS: Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.

ephemeral_values: This language experiment introduces a new special kind of value which Terraform allows to change between the plan phase and the apply phase, and between plan/apply rounds….

Release v1.10.0-alpha20240606 · hashicorp/terraform

1.10.0-alpha20240606 (June 6, 2024) EXPERIMENTS: Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.

ephemeral_values:…

Terraform Settings - Configuration Language | Terraform | HashiCorp Developer

The terraform block allows you to configure Terraform behavior, including the Terraform version, backend, integration with HCP Terraform, and required providers.

2024-06-09

2024-06-10

Zing avatar

have people been using the “new” security group resources?

aws_vpc_security_group_ingress/egress_rule

historically, the group rules have been a PITA with destroy recreates / rule dedupe logic… wondering if folks think it’s worth it to move to the new resources

1
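For reference, the newer resources manage one rule each, with their own IDs and tags — a minimal sketch with hypothetical values:

resource "aws_vpc_security_group_ingress_rule" "https" {
  security_group_id = aws_security_group.example.id

  description = "HTTPS from the office"
  cidr_ipv4   = "203.0.113.0/24"
  from_port   = 443
  to_port     = 443
  ip_protocol = "tcp"

  # individual rules can now be tagged, unlike aws_security_group_rule
  tags = { Name = "https-office" }
}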
loren avatar

Personally I prefer my security group rules inline for exclusive management by the security group resource, so no

1
Dale avatar

I managed to fix the ‘old’ SG resource we have by switching around the order of the rules so the TF state (which was ordered arbitrarily) matched the AWS console, but it did require me to first delete the individual rules and then reapply them, which made me unhappy for the 2 seconds they weren’t applied

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We updated our security group module to use these rules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
The Difficulty of Managing AWS Security Groups with Terraform – Cloud Posse

There are trade-offs when updating AWS Security Groups with Terraform. This article discusses them and the various options for managing them with Cloud Posse’s Terraform security group module.

3
RB avatar

Hmm… It would be nice if they had a new security group v2 resource that enforced inline rules (no clickops rules) and allowed tagging individual rules, so you get the best of both worlds.

1
Zing avatar


We updated our security group module to use these rules
am I doing something silly? I’m looking at https://github.com/cloudposse/terraform-aws-security-group/security

and it doesn’t seem to be using the new “ingress/egress” rule resources

Zing avatar


Hmm… It would be nice if they had a new security group v2 resource that enforced inline rules (no clickops rules) and tagging individual rules so you get the best of both worlds.
yeah, that’s been a desirable feature for years haha. it’s one of the many reasons we were considering onboarding a product like spacelift for drift detection

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On that note, we provide a GitHub action for drift detection if using Atmos. https://atmos.tools/integrations/github-actions/atmos-terraform-drift-detection/

Atmos Terraform Drift Detection | atmos

The Cloud Posse GitHub Action for “Atmos Terraform Drift Detection” and “Atmos Terraform Drift Remediation” define a scalable pattern for detecting and remediating Terraform drift from within GitHub using workflows and Issues. “Atmos Terraform Drift Detection” will determine drifted Terraform state by running Atmos Terraform Plan and creating GitHub Issues for any drifted component and stack. Furthermore, “Atmos Terraform Drift Remediation” will run Atmos Terraform Apply for any open Issue if called and close the given Issue. With these two actions, we can fully support drift detection for Terraform directly within the GitHub UI.

RB avatar

There’s no way to detect drift in security group rules if they are clickopsed which is the issue with using the individual rule resources vs inline resources

1
loren avatar

At least, no way using just terraform. You’d have to query the APIs directly, and compare against the config.

1
Zing avatar

thanks @RB,

security groups have been such a headache for us. I’m debating creating the security groups outside of the eks module rather than using the in-module one atm

RB avatar

That’s certainly an option. Most of the modules, if not all, provide a way to pass in your own security group

Zing avatar

what are your personal preferences?

Zing avatar

im mainly concerned with scenarios with rule modifications causing downtime

RB avatar

My personal preference is the same as lorens

Zing avatar

yeah I generally like that approach as well, but it seems all the folks out there these days advise against it :p

tbh, I don’t even know why… inline rules just seems… better?

loren avatar

it changes the responsibility a bit. folks sometimes want different processes (or different teams) to manage the creation of the security group, vs the rules in the security group. inline rules can also make some use cases more difficult, like rules that cross-reference other sgs

loren avatar

but if you can change your operating model to one that supports inline rules, then i feel like it works a lot better. and i strongly prefer having a resource that manages the “container” (the security group) and its “items” (the rules) together, so that terraform can alert on drift in the items in the container (e.g. unmanaged rules added to the sg)
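A minimal sketch of the inline style loren describes: because the rules live inside the aws_security_group resource, Terraform manages the rule set exclusively and will flag any rule added out of band.

resource "aws_security_group" "app" {
  name   = "app"
  vpc_id = var.vpc_id

  # Inline rules are exclusively managed: clickops additions show up as drift
  ingress {
    description = "HTTP from the VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}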

loren avatar

i use the container/items terminology for a number of things with similar concerns, like iam roles (container) and policy attachments (items), or route tables (container) and routes (items)

loren avatar

unfortunately terraform providers have been moving in the other direction, and instead are aligning a resource purely to a single api action, so we see less and less inline support for “items” and their exclusive management, which ultimately makes drift detection effectively impossible within pure terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

which ultimately makes drift detection effectively impossible within pure terraform
I think “drift” is not an entirely fair characterization. You can certainly catch drift in what is managed by Terraform. You cannot catch things managed outside of that Terraform, and that may or may not be drift, if other systems manage it.

I actually am the outlier. I like the current trajectory, enabling a loose coupling between root modules. E.g. one root module provisions a cluster and security group. Another service, managed by a different root module, that perhaps provisions node pools that work with the cluster, can then add or modify rules on the cluster’s security group as it needs. It’s similar to a hub/spoke. As infrastructure state gets decomposed into its various components, this is nice to have.

loren avatar

I acknowledge that operating model, and it has its use cases, I just don’t prefer it for everything.

1
loren avatar


You can certainly catch the drift of what is managed in Terraform. You cannot catch things managed outside of that Terraform.
You can’t even be aware of drift outside of Terraform, without exclusive management features. That’s the whole point. That “in Terraform” is a massive caveat. It leaves a gaping hole, which gets nicely plugged by exclusive management features at the container level. All I’m arguing for is that it is a valid model with desirable characteristics, and it would be nice if terraform providers put effort into meeting the use case. It is also valid and desirable to support the pure “attachment” model.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I look at that more as IaC coverage than drift.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. if you use an EKS cluster, and any operators, there will likely be TONS of resources provisioned that are not managed by Terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s not drift. It’s just managed by something else.

loren avatar

like i said, there is a place for both

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Security Groups might be a special case though. From a security perspective, I could appreciate wanting tighter controls.

1
loren avatar

once you start thinking about it, there are lots of resources where it’s rather nice to manage everything from the container level, rather than as separate resources. iam roles and users and groups, certainly. s3 buckets used to work this way, which was nice from the perspective of managing the ordering of actions and conflicting api calls. it certainly does put more work on the code for the resource itself, which is of course why the providers don’t want to do it

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, I’m trying to understand why the ecs cluster module is trying to recreate the policy attachments every time I add more than one module instance via a for_each. The plan shows the arn will change, but it’s an AWS managed policy, so it won’t change:

update policy_arn : "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

changes to (known after apply), which forces replacement

the resource address is:

module.ecs_clusters[“xxx”].module.ecs_cluster.aws_iam_role_policy_attachment.default[“AmazonSSMManagedInstanceCore”]

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you properly setting a unique attribute for each instance

Juan Pablo Lorier avatar
Juan Pablo Lorier

the module takes care of the ids, I only set the cluster id. The id is unique (the name is formed from tenant + environment + name)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think I’ll need to see more code

Juan Pablo Lorier avatar
Juan Pablo Lorier

This is the cluster related code

module "ecs_cluster" {
  source  = "cloudposse/ecs-cluster/aws"
  version = "~> 0.6"

  enabled = var.enabled
  context = module.label.context

  container_insights_enabled = var.container_insights_enabled
  logging                    = var.logging
  log_configuration          = var.log_configuration
  capacity_providers_fargate = true
  tags                       = var.tags

  depends_on = [module.label]
}

Juan Pablo Lorier avatar
Juan Pablo Lorier

most options are defaulted. The context includes tenant and environment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Where’s the for_each?

Juan Pablo Lorier avatar
Juan Pablo Lorier

I’m managing a local module that uses the cloudposse modules

module "ecs_clusters" {
  for_each = { for v in var.ecs_clusters : "${v.tenant}-${v.environment}-${v.cluster_name}" => v }
  source   = "./modules/ecs"

  vpc_id              = module.vpc.vpc_id
  subnet_ids          = module.vpc.private_subnets
  public_subnet_ids   = module.vpc.public_subnets
  dns_domain          = each.value.dns_domain == null ? var.dns_domain : each.value.dns_domain
  zone_id             = each.value.dns_domain == null ? data.cloudflare_zone.cloudflare_zone[0].id : data.cloudflare_zone.cloudflare_zones[each.value.dns_domain].id
  # If a certificate is provided, pass that. If not, dns_domain determines if we use the root cert for the workspace or create a new one
  cloudwatch_alarm            = each.value.cloudwatch_alarm
  container_definitions       = var.container_definitions
  ecr_repos                   = var.ecr_repos
  enforced_dns_root           = each.value.enforced_dns_root
  task_execution_IAM_role_arn = aws_iam_role.roles["ecs_task_execution_role"].arn

  cluster_name                     = each.value.cluster_name
  enabled                          = each.value.enabled
  alb_enable_logs                  = each.value.alb_enable_logs
  alb_url                          = each.value.alb_url
  https_listener_arn               = each.value.https_listener_arn
  redis_cluster_enabled            = each.value.redis_cluster_enabled
  sdk_redis_enabled                = each.value.sdk_redis_enabled
  tenant                           = each.value.tenant
  namespace                        = each.value.tenant
  environment                      = each.value.environment
  container_insights_enabled       = each.value.container_insights_enabled
  logging                          = each.value.logging
  log_configuration                = each.value.log_configuration
  ecs_metric_topic_subscriptions   = each.value.ecs_metric_topic_subscriptions
  ecs_critical_topic_subscriptions = each.value.ecs_critical_topic_subscriptions
  ecs_services                     = [for v in var.ecs_services : v if v.ecs_cluster_name == each.key]
  tags                             = each.value.tags
  depends_on                       = [aws_ecr_repository.ecs_ecr_repos, aws_ecr_repository.ecr_repos, aws_iam_role_policy.role_policies]
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
(please use code fences)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so what’s missing from this example is using var.attributes to make each instance unique

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

otherwise the resources will be duplicated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

At least, that was the way we intended it to be done. We haven’t tried other ways.
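For example, a hypothetical tweak to the for_each block above, assuming the local ./modules/ecs module forwards attributes through to the null-label context:

module "ecs_clusters" {
  for_each = { for v in var.ecs_clusters : "${v.tenant}-${v.environment}-${v.cluster_name}" => v }
  source   = "./modules/ecs"

  # ... other inputs as before ...

  # makes every generated resource name/ID unique per instance
  attributes = [each.value.cluster_name]
}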

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, word of caution - you’re building a factory inside of a root module: your terraform state will be massive, slow and brittle.

Juan Pablo Lorier avatar
Juan Pablo Lorier

Thanks. As I said, I’m actually using a module for all ECS-related stuff. It’s consumed in the root module by this ecs_cluster module. Let me look at the attributes so I can understand what is missing and the impact of modifying any of that. Again, thanks a lot

2024-06-11

2024-06-12

Release notes from terraform avatar
Release notes from terraform
09:33:29 AM

v1.9.0-rc1 No content.

2024-06-13

Soren Jensen avatar
Soren Jensen

I need a bit of help to debug an issue. I have created an RDS Postgres cluster with resource "aws_rds_cluster" "postgres_cluster" and set manage_master_user_password = true. I’m now trying to get the master password from Secrets Manager to bootstrap the database with the Postgresql provider.

data "aws_secretsmanager_secret" "postgres_password_secret" {
  arn        = aws_rds_cluster.postgres_cluster.master_user_secret[0].secret_arn
  depends_on = [aws_rds_cluster.postgres_cluster]
}

data "aws_secretsmanager_secret_version" "postgres_password_version" {
  secret_id  = data.aws_secretsmanager_secret.postgres_password_secret.id
  depends_on = [data.aws_secretsmanager_secret.postgres_password_secret]
}

# Parse the secret JSON string to extract the username and password
locals {
  db_credentials = jsondecode(data.aws_secretsmanager_secret_version.postgres_password_version.secret_string)
}

Unfortunately the first data source, aws_secretsmanager_secret, isn’t working on our self-hosted github runners. It works locally on my laptop, as well as on the default GitHub runners. I have spent a significant amount of time trying to narrow down differences in versions and reading the debug outputs to see why it doesn’t work. I see, both on the self-hosted runner and locally, that terraform correctly finds the cluster and resolves aws_rds_cluster.postgres_cluster.master_user_secret[0].secret_arn to the correct arn. Still, terraform is stuck:

data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [10s elapsed]
data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [20s elapsed]
data.aws_secretsmanager_secret.postgres_password_secret: Still reading... [30s elapsed] 

Any ideas?

Soren Jensen avatar
Soren Jensen

I have the following debug print in all environments and they are identical:

aws-cli/2.16.7 Python/3.11.8 Linux/5.15.0-1063-aws exe/x86_64.ubuntu.20
{
  "UserId": "AROAYKRB4XIA3M6HSSAPV:GitHubActions",
  "Account": "00000000000",
  "Arn": "arn:aws:sts::00000000000:assumed-role/prod-github-action/GitHubActions"
}
Terraform v1.5.7 on linux_amd64

Python 3.11.9

Anchor avatar

maybe a silly question here: did you try to ssh into the self-hosted github runner and retrieve the secret via the CLI? I just want to ensure it’s not a permission issue

Soren Jensen avatar
Soren Jensen

Anchor, not a silly question at all. I did and I can access the secret that way.

1
Soren Jensen avatar
Soren Jensen

No, I’m not using a VPC endpoint. Still haven’t resolved the issue, but working around it instead.

Fizz avatar

Have you looked in cloudtrail to see what APIs TF is invoking and with which parameters? Setting TF_LOG to debug should also reveal info about the API being called

Soren Jensen avatar
Soren Jensen

I do see the API calls after setting TF_LOG to debug. I don’t see any requests in cloudtrail relating to secrets manager.

Fizz avatar

hmm. what about in cloudtrail? what’s the list of API invocations made by the TF user?

2024-06-14

2024-06-17

Jackie Virgo avatar
Jackie Virgo

Has anyone used the terraform-aws-s3-bucket module for creating bi-directional replication?

theherk avatar
theherk

Yes, and I would do my best to answer questions, but I can’t promise I remember many of the details. I was so confused by it at first that I had to ask some questions of the team at re:Invent. In my case, I also have to worry about tag replication, because I use tags to mark objects as virus-clean before anything other than the scanner can read them.

Jackie Virgo avatar
Jackie Virgo

Did you have your S3 buckets created already?

theherk avatar
theherk

No, I manage the buckets too. So I create bucket A in region A along with a role to allow replication to it. Then I create bucket B in region B, a role to allow replication to it, and add the replication rule to bucket B toward bucket A. Then I add the replication rule to bucket A toward bucket B.

Jackie Virgo avatar
Jackie Virgo

Ok that makes sense. Are you using a module or resource when you attach the replication rules? I was trying to do it all in 1 pass with the module in TF, but because the buckets don’t exist yet I can’t make that work.

theherk avatar
theherk

I think you’re right that you won’t be able to do that. In my case, that isn’t a concern because they are separate states. Since this is bidirectional and cross-region, and each region (in this case) is managed by a separate state, I couldn’t do them all at once anyway. To answer your question, I am using a module, but it is internal, so I can’t easily share it. I try to put everything I can into the public, but this has some business logic in the module. I gave a talk about it internally too with some nice diagrams. Would be good to generalize the module and share the diagrams if the time arises.

Jackie Virgo avatar
Jackie Virgo

Makes sense. Thanks for the input!

1
theherk avatar
theherk

There isn’t anything interesting in the replication part though. Here is the config for just that resource:

resource "aws_s3_bucket_replication_configuration" "this" {
  count = var.replica != null && var.versioning ? 1 : 0

  role   = var.replica.role_arn
  bucket = aws_s3_bucket.this.id

  rule {
    status = "Enabled"

    destination {
      bucket = var.replica.arn
    }

    dynamic "delete_marker_replication" {
      for_each = var.replica.delete_marker_replication ? [1] : []

      content {
        status = "Enabled"
      }
    }

    dynamic "filter" {
      for_each = var.replica.filters == null ? [1] : []

      content {}
    }

    dynamic "filter" {
      for_each = var.replica.filters != null ? [1] : []

      content {
        prefix = var.replica.filters.prefix != null && length(var.replica.filters.tags) == 0 ? var.replica.filters.prefix : null

        dynamic "tag" {
          for_each = var.replica.filters.prefix == null && length(var.replica.filters.tags) == 1 ? [var.replica.filters.tags[0]] : []

          content {
            key   = tag.key
            value = tag.value
          }
        }

        dynamic "and" {
          for_each = (var.replica.filters.prefix == null ? 0 : 1) + length(var.replica.filters.tags) > 1 ? [1] : []

          content {
            prefix = var.replica.filters.prefix
            tags   = var.replica.filters.tags
          }
        }
      }
    }

    dynamic "source_selection_criteria" {
      for_each = var.replica.modifications ? [1] : []

      content {
        replica_modifications {
          status = "Enabled"
        }
      }
    }
  }

  depends_on = [aws_s3_bucket_versioning.this]
}

with variable def:

variable "replica" {
  description = "Configuration for replication target."
  default     = null

  type = object({
    arn                       = string
    delete_marker_replication = optional(bool, true)
    modifications             = optional(bool, true)
    role_arn                  = string

    filters = optional(object({
      prefix = optional(string)
      tags   = optional(map(string), {})
    }))
  })
}

variable "versioning" {
  description = "Enable bucket versioning. defaults true"
  type        = bool
  default     = true
}

This is only part of the module and it is probably of no value to you, but just in case.

2024-06-18

Dhruv Tiwari avatar
Dhruv Tiwari

Hi, we are at the POC stage of implementing the RDS cluster through the cloudposse module. One critical requirement is to use RDS-managed Secrets Manager credentials for DBs. I see there is a PR for this feature: https://github.com/cloudposse/terraform-aws-rds-cluster/pull/218 If possible, can anyone share the approx ETA on this? (Would help in planning our POC accordingly)

jose.amengual avatar
jose.amengual

I reviewed it and it seems good, but it needs to pass the tests

Dhruv Tiwari avatar
Dhruv Tiwari

Thanks, will keep an eye on it

jose.amengual avatar
jose.amengual

do you want to create another PR addressing the feedback? and mention this current PR?

Dhruv Tiwari avatar
Dhruv Tiwari

I have subscribed to the above PR. I see there is one small change request for the default value of manage_admin_user_password, though I think the issue is with the type rather than the value: it should be boolean instead of string

jose.amengual avatar
jose.amengual

so master_password needs to be removed from the resource instantiation for this to work. If you set it to null it’s basically as if it were not there, but if you set it to "" then it has a value and it complains

Dhruv Tiwari avatar
Dhruv Tiwari

Yes, but shouldn’t manage_admin_user_password be of type bool instead of string? Here: https://github.com/cloudposse/terraform-aws-rds-cluster/blob/523bb16c94d0a72066408b11a4b41bee3db4e07a/variables.tf#L64 As it could be a bit misleading with the description: https://github.com/cloudposse/terraform-aws-rds-cluster/blob/523bb16c94d0a72066408b11a4b41bee3db4e07a/variables.tf#L66

And set the condition something like:

master_password = (!var.manage_admin_user_password && !local.ignore_admin_credentials) ? var.admin_password : null
Dhruv Tiwari avatar
Dhruv Tiwari

Hey @jose.amengual, there hasn’t been any update on this; is this still in the works?

jose.amengual avatar
jose.amengual

someone else will have to take it over

jose.amengual avatar
jose.amengual

you can go and create a new PR and mention this one as reference

Jason avatar

does anyone here work on terraform provider backend stuff with Go?

I’ve gotten to building my own provider using the HashiCups tutorial. First I followed it word for word for the demo HashiCups provider, and THAT WORKED. Then I went off, downloaded the framework again, and have been following it for my own provider, and it’s not working and I can’t figure out why. (edited)

It’s confidential code so I can’t share it publicly

I need help please

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Cloud Posse develops and maintains these two Terraform providers (written in Go):

https://github.com/cloudposse/terraform-provider-utils

https://github.com/cloudposse/terraform-provider-awsutils

cloudposse/terraform-provider-utils
cloudposse/terraform-provider-awsutils
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jason it’s not easy to understand the issues you are facing. Maybe if you provided more details, we could point you in the right direction

Jason avatar

It’s fine I got it working

Jason avatar

Not sure how but it started working

1
Jason avatar

Basically the stupid terraform provider is still trying to look up

registry.terraform.io/hashicorp/hashicups 

even though I’ve shoved this in the main.go part

hashicorp.com/edu/gccloud

and I changed my .terraformrc file to be this:

provider_installation {

  dev_overrides {
      "hashicorp.com/edu/gccloud" = "/home/jason/go/bin"
  }

  # For all other providers, install them directly from their origin provider
  # registries as normal. If you omit this, Terraform will _only_ use
  # the dev_overrides block, and so no other providers will be available.
  direct {}
}
Jason avatar

This is the error I am getting:

 Error: Inconsistent dependency lock file
│ 
│ The following dependency selections recorded in the lock file are inconsistent with the current configuration:
│   - provider registry.terraform.io/hashicorp/gccloud: required by this configuration but no version is selected
│   - provider registry.terraform.io/hashicorp/hashicups: required by this configuration but no version is selected
│ 
│ To make the initial dependency selections that will initialize the dependency lock file, run:
│   terraform init

there is no lock file, because when developing providers with dev_overrides you don’t run terraform init

andrew_pintxo avatar
andrew_pintxo

Hello, I am creating an ECS module

module "pdf_ecs_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.66.4"

with this attribute

task_policy_arns = [
    module.pdf_ecs_iam_policy.policy_arn
  ]

But it throws me this error:

Error: Invalid for_each argument
│
│ on .terraform/modules/pdf_ecs_task/main.tf line 162, in resource "aws_iam_role_policy_attachment" "ecs_task":
│ 162: for_each = local.create_task_role ? toset(var.task_policy_arns) : toset([])
│  ├────────────────
│  │ local.create_task_role is true
│  │ var.task_policy_arns is list of string with 1 element

What can be the problem? Where should I look? Thank you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ben Smith (Cloud Posse) any ideas on this one?

1
Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

It looks like this has been a problem with the module for a little while now:

• Count Bug
• Supposed Bugfix
• Fix for new bug
• Support list or map

I’d recommend updating to the latest module version, at least "0.72.0", which is what our component currently uses. It also looks like the map variable task_exec_policy_arns_map will provide more consistent results and less drift in the future, so something like:

task_exec_policy_arns_map = {
  "my-sid" = module.pdf_ecs_iam_policy.policy_arn
}
1
andrew_pintxo avatar
andrew_pintxo

Thank you for the explanation. For the moment I am forced to use this version of the module )) But I will convince my dev fellows to do upgrades ))

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Hmm, if you’re forced to use that version, maybe try hardcoding your task policy arns as a temporary debugging step. The logic defined here:

for_each = local.create_task_role ? toset(var.task_policy_arns) : toset([])
│  ├────────────────
│  │ local.create_task_role is true
│  │ var.task_policy_arns is list of string with 1 element

looks fine; you should be iterating over a single set of strings (the one arn). Perhaps try a hardcoded arn.

Though I would suggest making a new component or module if you could with the upgrade

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

This is our main.tf of our ecs-service component

Jason avatar

Do you have the module code? And the locals code?

Jason avatar

It’s how you’re passing something in

Jason avatar

You have a bool variable and you’re trying to pass in a list with 1 element?

jose.amengual avatar
jose.amengual

if I have a pepe.auto.tfvars file where I have attributes = [ "one", "two"] and an env.tfvars that I use with -var-file where I also have attributes = ["four"], will terraform merge the values of both attributes?

loren avatar

I believe the behavior is for specs evaluated “later” (-var-file) to override specs evaluated “earlier” (.auto.tfvars)

jose.amengual avatar
jose.amengual

I think in TF 0.12 this was possible

jose.amengual avatar
jose.amengual

but I can do a simple merge

loren avatar

Damn. 0.12 was 5 years ago. I don’t need that headache!

loren avatar

Looks like it was 0.12 that first had the current behavior, full override, no merge… https://developer.hashicorp.com/terraform/language/values/variables#variable-definition-precedence

Input Variables - Configuration Language | Terraform | HashiCorp Developer

Input variables allow you to customize modules without altering their source code. Learn how to declare, define, and reference variables in configurations.

jose.amengual avatar
jose.amengual

ohhh

jose.amengual avatar
jose.amengual

Well, I will have to do a merge(), not a big deal

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe it merges, but it’s a shallow merge. Only top-level keys, not a deep merge like with atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also be advised that merge() is likewise a shallow merge.
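A minimal sketch of what “shallow” means here, with hypothetical values:

locals {
  defaults = { attributes = ["one", "two"], nested = { a = 1, b = 2 } }
  override = { attributes = ["four"], nested = { a = 9 } }

  merged = merge(local.defaults, local.override)
  # => { attributes = ["four"], nested = { a = 9 } }
  # Only top-level keys are merged: "nested" is replaced wholesale, so "b" is lost.
}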

jose.amengual avatar
jose.amengual

then a join it will have to be

2024-06-19

Release notes from terraform avatar
Release notes from terraform
07:13:33 AM

v1.10.0-alpha20240619 1.10.0-alpha20240619 (June 19, 2024) EXPERIMENTS: Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.

ephemeral_values: This language experiment introduces a new special kind of value which Terraform allows to change between the plan phase and the apply phase, and between plan/apply rounds….

Terraform Settings - Configuration Language | Terraform | HashiCorp Developer

The terraform block allows you to configure Terraform behavior, including the Terraform version, backend, integration with HCP Terraform, and required providers.

Mehak avatar

Can someone help me with a Sentinel policy to enforce multi-AZ on RDS Aurora and Elasticsearch clusters? I will create the policy in TF Cloud.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you creating the clusters with Terraform?

If yes, you can add variable validation for these variables https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#rds-multi-az-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Validate modules with custom conditions | Terraform | HashiCorp Developer

Add condition blocks to a module that deploys an application in an AWS VPC to validate that DNS and EBS support are enabled, and that the appropriate number of subnets are configured.
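A minimal sketch of that variable-validation approach, with a hypothetical variable name:

variable "availability_zones" {
  description = "AZs for the Aurora cluster"
  type        = list(string)

  validation {
    # enforce multi-AZ at plan time by requiring at least two AZs
    condition     = length(var.availability_zones) >= 2
    error_message = "At least two availability zones are required; multi-AZ is enforced."
  }
}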

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using Atmos, you can also add OPA policies or JSONSchema to the components and validate the variables and even the correct combination of the variables

https://atmos.tools/core-concepts/components/validation

Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.

Mehak avatar

no, I just need a sentinel policy that can restrict elasticsearch clusters from being created if they are not multi-az

andrew_pintxo avatar
andrew_pintxo

Does cloudposse have a module to create a subdomain and attach it to an existing LoadBalancer?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Attaching a subdomain to a LoadBalancer is managed by external-dns

1
Release notes from terraform avatar
Release notes from terraform
03:03:37 PM

v1.9.0-rc2 No content.

Release v1.9.0-rc2 · hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Release v1.9.0-rc2 · hashicorp/terraform

aj_baller23 avatar
aj_baller23

Hi, I’m using the aws terraform provider and I need help adding multiple external repositories to aws_codeartifact_repository. The documentation says external_connections - An array of external connections associated with the repository. Only one external connection can be set per repository. Does the external connection take an array, or am I reading the documentation wrong? Thanks in advance

resource "aws_codeartifact_repository" "upstream" {
  repository = "upstream"
  domain     = aws_codeartifact_domain.test.domain
}

resource "aws_codeartifact_repository" "test" {
  repository = "example"
  domain     = aws_codeartifact_domain.example.domain

  external_connections {
    external_connection_name = "public:npmjs"
  }
}
loren avatar
AssociateExternalConnection - AWS CodeArtifact

Adds an existing external connection to a repository. One external connection is allowed per repository.

loren avatar


Adds an existing external connection to a repository. One external connection is allowed per repository.
A repository can have one or more upstream repositories, or an external connection.

aj_baller23 avatar
aj_baller23

is that a limitation of the terraform provider? You are able to add multiple external repositories from the AWS console:

loren avatar

i think the terraform docs are confused entirely because the aws docs are confused

aj_baller23 avatar
aj_baller23

I was confused reading the terraform doc lol cuz I was getting something different when reading the aws doc lol

loren avatar

from terraform’s perspective, that hcl is a block, and you can have multiple blocks of the same type. so if it works at all, the syntax would be something like:

resource "aws_codeartifact_repository" "test" {
  repository = "example"
  domain     = aws_codeartifact_domain.example.domain

  external_connections {
    external_connection_name = "public:npmjs"
  }

  external_connections {
    external_connection_name = "public:maven-central"
  }
}
loren avatar

the aws api doc is what i linked, and the wording is very close to the terraform doc

aj_baller23 avatar
aj_baller23

that’s what i tried but didn’t work for me

loren avatar

i imagine that whoever wrote the resource for the aws provider was relying on the aws api docs, which seem to be either wrong or just confusing

loren avatar

so, they might have an artificial limitation on the resource at the moment

aj_baller23 avatar
aj_baller23

thanks for the feedback

2024-06-20

Oleksandr Lytvyn avatar
Oleksandr Lytvyn

Please advise: does anyone know someone from the Terraform community or HashiCorp? We have 3 Terraform providers in the Registry; two of them are deprecated, and 1 is current. I’m trying to understand if there is a way to add some kind of message on the TF Registry page saying that the old TF provider is deprecated and that people should use the new one (with a link to it), and whom I should contact to make this happen.

PS. I understand this is probably somewhat not perfect place to post it, but giving it a shot anyways

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can email *Terraform Registry* at [email protected]

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’ve used it in the past

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Responses take a while.

Oleksandr Lytvyn avatar
Oleksandr Lytvyn

Thank you

jose.amengual avatar
jose.amengual

was there a trick to have locals scoped only to the .tf file where they are defined? Am I dreaming, or do I remember there was a way?

loren avatar

Nothing in terraform has ever worked that way, far as I’m aware

Joe Perez avatar
Joe Perez

Yeah, it’ll read/combine all the tf files in the current working directory

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, no such thing as a file-boundary scope

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What do you want to accomplish?

jose.amengual avatar
jose.amengual

could be handy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You could achieve something similar by convention, naming a local per file, and using it as a map
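A minimal sketch of that convention, with hypothetical names; Terraform still merges all files in the directory, but references stay traceable to one file:

# iam.tf: this file's values live under a single local map named after the file
locals {
  iam = {
    role_name   = "app-role"
    policy_name = "app-policy"
  }
}

# any other file references them as local.iam.role_name, making the origin obvious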

jose.amengual avatar
jose.amengual

I wanted to reuse local values to format a policy document, but one local feeds into the other

2024-06-21

Release notes from terraform avatar
Release notes from terraform
07:13:32 PM

v1.9.0-rc3 No content.

Release v1.9.0-rc3 · hashicorp/terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Release v1.9.0-rc3 · hashicorp/terraform

2024-06-24

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Terraform Roadmap - roadmap.sh

Step by step guide to learn Terraform in 2024. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.

2
1

2024-06-25

2024-06-26

andrew_pintxo avatar
andrew_pintxo

When creating ECS task with this module

module "pdf_ecs_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.73.0"

The task needs to run in a private subnet and be accessible over a load balancer. The module description says “Terraform module to create an ECS Service for a web app (task), and an ALB target group to route requests.” I am a bit confused: where is this ALB target group created, and does it have some output? I also can’t understand how to connect my ECS task with the Load Balancer. I found this attribute in the documentation:

ecs_load_balancers = [
    {
      elb_name = null
      target_group_arn = "what target group should I provide here?"
      container_name = "container name"
      container_port = 80
    }
  ]

and I need to pass an object like the one above, but I have no idea where to get this target group, cuz the module statement says it will be created ))) Can someone help with an explanation? Thank you
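For what it’s worth, the usual wiring (a sketch assuming a separate Cloud Posse ALB module provisions the target group; output names may differ by module version) passes the target group ARN from the ALB module into this input:

module "alb" {
  source  = "cloudposse/alb/aws"
  version = "1.11.1" # hypothetical version

  # ... vpc_id, subnets, security groups, etc. ...
}

module "pdf_ecs_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "0.73.0"

  # ... other inputs ...

  ecs_load_balancers = [
    {
      elb_name         = null
      # the ALB module creates a default target group and exports its ARN
      target_group_arn = module.alb.default_target_group_arn
      container_name   = "app"
      container_port   = 80
    }
  ]
}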

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

So, this module does need the support of other modules to work well. Rather than go extensively through it, it’s probably best to point you at the component that uses this module here: https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ecs-service/main.tf

```
# Generic non company specific locals

locals {
  enabled = module.this.enabled

  s3_mirroring_enabled = local.enabled && try(length(var.s3_mirror_name) > 0, false)

  service_container = lookup(var.containers, "service")
  # Get the first containerPort in var.container["service"]["port_mappings"]
  container_port = try(lookup(local.service_container, "port_mappings")[0].containerPort, null)

  assign_public_ip = lookup(local.task, "assign_public_ip", false)

  container_definition = concat(
    [for container in module.container_definition : container.json_map_object],
    [for container in module.datadog_container_definition : container.json_map_object],
    var.datadog_log_method_is_firelens ? [for container in module.datadog_fluent_bit_container_definition : container.json_map_object] : [],
  )

  kinesis_kms_id = try(one(data.aws_kms_alias.selected[*].id), null)

  use_alb_security_group = local.is_alb ? lookup(local.task, "use_alb_security_group", true) : false

  task_definition_s3_key     = format("%s/%s/task-definition.json", module.ecs_cluster.outputs.cluster_name, module.this.id)
  task_definition_use_s3     = local.enabled && local.s3_mirroring_enabled && contains(flatten(data.aws_s3_objects.mirror[*].keys), local.task_definition_s3_key)
  task_definition_s3_objects = flatten(data.aws_s3_objects.mirror[*].keys)

  task_definition_s3 = try(jsondecode(data.aws_s3_object.task_definition[0].body), {})

  task_s3 = local.task_definition_use_s3 ? {
    launch_type  = try(local.task_definition_s3.requiresCompatibilities[0], null)
    network_mode = lookup(local.task_definition_s3, "networkMode", null)
    task_memory  = try(tonumber(lookup(local.task_definition_s3, "memory")), null)
    task_cpu     = try(tonumber(lookup(local.task_definition_s3, "cpu")), null)
  } : {}

  task = merge(var.task, local.task_s3)

  efs_component_volumes      = lookup(local.task, "efs_component_volumes", [])
  efs_component_map          = { for efs in local.efs_component_volumes : efs["name"] => efs }
  efs_component_remote_state = { for efs in local.efs_component_volumes : efs["name"] => module.efs[efs["name"]].outputs }
  efs_component_merged = [
    for efs_volume_name, efs_component_output in local.efs_component_remote_state : {
      host_path = local.efs_component_map[efs_volume_name].host_path
      name      = efs_volume_name
      efs_volume_configuration = [
        # again this is a hardcoded array because AWS does not support multiple configurations per volume
        {
          file_system_id          = efs_component_output.efs_id
          root_directory          = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].root_directory
          transit_encryption      = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption
          transit_encryption_port = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].transit_encryption_port
          authorization_config    = local.efs_component_map[efs_volume_name].efs_volume_configuration[0].authorization_config
        }
      ]
    }
  ]
  efs_volumes = concat(lookup(local.task, "efs_volumes", []), local.efs_component_merged)
}

data "aws_s3_objects" "mirror" {
  count  = local.s3_mirroring_enabled ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  prefix = format("%s/%s", module.ecs_cluster.outputs.cluster_name, module.this.id)
}

data "aws_s3_object" "task_definition" {
  count  = local.task_definition_use_s3 ? 1 : 0
  bucket = lookup(module.s3[0].outputs, "bucket_id", null)
  key    = try(element(local.task_definition_s3_objects, index(local.task_definition_s3_objects, local.task_definition_s3_key)), null)
}

module "logs" {
  source  = "cloudposse/cloudwatch-logs/aws"
  version = "0.6.8"

  # if we are using datadog firelens we don't need to create a log group
  count = local.enabled && (!var.datadog_agent_sidecar_enabled || !var.datadog_log_method_is_firelens) ? 1 : 0

  stream_names      = lookup(var.logs, "stream_names", [])
  retention_in_days = lookup(var.logs, "retention_in_days", 90)

  principals = merge({ Service = ["ecs.amazonaws.com", "ecs-tasks.amazonaws.com"] }, lookup(var.logs, "principals", {}))

  additional_permissions = concat([
    "logs:CreateLogStream",
    "logs:DeleteLogStream",
  ], lookup(var.logs, "additional_permissions", []))

  context = module.this.context
}

module "roles_to_principals" {
  source   = "../account-map/modules/roles-to-principals"
  context  = module.this.context
  role_map = {}
}

locals {
  container_chamber = {
    for name, result in data.aws_ssm_parameters_by_path.default :
    name => { for key, value in zipmap(result.names, result.values) : element(reverse(split("/", key)), 0) => value }
  }

  container_aliases = { for name, settings in var.containers : settings["name"] => name if local.enabled }

  container_s3 = { for item in lookup(local.task_definition_s3, "containerDefinitions", []) : local.container_aliases[item.name] => { container_definition = item } }

  containers_priority_terraform = { for name, settings in var.containers : name => merge(local.container_chamber[name], lookup(local.container_s3, name, {}), settings) if local.enabled }
  containers_priority_s3        = { for name, settings in var.containers : name => merge(settings, local.container_chamber[name], lookup(local.container_s3, name, {})) if local.enabled }
}

data "aws_ssm_parameters_by_path" "default" {
  for_each = { for k, v in var.containers : k => v if local.enabled }
  path     = format("/%s/%s/%s", var.chamber_service, var.name, each.key)
}

locals {
  containers_envs = merge([
    for name, settings in var.containers : {
      for k, v in lookup(settings, "map_environment", {}) : "${name},${k}" => v if local.enabled
    }
  ]...)
}

data "template_file" "envs" {
  for_each = { for k, v in local.containers_envs : k => v if local.enabled }

  template = replace(each.value, "$$", "$")

  vars = {
    stage         = module.this.stage
    namespace     = module.this.namespace
    name          = module.this.name
    full_domain   = local.full_domain
    vanity_domain = var.vanity_domain
    # service_domain uses whatever the current service is (public/private)
    service_domain         = local.domain_no_service_name
    service_domain_public  = local.public_domain_no_service_name
    service_domain_private = local.private_domain_no_service_name
  }
}

locals {
  env_map_subst = { for k, v in data.template_file.envs : k => v.rendered }
  map_secrets = {
    for k, v in local.containers_priority_terraform :
    k => lookup(v, "map_secrets", null) != null ? zipmap(
      keys(lookup(v, "map_secrets", null)),
      formatlist("%s/%s", format("arn:aws:ssm:%s:%s:parameter", var.region, module.roles_to_principals.full_account_map[format("%s-%s", var.tenant, var.stage)]), values(lookup(v, "map_secrets", null)))
    ) : null
  }
}

module "container_definition" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "0.61.1"

  for_each = { for k, v in local.containers_priority_terraform : k => v if local.enabled }

  container_name = each.value["name"]

  container_image = lookup(each.value, "ecr_image", null) != null ? format(
    "%s.dkr.ecr.%s.amazonaws.com/%s",
    module.roles_to_principals.full_account_map[var.ecr_stage_name],
    coalesce(var.ecr_region, var.region),
    lookup(local.containers_priority_s3[each.key], "ecr_image", null)
  ) : lookup(local.containers_priority_s3[each.key], "image")

  container_memory             = each.value["memory"]
  container_memory_reservation = each.value["memory_reservation"]
  container_cpu                = each.value["cpu"]
  essential                    = each.value["essential"]
  readonly_root_filesystem     = each.value["readonly_root_filesystem"…
```

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I think the documentation could be improved on the module for sure, as it should really point at the supporting modules that create the ALB and other backing resources

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)
module "alb_ingress" {
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse) do we need a task to update the documentation on the module?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

It doesn’t hurt to make one at this point. Just be sure to emphasize these points in the ticket:

• the complete examples in most modules are really meant for terratest, and likely don’t express valid use cases. They shouldn’t be used as real-world examples unless the module is rather simple
• the README.md refers to the alb module, but doesn’t actually provide sample configuration for it. That should be added
• the README.md ought to point to the ecs-service component in the terraform-aws-components repo, which is a very complete and well documented example of how to use this module.

I’d estimate it’s an hour of work to update the module’s docs

1
Release notes from terraform avatar
Release notes from terraform
05:03:31 PM

v1.9.0 1.9.0 (June 26, 2024) If you are upgrading from an earlier minor release, please refer to the Terraform v1.9 Upgrade Guide. NEW FEATURES:

Input variable validation rules can refer to other objects: Previously input variable validation rules could refer only to the variable being validated. Now they are general expressions, similar to those elsewhere in a module, which can refer to other…

2024-06-27

Kyle Johnson avatar
Kyle Johnson

is terraform cloud effectively no longer offering free state storage? logged in to it for the first time in many months and everything is plastered with ads for me to “upgrade to the new free plan” which stores… 500 objects

we dont use it to execute plans or anything, its solely used for remote state storage

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are not using terraform cloud, but if you are only using it for the state storage, it looks like you can migrate it to an S3 backend (or any other cloud backend where you provision your infrastructure) - fewer dependencies on 3rd-parties @Kyle Johnson

1
george.m.sedky avatar
george.m.sedky

We forced an LLM to generate 100% syntactically valid Terraform and it did this while we were testing its limits

1

2024-06-28

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@linen

jose.amengual avatar
jose.amengual

So, I know there is a difference between the for_each on a resource and the for_each on a dynamic block in the way they can deal with list(object()). I’m pretty sure someone at some point posted a link with a pretty detailed explanation of why; could you share that again? If not, reply to this. Thanks

loren avatar

resources:

The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set.

https://developer.hashicorp.com/terraform/language/meta-arguments/for_each

The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer

The for_each meta-argument allows you to manage similar infrastructure resources without writing a separate block for each one.

loren avatar

dynamic blocks:

Since the for_each argument accepts any collection or structural value, you can use a for expression or splat expression to transform an existing collection.

https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks

Dynamic Blocks - Configuration Language | Terraform | HashiCorp Developer

Dynamic blocks automatically construct multi-level, nested block structures. Learn to configure dynamic blocks and understand their behavior.

jose.amengual avatar
jose.amengual

ok, so now the docs do explain it well

jose.amengual avatar
jose.amengual

thanks Loren

loren avatar

“why the difference” is because for resources the for_each key is the resource address and must be unique. that means sets and maps meet the key uniqueness requirement. lists do not.

loren avatar

for dynamic blocks, there is no resource address and so there is no uniqueness requirement, so any collection-type is allowed. and moreover, dynamic blocks may have an ordered component, which can be honored by the list type, if the provider honors ordering for the resource type
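A quick illustration of both cases, with hypothetical values:

# Resource for_each: needs unique keys, so a list must be converted to a set or map
resource "aws_iam_user" "this" {
  for_each = toset(["alice", "bob"]) # a bare list would be rejected
  name     = each.value
}

# Dynamic block for_each: any collection works, and list order is preserved
resource "aws_security_group" "this" {
  name = "example"

  dynamic "ingress" {
    for_each = [80, 443] # plain list is fine here
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/16"]
    }
  }
}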

jose.amengual avatar
jose.amengual

thanks for the answer Loren

2024-06-30
