#terraform (2024-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-09-02

andrew_pintxo avatar
andrew_pintxo

Hi, I have a question about the "cloudposse/backup/aws" module. I would like to create a separate plan under the same vault, and that plan would use a different set of rules. Is that possible? And is it good practice to have S3 and RDS backups under the same vault, but with different sets of rules and plans? Thank you

1
Abhigya Wangoo avatar
Abhigya Wangoo

Hey everyone, has anyone found any good no-code alternatives to Terraform? It could be as simple as just supporting resource version control.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@george.m.sedky is working on something, I think?

1
george.m.sedky avatar
george.m.sedky

hey @Abhigya Wangoo it’s not no-code; it’s visual, you can import stuff, and write code to edit resources if needed. but give it a try and let me know what you think

if you’re looking for something pure no-code try Brainboard

Stakpak - Design Studio for Terraform with a Copilot

A Hybrid Intelligence that helps you create production-ready infrastructure, using knowledge contributed by you and other DevOps experts all over the world. A Hive Mind for DevOps and cloud-native infrastructure design.

Brainboard: Cloud Infrastructure Designer

Brainboard is an AI driven platform to visually design, generate terraform code and manage cloud infrastructure, collaboratively.

Paweł Rein avatar
Paweł Rein

Bumping this question. @Abhigya Wangoo did you have success with any of the tools? Has anyone else tried no-code / LLM-based IaC tools with success? Is there a tool that allows running on your own infra with your own LLM?

george.m.sedky avatar
george.m.sedky

@Paweł Rein how much GPU RAM do you have available for LLM-based tools? do you mean running it on your laptop? or on a self-hosted cloud gpu?

Paweł Rein avatar
Paweł Rein

I’m planning on experimenting with vLLM on EKS. I haven’t yet thought about how many resources I can allocate.

george.m.sedky avatar
george.m.sedky

for Stakpak, code generation specifically uses a 70B parameter model + a mix of techniques to beat GPT4-o1 performance on generating/modifying Terraform code (this involves things like documentation RAG, output validation against the resource schema, and grammar-bounded generation; an example comparison here)

the 70B parameter model at full precision (FP16) requires 140 GB of GPU RAM (70 billion × 2 bytes, in gigabytes)

the 70B parameter model at half precision (FP8) requires 70 GB of GPU RAM (70 billion × 1 byte, in gigabytes)

to give you an idea of how many resources are required

1
george.m.sedky avatar
george.m.sedky

I can help your team self-host it if you’re interested in running this on-prem, but some features like doc generation use GPT-4; we haven’t evaluated the performance of open-weight models on doc generation yet

Paweł Rein avatar
Paweł Rein

Thanks! I’ll talk to the team, now that I know the requirements. Stakpak looks really good, at least from the videos on your channel.

Paweł Rein avatar
Paweł Rein

What I think could address the security need without having to run it on-premises: some way to “anonymize” the imported infra (redact names, IDs) before it leaves my git, and de-anonymize it when it comes back to my git as a PR. That way, in case of a leak of our infra code, the risk is limited. Would that be something you would be willing to maintain as an additional tool? I imagine it would be stateful. Not sure, but maybe a GHA or GH App could do it, with state in GH?

george.m.sedky avatar
george.m.sedky

We use a Stakpak GitHub app to create PRs. Stakpak, however, needs to see your code to index it and pass it to LLMs so that it can help you make changes or add stuff.

Having access to the code/configurations also allows you to edit code in Stakpak, edit AI recommendations, etc. before persisting changes in a PR.

The code itself, however, is transferred over TLS and stored encrypted on disk (like you’d secure any kind of sensitive user data). We’re also working on getting SOC 2 to make it easier for people to trust us, but I know eventually some people would still want to self-host it.

george.m.sedky avatar
george.m.sedky

on another note, we don’t show one user’s private code to other users, and we don’t train LLMs on customer data.

2024-09-03

Jamie Jackson avatar
Jamie Jackson

hi folks, i’m working with the cloudposse/efs/aws module. we did some console tweaks that we’re trying to reflect in the TF but i’m struggling.

tf plan shows three diffs like this, because the tf doesn’t have the security group that was added in the console.

  ~ resource "aws_efs_mount_target" "default" {
        id                     = "fsmt-092000dbd046024f2"
      ~ security_groups        = [
          - "sg-0473cc37c73272716",
            # (1 unchanged element hidden)
        ]
        # (10 unchanged attributes hidden)
    }
Jamie Jackson avatar
Jamie Jackson

i thought that adding this might help but it doesn’t seem to have any effect:

  allowed_security_group_ids = [
    "sg-0473cc37c73272716"
  ]
Jamie Jackson avatar
Jamie Jackson

i used the wrong argument. never mind.
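A minimal sketch of the likely fix, assuming the module version in use follows Cloud Posse’s newer security-group inputs (associated_security_group_ids attaches existing security groups to the mount targets instead of creating ingress rules for them; the exact variable name should be checked against the module’s variables.tf):

  associated_security_group_ids = [
    "sg-0473cc37c73272716"
  ]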

Jamie Jackson avatar
Jamie Jackson

same module, different question. what’s the right way to incorporate this console rule addition?

  # module.efs.module.security_group.aws_security_group.default[0] has changed
  ~ resource "aws_security_group" "default" {
        id                     = "sg-06fa9a39d9a9c6f77"
      ~ ingress                = [
          + {
              + cidr_blocks      = [
                  + "10.103.21.119/32",
                ]
              + description      = "Allow jenkins-data-sync to access EFS"
              + from_port        = 2049
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 2049
            },
            # (1 unchanged element hidden)
        ]
        name                   = "terraform-20240722194649647100000001"
        tags                   = {}
        # (8 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }
Jamie Jackson avatar
Jamie Jackson

adding this:

  additional_security_group_rules = [
...
    {
      type      = "ingress"
      from_port = 2049
      to_port   = 2049
      protocol  = "tcp"
      cidr_blocks = [
        "10.103.21.119/32",
      ]
    },
  ]

yields this:


  # module.efs.module.security_group.aws_security_group_rule.dbc["_list_[1]"] will be created
  + resource "aws_security_group_rule" "dbc" {
      + cidr_blocks              = [
          + "10.103.21.119/32",
        ]
      + description              = "Allow jenkins-data-sync to access EFS"
      + from_port                = 2049
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = "sg-06fa9a39d9a9c6f77"
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 2049
      + type                     = "ingress"
    }

which i’m not sure is expected or not.

Jamie Jackson avatar
Jamie Jackson

never mind this one, too.

2024-09-04

2024-09-05

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, I’m having an error with the

cloudposse/ec2-client-vpn/aws

When I try to use the module, I get an error from the awsutils dependency:

│ Error: Invalid provider configuration
│ 
│ Provider "registry.terraform.io/cloudposse/awsutils" requires explicit
│ configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.

This is not a dependency in the module and the examples have no reference to this provider. I have configured the provider and it still fails with the same error. Any hints?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Juan Pablo Lorier did you add this to the versions.tf file

terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
    awsutils = {
      source  = "cloudposse/awsutils"
      version = ">= 0.15.0"
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and this to the providers.tf file

provider "aws" {
  region = var.region

  assume_role {
    role_arn = ...
  }
}

provider "awsutils" {
  region = var.region

  assume_role {
    role_arn = ...
  }
}
Juan Pablo Lorier avatar
Juan Pablo Lorier

the provider configuration is empty as I use OIDC to authenticate

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, you just need to add (assuming the role is not related to how to configure the provider itself)

provider "awsutils" {
  region = var.region
}
1
Juan Pablo Lorier avatar
Juan Pablo Lorier

Thanks! I might open an issue to add that to the documentation. (both to the provider and to the vpn client as a dependency)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to do this only if you are using some old version of the awsutils provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the latest version does not need the region config

Juan Pablo Lorier avatar
Juan Pablo Lorier

I’m using the latest version of the module. But it’s not the module you pointed to.

https://registry.terraform.io/modules/cloudposse/ec2-client-vpn/aws/latest

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if you are using the latest version and it still requires the region config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i pointed to the component that wraps the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yes, the module https://github.com/clouddrove/terraform-aws-client-vpn does not have the provider config (this needs to be fixed)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the component (root-module), the provider config is added

awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.15.0"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why the component works

Juan Pablo Lorier avatar
Juan Pablo Lorier

I’m using latest version. 1.0.0 and I’m facing the issue there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your root-module (that uses https://github.com/clouddrove/terraform-aws-client-vpn) try to add

awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.15.0"
}
Juan Pablo Lorier avatar
Juan Pablo Lorier

about the clouddrove link, I pasted it by mistake, I was looking for alternatives to cloudposse as I was not able to get it to work

Juan Pablo Lorier avatar
Juan Pablo Lorier

source  = "cloudposse/ec2-client-vpn/aws"
version = "1.0.0"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your module, just add the awsutils config to versions.tf

Juan Pablo Lorier avatar
Juan Pablo Lorier

it failed with only versions.tf. It worked when I added the provider config

Juan Pablo Lorier avatar
Juan Pablo Lorier

awsutils = {
  source  = "cloudposse/awsutils"
  version = ">= 0.19.1"
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Juan Pablo Lorier avatar
Juan Pablo Lorier

I see, but somehow, terraform was complaining. Maybe it’s related to using OIDC in terraform cloud?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should not be related to OIDC, but I’m glad you made it work

Juan Pablo Lorier avatar
Juan Pablo Lorier

thanks for the help.

1
Juan Pablo Lorier avatar
Juan Pablo Lorier

@Andriy Knysh (Cloud Posse) Sorry to bug you with this non-Terraform question. The module creates the certs and stores them in Parameter Store, but it creates only the CA, root, and server certs. Do I need to create the client certs manually?

2024-09-07

Veerapandian M avatar
Veerapandian M

I am on a team looking for help with a YAML pipeline for deploying a Next.js application from Azure DevOps to Azure Static Web Apps.

2024-09-11

Release notes from terraform avatar
Release notes from terraform
01:23:33 PM

v1.10.0-alpha20240911 1.10.0-alpha20240911 (September 11, 2024) NEW FEATURES:

Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays ephemeral outputs. The value of an ephemeral output is always null unless a plan or apply is being run. Note that terraform output (without the -json flag) does not yet display ephemeral…


1
susie-h avatar
susie-h

Is there a script that converts json into the formatting expected here? https://github.com/cloudposse/terraform-aws-iam-policy/blob/main/examples/complete/fixtures.us-east-2.tfvars

I tried this online one but it’s different from what the module expects https://flosell.github.io/iam-policy-json-to-terraform/

iam-policy-json-to-terraform - Easily convert AWS IAM policies to Terraform HCL

This tool converts standard IAM policies in JSON format (like what you’d find in the AWS docs) into more terraform native aws_iam_policy_document data source code

kevcube avatar
kevcube

I don’t think jsondecode() will give you exactly what you want, so within terraform this isn’t easy to solve. You can open a PR for a new variable accepting JSON input and I will review it!


kevcube avatar
kevcube

As you see here, the module eventually converts inputs to JSON, so if you’d rather just provide the JSON yourself, that would be a good feature to add

susie-h avatar
susie-h

i was looking for something like this but in the syntax that the module expects. it’s slightly different from what terraform’s resource expects by default. i was able to use that converter with a couple of string replaces https://iampolicyconverter.com/

Convert IAM Policy JSON to Terraform | IAM Policy Converter

Effortlessly convert AWS IAM Policy JSON to Terraform AWS policy documents with our simple and effective tool. Simplify your infrastructure management process today.

Andrey Klyukin avatar
Andrey Klyukin

Hello everyone. Has anyone had to rotate an AWS access key with the cloudposse/terraform-aws-iam-system-user module? I ran into this problem when trying to rotate a key. To do this, I tried the following:
• Manually create a new key
• Manually update the new key and new key_secret in SSM
• Delete the old key from the state

 terraform state rm 'module.system_user.aws_iam_access_key.default[0]'

• Import the new key

terraform import 'module.system_user.aws_iam_access_key.default[0]' <new_key_id>

All of the above was successfully completed

But then when I try to do plan or apply I get this error

│ Error: Invalid combination of arguments
│ 
│   with module.system_user.module.store_write[0].aws_ssm_parameter.default["/<ssm-path>/secret_access_key"],
│   on .terraform/modules/system_user.store_write/main.tf line 13, in resource "aws_ssm_parameter" "default":
│   13: resource "aws_ssm_parameter" "default" {
│ 
│ "insecure_value": one of `insecure_value,value` must be specified
╵
╷
│ Error: Invalid combination of arguments
│ 
│   with module.system_user.module.store_write[0].aws_ssm_parameter.default["/<ssm-path>/secret_access_key"],
│   on .terraform/modules/system_user.store_write/main.tf line 21, in resource "aws_ssm_parameter" "default":
│   21:   value           = each.value.value
│ 
│ "value": one of `insecure_value,value` must be specified

And I don’t understand how to fix it. As far as I can see, there are still records about the old access key in the state file. How can I update them correctly?

The user is created with the following parameters:

module "system_user" {
  source  = "git::<https://github.com/cloudposse/terraform-aws-iam-system-user.git?ref=tags/1.2.0>"
  context = module.label.context

  ssm_base_path = "/${local.ssm_params_prefix}"
}

PS: it’s not possible to remove the old key and then create a new one, because the key is used by a running application that can’t have downtime.

I would be grateful for any help

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The aws_iam_access_key docs say that secret and ses_smtp_password_v4 are not available for imported resources.

I think the procedure for rotating the key would simply be:
• terraform state rm 'module.system_user.aws_iam_access_key.default[0]' to remove the current key from Terraform control
• terraform apply to create the new key
• Later, delete the old key manually via AWS

Andrey Klyukin avatar
Andrey Klyukin

Thank you very much, @Jeremy G (Cloud Posse)! Everything worked great!

1

2024-09-12

Mike avatar

Hi Team, my company is deploying infra-as-code pipelines into AWS using GitLab. We are reading lots of platform engineering blogs, and there are lots of different choices to make. What is the guidance on the latest and greatest to support multiple accounts? We are currently thinking of self-hosted runners, using OIDC to auth to AWS accounts, with a simple gitlab-ci.yml to run terraform plan and apply once the MR is approved. Any big issues here? We are also considering Atlantis (but are unsure about a public webhook into our build account), and have been pointed to Atmos also. Any tips here would be great!

Michael avatar
Michael

Hey Mike!

I used a similar setup in a previous role, and it worked well enough, but it depends on how much you plan to scale your infrastructure repository. As our monorepo grew, we ran into limitations with the pipelines, and GitLab’s child pipelines could only take us so far. Our biggest regret was relying on GitLab’s managed state, which became a bottleneck when we moved to a polyrepo pattern for our microservices. This also led to dependency hell as we tried to maintain consistency across all repositories.

I’ve always been a fan of how Atmos offers straightforward design patterns and inheritance, making monorepo management much easier. However, not much work has been done with Atmos in the GitLab ecosystem, so documentation might be a bit scarce. But in the long run, I believe it will make scaling much smoother.

Just my two cents. Feel free to DM me if you want to chat more!

1
1
Tyrone Meijn avatar
Tyrone Meijn


Our biggest regret was relying on GitLab’s managed state, which became a bottleneck when we moved to a polyrepo pattern for our microservices.
Hey @Michael could you elaborate on why this became a bottleneck? I’m actually really liking the feature since you do not have to think about where to host the state and the integrated permission management. I use it for personal projects, however not (yet) at scale so I’m curious what you found out.

Michael avatar
Michael

It’s super easy to set up and simple to use, but when you begin to scale it at an enterprise level, a few things become apparent:

The state file is protected by the same permissions GitLab uses, so everyone with maintainer access will be able to access the state. This also comes into play when you need to use remote state, because the user will need read access to utilize that feature, which could lead to over-provisioning of permissions. Other backends typically offer more granular access controls.

We also found remote state operations to be cumbersome across a large org, and the development cycle was longer because our changes needed to be pushed to the pipeline to actually test the code.

susie-h avatar
susie-h

How can I override the name variable from the concatenation of provided variables to a specific string I choose? For example, the module concatenates osprey-lb-policy-aws-load-balancer-controller@all, but I want to just call it “MyPolicy”.

https://github.com/cloudposse/terraform-aws-eks-iam-role/tree/main

1
Fizz avatar

In that module, check line 191 in context.tf. The label_order variable lets you define what gets included in the id attribute

RB avatar

It uses a different null label for it.

https://github.com/cloudposse/terraform-aws-eks-iam-role/blob/a217e963b61d3a05f2963c529bd18eb9c9d04fda/main.tf#L10

You could technically override the order to remove the attributes, however, you’d be impacting both the policy name and the iam role name

susie-h avatar
susie-h

I’m trying to keep the tagging that context.tf provides but override the name entirely. I want to provide a totally custom string irrelevant to the label order.

RB avatar

Understood. This is why you’d have to contribute a change to the specific null label module to get the intended effect.

RB avatar

See these code blocks

https://github.com/cloudposse/terraform-aws-eks-iam-role/blob/a217e963b61d3a05f2963c529bd18eb9c9d04fda/main.tf#L47-L60

module "service_account_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  # To remain consistent with our other modules, the service account name goes after
  # user-supplied attributes, not before.
  attributes = [local.service_account_id]

  # The standard module does not allow @ but we want it
  regex_replace_chars = "/[^-a-zA-Z0-9@_]/"
  id_length_limit     = 64

  context = module.this.context
}

https://github.com/cloudposse/terraform-aws-eks-iam-role/blob/a217e963b61d3a05f2963c529bd18eb9c9d04fda/main.tf#L107-L109

resource "aws_iam_policy" "service_account" {
  count       = local.iam_policy_enabled ? 1 : 0
  name        = module.service_account_label.id
RB avatar

You could expose a new input for the policy name perhaps

RB avatar

or you can avoid passing in the aws_iam_policy_document to disable the creation of the policy.

Then you can create the policy and attach it from outside of the module.
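A minimal sketch of that last approach, with hypothetical resource names and a placeholder for the module’s role-name output (outputs and inputs vary by module version):

resource "aws_iam_policy" "custom" {
  # fully custom name, independent of the null-label ID
  name   = "MyPolicy"
  policy = data.aws_iam_policy_document.custom.json
}

resource "aws_iam_role_policy_attachment" "custom" {
  # attach to the role created by the eks-iam-role module
  role       = module.eks_iam_role.service_account_role_name
  policy_arn = aws_iam_policy.custom.arn
}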

2024-09-13

2024-09-17

tamsky avatar

https://medium.com/thousandeyes-engineering/scaling-terraform-at-thousandeyes-b2a581b8b0b0 — I don’t think this has been discussed here yet (I checked the archives)… this reads like an opinionated implementation of the terraform preprocessor pattern. Interested in comments/discussion/comparison vs other solutions.

Scaling Terraform at ThousandEyes

by Ricard Bejarano, Lead Site Reliability Engineer, Infrastructure at Cisco ThousandEyes


tamsky avatar
Terraform Compiler Pattern: A maintainable and scalable architecture for Terraform

In this post, I will introduce a Terraform design pattern that I have been using successfully. This pattern has certain advantages …

Paweł Rein avatar
Paweł Rein

It would be great to hear about experiences from someone who has tried, for example, Terramate or Cisco Stacks.

2024-09-18

Release notes from terraform avatar
Release notes from terraform
08:43:28 AM

v1.9.6 1.9.6 (September 18, 2024) BUG FIXES:

plan renderer: Render complete changes within unknown nested blocks. (#35644)

plan renderer: Fix crash when attempting to render unknown nested blocks that contain attributes forcing resource replacement. (#35644)

plan renderer: render unknown nested blocks properly by liamcervante · Pull Request #35644 · hashicorp/terraform

This PR updates the rendering logic so that it properly renders the contents of a nested block that is becoming unknown. Previously, we’d only render a single item for the whole nested block…

Zing avatar

hey there. have a question around eks managed node groups and launch templates / bootstrapping (user data)

• is it better to use a custom launch template or the eks default one?

• how do i omit the second block device mapping when using bottlerocket, and use instance store volumes instead? (local NVME SSDs) bottlerocket just released support for local NVMEs, and i’d like to avoid that second EBS for the data vol. https://github.com/bottlerocket-os/bottlerocket/releases/tag/v1.22.0

i think i might just need to use virtual_name or no_device for the second block mapping? apparently NVME instances are auto configured… but im not too sure

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Release notes from terraform avatar
Release notes from terraform
01:33:32 PM

v1.10.0-alpha20240918 1.10.0-alpha20240918 (September 18, 2024) NEW FEATURES:

Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays ephemeral outputs. The value of an ephemeral output is always null unless a plan or apply is being run. Note that terraform output (without the -json) flag does not yet display ephemeral…

Rishav avatar

Ooh, “ephemeral values” sounds like a potential shift in paradigm with regards to how secrets handling can be done via Terraform without exposure in the state file.

4
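To make the new feature concrete, a minimal sketch of what the release notes describe (the variable and output names here are illustrative only):

variable "db_password" {
  type      = string
  ephemeral = true # usable during plan/apply, but never persisted to the plan or state files
}

output "db_password" {
  value     = var.db_password
  ephemeral = true # rendered as null by terraform output -json unless a plan or apply is running
}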
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, pretty awesome!

2024-09-19

Soren Jensen avatar
Soren Jensen

I have made a list of api_passwords that I loop over, creating PostgreSQL roles for each of our APIs. The problem is that when I deploy, I’m getting write conflicts in the database. Does anyone know how I can keep my loop, but run the iterations sequentially rather than in parallel?

# Create unique PostgreSQL role for each API
resource "postgresql_role" "api_read_write_roles" {
  for_each           = toset(var.api_list)
  name               = "${each.key}_read_write_role"
  password           = local.api_passwords[each.key]
  encrypted_password = true
  login              = true
  create_database    = false
  superuser          = false
  depends_on         = [data.terraform_remote_state.rds_postgresql]
}

# Grant database-level privileges (CONNECT) for each API
resource "postgresql_grant" "database_grants" {
  for_each    = toset(var.api_list)
  database    = "postgres"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "database"
  privileges  = ["CONNECT"]
  depends_on  = [postgresql_role.api_read_write_roles]
}

# Grant schema-level privileges (USAGE) for each API
resource "postgresql_grant" "schema_grants" {
  for_each    = toset(var.api_list)
  database    = "postgres"
  schema      = "my_schema"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "schema"
  privileges  = ["USAGE"]
  depends_on  = [postgresql_grant.database_grants]
}

# Grant table-level privileges (SELECT, INSERT, UPDATE) for each API
resource "postgresql_grant" "table_grants" {
  for_each    = toset(var.api_list)
  database    = "postgres"
  schema      = "my_schema"
  role        = postgresql_role.api_read_write_roles[each.key].name
  object_type = "table"
  privileges  = ["SELECT", "INSERT", "UPDATE"]
  depends_on  = [postgresql_grant.schema_grants]
}

I get the following type of error for the schema and table grants for a random number of APIs; rerunning the deployment a few times eventually deploys all the resources, but it would be great to have a smoother pipeline.

│ Error: could not execute revoke query: pq: tuple concurrently updated
│ 
│   with postgresql_grant.table_grants["ingestion_api"],
│   on postgres_roles.tf line 35, in resource "postgresql_grant" "table_grants":
│   35: resource "postgresql_grant" "table_grants" {
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov @Ben Smith (Cloud Posse) @Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I believe the relevant issue is here: https://github.com/hashicorp/terraform/issues/30841

#30841 Execution order within `for_each` loop

Current Terraform Version

v1.1.7

Use-cases

for_each is great for reducing the number of resource blocks within your modules.
But sometimes one is forced to split a for_each resource into multiple resources because you need depends_on.

Assume you have ten helm charts to install, each chart depends on the previous chart.
In an ideal world, you would simply do this:

resource "helm_release" "my_charts" {
  for_each = var.my_charts

  name    = each.value.name
  version = each.value.version
  ...
}

As far as I know, this won’t work when there are dependencies between the objects in var.my_charts, so in the case where there are 10 helm charts that need to be applied in order, you would need ten separate resource blocks.

Proposal

It would be nice if one could configure the order in which a for_each loop should be applied.

An idea that comes to mind is that resources/modules that accept for_each also accept an optional list of keys that determines execution order, e.g.:

resource "helm_release" "my_charts" {
  for_each = var.my_charts
  for_each_order = [
    "chart1",
   " chart2",
   ...
   "chart10"
  ]

  name    = each.value.name
  version = each.value.version
  ...
}

A more flexible approach that could cover situations where the keys are unknown is to filter based on some key within each object’s values:

resource "helm_release" "my_charts" {
  for_each = var.my_charts
  for_each_priority = each.value.sync_order

  name    = each.value.name
  version = each.value.version
  ...
}

1
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

For what it’s worth, there are two ways I could think of right off the bat…

• put all these users in a separate root module and run that with parallelism of 1

• if you use atmos, you can just make instances of the users as components, then have a workflow that runs over all of them
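For the first option, the concurrency can be capped at apply time with Terraform’s -parallelism flag, so the grants run one at a time (a minimal sketch of the trade-off: slower applies in exchange for avoiding the concurrent-update errors):

terraform apply -parallelism=1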

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

it would likely be heavy handed to set up atmos just for this one scenario, but if you already have atmos set up, then it’s not as hard

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

one thing that surprises me, I haven’t heard anyone complain about our component that implements this using a module instance per user

Soren Jensen avatar
Soren Jensen

Interesting, I will have a look at your module and compare. Also, using the module option in a different place: I should have thought of that instead of looping over the var.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

^ I’ve run that before and not hit that error. Some discrepancies to keep in mind:

• the component is for ‘aurora’, but basically just uses the postgresql provider (the aws provider saves secrets to ssm)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

• the component does a lot of smaller operations per user on those modules, so likely there’s less of a chance that things happen concurrently

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I hope that helps out

Soren Jensen avatar
Soren Jensen

Me too, I will let you know when I get a chance to test it out.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

you could likely just take our component module and strip out the ssm/iam resources so you could try it on any postgresql backend

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

but this is a long-standing complaint against terraform. If not postgresql users, folks also complain about ip assignment and other resources that are integral to devops

Soren Jensen avatar
Soren Jensen

It’s little comfort to know it’s not a me issue, but something everyone suffers with..

2024-09-24

2024-09-25

Chris M avatar
Chris M

@Erik Osterman (Cloud Posse) just a quick one, we’re using the pr branch in prod without issue https://github.com/cloudposse/terraform-aws-eks-node-group/pull/198

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Chris M Thank you for the PR.

We like to keep changes minimal, especially to the UserData, so that people running automatic upgrades do not get their ~clusters~ node groups rebuilt. Please try this PR and see if it solves your issue: https://github.com/cloudposse/terraform-aws-eks-node-group/pull/200

#200 Suppress EKS bootstrap when after bootstrap script is supplied

what

• Suppress EKS-supplied bootstrap when after bootstrap script is supplied

why

• Fixes #195

references

• Supersedes and closes #198

1
Chris M avatar
Chris M

Guessing you mean node group would be rebuilt rather than cluster. I’ll give that change a go in a bit and see which bits of my original PR still stand

1
1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Chris M My intention is to replace your PR with #200. This is a highly used module and I want to be as conservative as is reasonable when it comes to making changes.

RB avatar

I noticed that this terraform-docs config separates context inputs from the rest of the inputs which is nifty. I didn’t know the tool had that capability. Is this feature planned to roll out to the cp modules soon ?

https://github.com/cloudposse/docs/blob/23077a7881cbeee284cdbf0e6e4d70dd6c635741/scripts/docs-collator/templates/modules/terraform-docs.yml

1
RB avatar

I was hunting around for the cp module terraform docs config and tripped over the above

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse) @Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Dan Miller (Cloud Posse) knows better

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)


Is this feature planned to roll out to the cp modules soon ?
Yes but it’s deep in the backlog; I would not say soon. Recently we’ve just started the process of migrating components to a dedicated github organization, each with their own repo. We’ll likely do something similar with terraform-docs there and then eventually roll it out to modules. But it will likely be several months out

1

2024-09-26

Release notes from terraform avatar
Release notes from terraform
11:13:37 AM

v1.10.0-alpha20240926 1.10.0-alpha20240926 (September 26, 2024) NEW FEATURES:

Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays ephemeral outputs. The value of an ephemeral output is always null unless a plan or apply is being run. Note that terraform output (without the -json) flag does not yet display ephemeral…

1

2024-09-27

loren avatar

If you haven’t seen it yet, the AWS provider is implementing a new proposal for a pattern to manage “exclusive” relationships between resources. Essentially, some resources are “containers” for other resources, like security groups and their rules, IAM roles and their policies, or route tables and their routes. Some of these resources have “inline” blocks to implement “exclusive” management. That will in the future be implemented with a new, separate “exclusive” resource. This allows both the “container” resource, e.g. aws_iam_role, and its “attachment” resources, e.g. aws_iam_role_policy, to each manage a single primary API action. The separate “exclusive” resource will manage the actions needed to remove unspecified attachments. https://github.com/hashicorp/terraform-provider-aws/issues/39376

I really love the new pattern, as I think it will make it easier to implement more “exclusive” attachments for more resource types. One downside, there isn’t a great way to migrate using moved blocks from the old inline block approach to the separate resource… So refactoring/updating existing modules is going to be a little painful for the module users… If you also would like that to be easier, please go upvote this feature request on terraform core to implement moved semantics for inline blocks… https://github.com/hashicorp/terraform/issues/35785

2
1
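A rough sketch of the pattern being described, using the IAM role / policy case; the “exclusive” resource and argument names below are illustrative of the proposal rather than confirmed provider API, so check the AWS provider docs for the exact names:

# each policy is its own resource, mapping to a single primary API action
resource "aws_iam_role_policy" "example" {
  role   = aws_iam_role.example.name
  name   = "example"
  policy = data.aws_iam_policy_document.example.json
}

# a separate "exclusive" resource declares the complete set of inline policy names
# and removes any attachments created out-of-band or no longer listed
resource "aws_iam_role_policies_exclusive" "example" {
  role_name    = aws_iam_role.example.name
  policy_names = [aws_iam_role_policy.example.name]
}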
loren avatar

similar, yes! and would have been so nice then to have this kind of refactoring capability.

jose.amengual avatar
jose.amengual

yes, it was painful to do the s3 changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

1
loren avatar

Same feature request for opentofu…. https://github.com/opentofu/opentofu/issues/2034

#2034 Update moved semantics to allow moving a resource managed by an attribute or inline block to a separate resource

OpenTofu Version

v1.8.2

The problem in your OpenTofu project

The AWS provider recently approved a proposal for a pattern that will be used going forward to manage “exclusive relationships” between resources. The first implementation of this pattern was for aws_iam_role and their aws_iam_role_policy attachments.

I’m really excited about this new pattern, and would love to adopt it quickly. However, it introduces a dilemma for module authors. How do we refactor the module and minimize the migration pain for users? With the current features available in Terraform, module users would have to write import blocks for every attachment resource, which makes the refactoring backwards-incompatible and a major version bump.

Attempted Solutions

To demonstrate the problem, I attempted to migrate a module we author to use the new exclusive pattern with the aws_iam_role and aws_iam_role_policy resource. You can see that here, plus3it/terraform-aws-tardigrade-iam-principals#205. There’s also a discussion of the migration options on the aws provider repo, hashicorp/terraform-provider-aws#39376 (comment).

Proposal

I would like to propose an update for moved block semantics, allowing users to specify a from expression that uniquely identifies a resource managed by an inline block. Using aws_iam_role as an example, it might look like:

moved {
  from = aws_iam_role.example,inline_policy
  to   = aws_iam_role_policy.example_policy_name
}

Those semantics largely follow the idea of import requirements for the target resource type. For an inline_policy block on an aws_iam_role resource, we only need to know the inline policy name to map the state to the new resource.

It occurs to me that moved does not currently support interpolation, or expressions, or for_each, the way import does. Since these inline blocks do not have state addresses, I think this proposal would also require updates to extend that support to moved blocks. A complete example, to make my pr testing the migration to the exclusive resource simply a feature release, might look like this:

moved {
  for_each = toset(var.inline_policies[*].name)

  from = aws_iam_role.this,inline_policy,${each.value}
  to   = aws_iam_role_policy.this[each.value]
}

And then similarly for attachments of managed policies (once that “exclusive” resource is available), I’d write something like

moved {
  for_each = toset(var.managed_policy_arns)

  from = aws_iam_role.this,managed_policy_arns,${each.value}
  to   = aws_iam_role_policy_attachment.this[each.value]
}

References

Andrew Chemis avatar
Andrew Chemis

Hey all -

having some IAM race condition issues I don’t understand…

resource "aws_codepipeline" "default" {
...
...
  depends_on = [
    aws_iam_role_policy_attachment.default,
    aws_iam_role_policy_attachment.s3,
    aws_iam_role_policy_attachment.codebuild,
    aws_iam_role_policy_attachment.codebuild_s3,
    aws_iam_role_policy_attachment.codestar,
    aws_codepipeline.worker_image_pipeline,
    aws_codepipeline.manager_image_pipeline,
    module.codebuild_deploy,
    aws_cloudwatch_event_rule.ecr_image_pushed,
    aws_iam_role.code_pipeline
  ]
...
}

resource "aws_iam_role_policy_attachment" "codebuild" {
  count      = module.this.enabled ? 1 : 0
  role       = join("", aws_iam_role.code_pipeline[*].id)
  policy_arn = join("", aws_iam_policy.codebuild[*].arn)
}

resource "aws_iam_policy" "codebuild" {
  count  = module.this.enabled ? 1 : 0
  policy = data.aws_iam_policy_document.codebuild.json
}

data "aws_iam_policy_document" "codebuild" {
  statement {
    sid = "AllowCodeBuild"

    actions = [
      "codebuild:BatchGetBuildBatches",
      "codebuild:BatchGetBuilds",
      "codebuild:BatchGetProjects",
      "codebuild:Describe*",
      "codebuild:List*",
      "codebuild:RetryBuild",
      "codebuild:RetryBuildBatch",
      "codebuild:StartBuild",
      "codebuild:StartBuildBatch",
      "codebuild:StopBuild",
      "codebuild:StopBuildBatch",
    ]

    resources = [module.codebuild.project_id, module.codebuild_deploy.project_id]
    effect    = "Allow"
  }
}

I need the permissions in the aws_iam_role_policy_attachment.codebuild

The pipeline executes and I get Error calling codebuild:StartBuild ... because no identity-based policy allows the codebuild:StartBuild action

But if I look at the role, the policy and actions exist, and if I retry the pipeline it succeeds, making me think the codepipeline gets created prior to the IAM policy attachment.

Why? What am I missing?

My code is a fork of https://github.com/cloudposse/terraform-aws-ecs-codepipeline/blob/main/main.tf#L271-L277

  depends_on = [
    aws_iam_role_policy_attachment.default,
    aws_iam_role_policy_attachment.s3,
    aws_iam_role_policy_attachment.codebuild,
    aws_iam_role_policy_attachment.codebuild_s3,
    aws_iam_role_policy_attachment.codebuild_extras
  ]
Andrew Chemis avatar
Andrew Chemis

Found in logs…

module.codebuild_deploy.aws_codebuild_project.default[0]: Creation complete after 9s    (2024-09-27T16:27:10.422760Z)

aws_iam_role_policy_attachment.codebuild[0]: Creating...    (2024-09-27T16:27:10.666361Z)

codepipeline.aws_codepipeline.ecs_pipeline: Creating...    (2024-09-27T16:27:20.952451Z)

So an eventual consistency problem… Guess I’m implementing a sleep command.

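One way to express that sleep in Terraform itself (rather than a shell command) is the time_sleep resource from the hashicorp/time provider; a minimal sketch adapted to this example, with the 30s duration being a guess at how long IAM propagation takes:

resource "time_sleep" "wait_for_iam" {
  create_duration = "30s"

  depends_on = [aws_iam_role_policy_attachment.codebuild]
}

resource "aws_codepipeline" "default" {
  # ...
  depends_on = [time_sleep.wait_for_iam]
}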
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andrew Chemis do you still need support here?

Andrew Chemis avatar
Andrew Chemis

No, I’ve resorted to a hacky work-around that mostly solves the problem at hand. I could open a bug on the underlying cloudposse module as it likely is impacting other people, but my solution is not ideal

1

2024-09-30

Michael avatar
Michael

Shameless plug for a new tool I’ve been working on! If, like me, you enjoy integrating local LLMs into your development workflow, you might enjoy this (feel free to tear it up too).

Picture this: you spin up a Terraform resource, pull the basic config from the registry, and immediately start wondering what other parameters you should enable for better security and efficiency. Sure, you could use tools like tflint or tfsec, but kuzco saves you the hassle of combing through the Terraform registry and trying to make sense of vague options. The tool leverages local LLMs to recommend which parameters should be enabled and configured. It reviews your Terraform resources, compares them against the provider schema to detect unused parameters, and uses AI to suggest improvements for a more secure, reliable, and optimized setup. This tool started as a part of my local workflow, but I wanted to share if anyone is interested in giving it a try!

https://github.com/RoseSecurity/Kuzco

RoseSecurity/Kuzco

Kuzco reviews your Terraform resources, compares them to the provider schema to detect unused parameters, and uses AI to suggest improvements

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you customize the prompt?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

How are the LLMs installed?

Michael avatar
Michael

Ooh, I like the idea of adding dynamic prompt support. The LLMs are a prerequisite, so if you have Ollama installed locally (brew install ollama), you can simply provide the model name and it’s good to go. I wrote a little getting started guide on how to customize the LLM prompt to get stronger recommendations
