#terraform (2024-10)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-10-01

Julio Chana avatar
Julio Chana

Hi!

First, sorry if this is not the best place to ask; I’ll move it to a different channel if that’s better.

I’m running into this issue when deploying a helm chart with the module:

terraform {
  source = "git::https://github.com/cloudposse/terraform-aws-helm-release.git//?ref=0.10.1"
}

And I’m getting a constant drift for it in the metadata:

  # helm_release.this[0] will be updated in-place
  ~ resource "helm_release" "this" {
        id                         = "myapp"
      ~ metadata                   = [
          - {
              - app_version = "v2.8.6"
              - chart       = "myapp"
              - name        = "myapp"
              - namespace   = "myapp"
              - revision    = 16
              - values      = jsonencode(
                    {
...

Do you know what I can do so the metadata is properly understood and drift is only reported when there are real changes?

Thank you so much!

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I think that’s more about settings in the helm terraform provider. Be sure you don’t have any experiments enabled.
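For reference, a minimal sketch of where that setting lives; the config_path value here is just an illustration:

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }

  # A common source of persistent metadata diffs: the provider's
  # "manifest" experiment stores the rendered manifest in state.
  # Make sure nothing like this is enabled:
  # experiments {
  #   manifest = true
  # }
}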

Igor Rodionov avatar
Igor Rodionov

@Julio Chana this comes from the helm provider. In version 2.13.2, metadata was an output block: https://registry.terraform.io/providers/hashicorp/helm/2.13.2/docs/resources/release#metadata In version 2.14.0 they changed it to a list of objects: https://registry.terraform.io/providers/hashicorp/helm/2.14.0/docs/resources/release#metadata Try pinning the helm provider to 2.13.2 and check whether the drift is still there
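A minimal sketch of that pin, using the standard required_providers syntax:

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}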

Igor Rodionov avatar
Igor Rodionov
#1315 Suppress metadata changes at terraform plan?

Older helm provider versions didn’t show these metadata changes when terraform detected a change / ran a plan, and the new helm provider versions are posting a wall of text of metadata changes which have no real value (to me) and just clog up my tf plan output.

example doing a tf plan on a helm resource where we only updated the image tag var:

  # module.api-eks.helm_release.app will be updated in-place
  ~ resource "helm_release" "app" {
        id                         = "api"
      ~ metadata                   = [
          - {
              - app_version = "latest"
              - chart       = "api"
              - name        = "api"
              - namespace   = "production"
              - revision    = 1420
              - values      = jsonencode(
                    {
                      - appConfig          = {
                          - apiBackendReplicationTaskId       = "none"
                          - applicationMode                   = "none"
                          - baseApiUrl                        = "none"
                          - something = "else"
                          - foo = "bar"
                          - 
                          < it goes on for many >
                          < many >
                          < lines >
                          < and it's of no value >
                          < just noise on tf plan >
                          

let’s say we only have a field deploymentTimestamp updated. We’d rather see the changed field only on terraform plan, and suppress the whole metadata update, e.g. terraform plan should only show

  # module.api-eks.helm_release.app will be updated in-place
  ~ resource "helm_release" "app" {
        id                         = "api"
        [...]
        # (25 unchanged attributes hidden)

      + set {
          + name  = "deploymentTimestamp"
          + value = "19012024-225905"
        }

        # (62 unchanged blocks hidden)
        

This way the terraform plan is clear and concise, more human (easier to read/follow) without the metadata removal. Does it make sense?

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: v1.6.5
Helm Provider version: v2.12.0 (same on v2.12.1)
Kubernetes provider version: v2.24.0

Terraform configuration

resource "helm_release" "app" {
  namespace       = var.namespace != "" ? var.namespace : terraform.workspace
  chart           = var.chart_name
  version         = var.chart_version
  name            = var.app_name
  timeout         = var.deployment_timeout
  cleanup_on_fail = var.helm_cleanup_on_fail
  atomic          = var.helm_atomic_creation
  max_history     = var.helm_max_history
  wait            = var.helm_wait_for_completion

  dynamic "set" {
    for_each = local.k8s_app

    content {
      name  = set.key
      value = set.value
    }
  }

  values = var.some_ingress_values
}

Question

Is there any way to suppress the metadata changes at terraform plan?

Igor Rodionov avatar
Igor Rodionov
#1344 `metadata` always recomputes, causing redeployment for every single plan-apply

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: v1.7.3
Provider version: v2.12.1
Kubernetes version: v1.29.2+k3s1

Affected Resource(s)

• helm_release

Terraform Configuration Files

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.12.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "this" {
  name             = "redis"
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "redis"
  namespace        = "cache"
  create_namespace = false
  version          = "19.x"
}

Debug Output

https://gist.github.com/meysam81/8b4c8805d7dcfa7dd116443c4cc42841

NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.

Panic Output

Steps to Reproduce

Either of the following:

terraform apply
terraform plan -out tfplan && terraform apply tfplan

Expected Behavior

Even in the case where the latest chart version is the same as the running helm-release, it is still updating the metadata and trying to re-deploy the release. A frustrating experience really. The Ansible helm module does a much better job in this regard when it comes to idempotency.

The funny thing is, if you pin the version to an exact version, e.g. 19.0.1, this won’t happen and a No changes will be printed on the screen. But any wildcard versioning causes the release to recompute the metadata. :shrug:

Actual Behavior

It finds metadata changed for every single plan-apply.

Important Factoids

Nothing special. I have tried this in different clusters with different versions. All have the same outcome.

References

GH-1097, #1150

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Igor Rodionov avatar
Igor Rodionov
#1150 GH-1097 causes `metadata` to always be recomputed with some helm charts.

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.4.6
Provider version: v2.10.0
Kubernetes version: 1.23.17 (EKS)

Affected Resource(s)

• Helm Repository: https://dysnix.github.io/charts
• Helm Chart Version: 0.3.1
• Helm Chart: raw

Terraform Configuration Files

resource "helm_release" "filebeat" {
  chart      = "raw"
  name       = var.filebeat.name
  namespace  = var.filebeat.namespace
  repository = "https://dysnix.github.io/charts"
  version    = "0.3.1"

  values = [
    <<-EOF
    ${yamlencode({ resources = [local.filebeat] })}
    EOF
  ]
}

Debug Output

NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.

Panic Output

N/A

Steps to Reproduce

  1. Use the above-defined Helm chart to deploy something. You can really deploy any type of Kubernetes resource.
  2. Rerun terraform plan
  3. Observe that the metadata is going to be regenerated when it shouldn’t.

Downgrading from 2.10.0 to 2.9.0 causes the issue to go away.

Expected Behavior

I would expect that when rerunning Terraform with no changes to the Helm values, the metadata should not be recomputed.

Actual Behavior

Observe that the metadata gets regenerated:

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
Terraform will perform the following actions:
  # module.kubernetes_filebeat_autodiscovery_cluster_1.helm_release.filebeat will be updated in-place
  ~ resource "helm_release" "filebeat" {
        id                         = "autodiscover"
      ~ metadata                   = [
          - {
              - app_version = ""
              - chart       = "raw"
              - name        = "autodiscover"
              - namespace   = "elastic-monitors"
              - revision    = 3
              - values      = jsonencode(
                    {
                      - resources = [
                          - {
                              - apiVersion = "beat.k8s.elastic.co/v1beta1"
                              - kind       = "Beat"
                              - metadata   = {
                                  - labels      = null
                                  - name        = "autodiscover"
                                  - namespace   = "default"
                                }
                                ... <spec_removed>
                            },
                        ]
                    }
                )
              - version     = "v0.3.1"
            },
        ] -> (known after apply)
        name                       = "autodiscover"
        # (27 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.

Important Factoids

References

GH-1097

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Igor Rodionov avatar
Igor Rodionov

There are several GitHub issues about this problem ^

Julio Chana avatar
Julio Chana

Thank you so much! I’m testing this. I was using version “2.12.1”

1
Julio Chana avatar
Julio Chana

I’ve tried with both versions and it’s still happening. I’m still investigating how to fix this.

Is it also happening to you?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Julio Chana can you show me logs of your terraform plan?

Stan V avatar

Guys, would you be able to help me with issues I’m facing? I’m trying to deploy an AWS EKS cluster with the LB, but I’m getting this error.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Stan V you didn’t describe the error, so I don’t know where to start, but I would guess this question is more appropriate for the #kubernetes channel.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)
locals {
  cluster_name = var.cluster_name
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.15.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.29"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  cluster_addons = {
    aws-ebs-csi-driver = {
      service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = var.ami_type

  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.medium"]

      min_size     = var.min_size
      max_size     = var.max_size
      desired_size = var.desired_size
    }

    two = {
      name = "node-group-2"

      instance_types = ["t3.medium"]

      min_size     = var.min_size
      max_size     = var.max_size
      desired_size = var.desired_size
    }
  }
}

module "lb_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name                              = "shop_eks_lb"
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }

  depends_on = [
    module.eks
  ]
}

resource "kubernetes_service_account" "service-account" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name"      = "aws-load-balancer-controller"
      "app.kubernetes.io/component" = "controller"
    }
    annotations = {
      "eks.amazonaws.com/role-arn"               = module.lb_role.iam_role_arn
      "eks.amazonaws.com/sts-regional-endpoints" = "true"
    }
  }

  depends_on = [
    module.lb_role
  ]
}

resource "helm_release" "alb-controller" {
  name       = "aws-load-balancer-controller"
  repository = "<https://aws.github.io/eks-charts>"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "region"
    value = "eu-west-3"
  }

  set {
    name  = "vpcId"
    value = module.vpc.vpc_id
  }

  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  set {
    name  = "clusterName"
    value = local.cluster_name
  }

  depends_on = [
    kubernetes_service_account.service-account
  ]
}
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Ah, well, the issue here is that you cannot deploy resources to an EKS cluster in the same Terraform plan as where you create the cluster. I mean, you can hack something that usually works, but it is not officially supported.

Best practice is to have one root module that creates the EKS cluster, and then additional modules that install things into the cluster.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Stan V FYI


2024-10-02

Release notes from terraform avatar
Release notes from terraform
06:03:28 PM

v1.9.7 1.9.7 (October 2, 2024) BUG FIXES:

config generation: escape map keys with whitespaces (#35754)

config generation: escape map keys with whitespace by liamcervante · Pull Request #35754 · hashicorp/terraform

This PR updates the config generation package so map keys with whitespace are escaped with quotes. This is already handled automatically for the normal attribute generation, and nested blocks are a…

2024-10-03

Mauricio Wyler avatar
Mauricio Wyler

Hi. I’m using ATMOS for a multi-account, multi-environment project on AWS… And I love it! Now I need to deploy the same group of terraform components (about 20 components) multiple times (mostly in the same environment), changing only the name (I’m using the tenant context to achieve this). The idea is to have, for example, demo-a, demo-b, demo-c, etc…

So, I used GO templates and it works…

ecs/service/application{{ if .tenant }}/{{ .tenant }}{{ end }}:
  ...
ecs/service/api{{ if .tenant }}/{{ .tenant }}{{ end }}:
  ...
...

And then

atmos terraform apply ecs/service/application/demo-a -s uw2-dev
atmos terraform apply ecs/service/api/demo-a -s uw2-dev
...

But I have the feeling there must be a better way to do this… (and probably easier) Any idea? Thanks.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Mauricio Wyler

I think this is a good way to do it; we also use Go templates in the component names when we need to dynamically generate many Atmos components. If it works for you, then it’s fine

Mauricio Wyler avatar
Mauricio Wyler

Thanks @Andriy Knysh (Cloud Posse) for your reply… Good to know I’m on the correct path!

1
Mauricio Wyler avatar
Mauricio Wyler

@Andriy Knysh (Cloud Posse) do workflows support Go templates? Env vars? Or something to help me make them dynamic?

In:

name: Stack workflows
description: |
  Deploy application stack

workflows:

  apply-resources:
    description: |
      Run 'terraform apply' on core resources for a given stack
    steps:
      - command: terraform apply ecs/service/application/demo-a -auto-approve
      - command: terraform apply ecs/service/api/demo-a -auto-approve

I would like to pass demo-a as a parameter or env variable… to prevent me from creating repeated workflows for demo-b, demo-c, etc…

Thanks again!

Leo Przybylski avatar
Leo Przybylski

Is anyone familiar with setting up flink workspaces on confluent?

I noticed there are some flink resources available through the confluent provider. I am not familiar enough to know if this is the best pattern to follow. For example, would it be better to run flink on kubernetes or on self hosted AWS resources? If anyone has some experience with this and can give some insight, I would appreciate it.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse) @Andriy Knysh (Cloud Posse) @Matt Calhoun

1
RB avatar

Did anyone see this ai atlantis song made by a community member? We need more ai songs about our tools lol

https://youtu.be/fThdaeqLDPs

3

2024-10-04

Rishav avatar

Took a while to find the right combination of actions, but happy to share my guide on securing a cloud-provisioning pipeline with GitHub automation, which spans:

• “keyless” AWS authentication

• Terraform/Tofu IaC workflow

• deployment protections.

(this is my first blog/article in years and super-keen for any feedback, from content to formatting and anything in between – thank you!)

Secure cloud provisioning pipeline with GitHub automation

Master best practices to secure cloud provisioning, automate pipelines, and deploy infrastructure-as-code in DevOps lifecycle.

setheryops avatar
setheryops

Anyone know if the Pluralith project is still alive? They haven’t had a new release since March of 2023, so I’m guessing not. It also looks like they are not responding to any issues either. If it is dead, does anyone know of a good alternative?

Pluralith - Visualize Terraform Infrastructure

Pluralith lets you visualise and document your Terraform infrastructure in a completely automated way.

Jeremy Albinet avatar
Jeremy Albinet

I can only recommend our solution: https://www.brainboard.co

Brainboard: Cloud Infrastructure Designer

Brainboard is an AI driven platform to visually design, generate terraform code and manage cloud infrastructure, collaboratively.

1

2024-10-07

tretinha avatar
tretinha

Hey, I’m trying Atmos for the first time and setting it up with OpenTofu. I have tofu available in my current path and I have a pretty straightforward (I guess) atmos.yaml file:

base_path: "./"

components:
  terraform:
    command: "tofu"
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  # excluded_paths:
  #   - "**/_defaults.yaml"
  name_pattern: "{stage}/{region}"

logs:
  file: "/dev/stderr"
  level: Debug

However, when I try something like atmos terraform init -s dev/us-east-1 or atmos terraform init -s dev or atmos terraform init, I get:

exec: "terraform": executable file not found in $PATH

Any ideas? I’m not entirely sure about what I’m missing to make this work. Thank you!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please make sure you don’t have command: in the YAML manifests, similar to this

components:
  terraform:
    vpc:
      command: "terraform"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure OpenTofu | atmos

Atmos natively supports OpenTofu, similar to the way it supports Terraform. It’s compatible with every version of opentofu and designed to work with multiple different versions of it concurrently, and can even work alongside with HashiCorp Terraform.

tretinha avatar
tretinha

I don’t think I have any

tretinha avatar
tretinha

could this be related with faulty definitions in other places?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please run atmos describe component <component> -s <stack> and check what value is in the command field in the output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh wait

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a wrong command

atmos terraform init -s dev/us-east-1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it needs a component and a stack

tretinha avatar
tretinha

oh!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform init <component> -s dev/us-east-1
tretinha avatar
tretinha

ah! that was it. thanks a lot @Andriy Knysh (Cloud Posse)

2
tretinha avatar
tretinha

silly mistake

2024-10-08

tretinha avatar
tretinha

Hey, I’m trying to execute a plan and I’m getting the following output:

% atmos terraform plan keycloak_sg -s deploy/dev/us-east-1

Variables for the component 'keycloak_sg' in the stack 'deploy/dev/us-east-1':
aws_account_profile: [redacted]
cloud_provider: aws
environment: dev
region: us-east-1
team: [redacted]
tfstate_bucket: [redacted]
vpc_cidr_blocks:
    - 172.80.0.0/16
    - 172.81.0.0/16
vpc_id: [redacted]

Writing the variables to file:
components/terraform/sg/-keycloak_sg.terraform.tfvars.json

Using ENV vars:
TF_IN_AUTOMATION=true

Executing command:
/opt/homebrew/bin/tofu init -reconfigure

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.70.0

OpenTofu has been successfully initialized!

Command info:
Terraform binary: tofu
Terraform command: plan
Arguments and flags: []
Component: keycloak_sg
Terraform component: sg
Stack: deploy/dev/us-east-1
Working dir: components/terraform/sg

Executing command:
/opt/homebrew/bin/tofu workspace select -keycloak_sg
Usage: tofu [global options] workspace select NAME

  Select a different OpenTofu workspace.

Options:

    -or-create=false    Create the OpenTofu workspace if it doesn't exist.

    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg


Executing command:
/opt/homebrew/bin/tofu workspace new -keycloak_sg
Usage: tofu [global options] workspace new [OPTIONS] NAME

  Create a new OpenTofu workspace.

Options:

    -lock=false         Don't hold a state lock during the operation. This is
                        dangerous if others might concurrently run commands
                        against the same workspace.

    -lock-timeout=0s    Duration to retry a state lock.

    -state=path         Copy an existing state file into the new workspace.


    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg

exit status 1

goroutine 1 [running]:
runtime/debug.Stack()
        runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
        runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
github.com/cloudposse/atmos/cmd.init.func17(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/cloudposse/atmos/cmd/terraform.go:33 +0x150
github.com/spf13/cobra.(*Command).execute(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x10750ec80)
        github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
        github.com/cloudposse/atmos/cmd/root.go:88 +0x214
main.main()
        github.com/cloudposse/atmos/main.go:9 +0x1c

I’m not really sure what I can do since the error message suggests an underlying tofu/terraform error and not an atmos one. I bet my stack/component has something wrong but I’m not entirely sure why. The atmos.yaml is the same of the previous message I sent here yesterday. I’d appreciate any pointers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(Please use the atmos channel)

tretinha avatar
tretinha

sure!

2024-10-09

Release notes from terraform avatar
Release notes from terraform
04:43:29 PM

v1.10.0-alpha20241009 1.10.0-alpha20241009 (October 9, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ephemeral resources always produce ephemeral values. Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays…
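For illustration, a minimal sketch of the ephemeral syntax described above (the variable name is hypothetical):

variable "db_password" {
  type      = string
  ephemeral = true # never persisted to the plan or state files
}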

2024-10-11

toka avatar

I was trying to use https://registry.terraform.io/modules/cloudposse/stack-config/yaml/1.6.0/submodules/remote-state and found out that actually it’s a submodule of yaml-stack-config module.

Since every submodule of yaml-stack-config is using context (null-label) that got me thinking: if terraform-provider-context is meant to supersede null-label , should I even start using yaml-stack-config when starting my codebase pretty much from scratch?

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

The tl;dr is that yes, the terraform-provider-context is designed to replace null-label but the Cloud Posse modules and components have not yet been updated to support its use, so it’s probably premature to start using it if you are also using Cloud Posse open source terraform modules. We are currently in the planning phase of how those upgrades will occur, but it will be some time before they are rolled out to our open source modules.

toka avatar

Got it, thanks!

1
RB avatar
paololazzari/terraform-repl

A terraform console wrapper for a better REPL experience

1
RB avatar

This is easier to use than creating a new directory of files with test code
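For comparison, the stock terraform console already lets you evaluate expressions without scaffolding any files; the wrapper builds on that REPL experience:

terraform console
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"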


Joe Perez avatar
Joe Perez

And probably eliminates the need for troubleshooting with outputs, I’ll have to try it out

muhaha avatar

Hello :wave: I am struggling with a loop (over subnets) and this structure

variable "vpcs" {
  description = "List of VPCs"
  type        = list(map(any))

  default = [
    {
      name    = "vpc-1"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-1"
          cidr = "10.0.1.0/24"
        },
        {
          name = "subnet-2"
          cidr = "10.0.2.0/24"
        }
      ]
    },
    {
      name    = "vpc-2"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-3"
          cidr = "10.0.3.0/24"
        },
        {
          name = "subnet-4"
          cidr = "10.0.4.0/24"
        }
      ]
    }
  ]
}

any ideas? something like

for_each = { for v in var.vpcs, s in v.subnets : "${v.name}-${s.name}" => s }
1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

this structure is a little confusing, list(map(any)). How about changing the variable to a list(object({...}))?

variable "vpcs" {
  description = "List of VPCs"
  type = list(object({
    name    = string
    cidr    = string
    subnets = list(object({
      name = string
      cidr = string
    }))
  }))

  default = [
    {
      name    = "vpc-1"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-1"
          cidr = "10.0.1.0/24"
        },
        {
          name = "subnet-2"
          cidr = "10.0.2.0/24"
        }
      ]
    },
    {
      name    = "vpc-2"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-3"
          cidr = "10.0.3.0/24"
        },
        {
          name = "subnet-4"
          cidr = "10.0.4.0/24"
        }
      ]
    }
  ]
}

then you can loop over it like this

for_each = merge([
  for vpc in var.vpcs : {
    for subnet in vpc.subnets : "${vpc.name}-${subnet.name}" => subnet
  }
]...)
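(The merge([...]...) spread is needed because HCL doesn’t allow one for expression to be nested directly inside another within a single object: the inner for builds one map per VPC, and merge with the ... expansion combines them into a single map keyed like "vpc-1-subnet-1".)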
haque.zubair avatar
haque.zubair

^ agree

muhaha avatar

nice, thanks

1

2024-10-14

2024-10-15

mark avatar

Would love to get some feedback on my latest project – Terraform Module Releaser – a GitHub Action that automates versioning, tags, releases, and docs for Terraform modules in monorepos! Great for module collections in a single repo with tags representing logical folders/tf roots. Spent a lot of time to make this literally a drop-in for anyone using GHA with defaults sanely configured.

Check it out here: https://github.com/techpivot/terraform-module-releaser Feel free to star if you find it useful! Happy to answer any Qs here as well!

techpivot/terraform-module-releaser

GitHub Action to automate versioning, releases, and documentation for Terraform modules in monorepos.

1
1

2024-10-16

Prashant avatar
Prashant

:wave: Hello, team! This is about rotating the .pem key for AWS EKS nodes; the .pub and .private keys are generated using the source listed below.

https://github.com/cloudposse/terraform-aws-key-pair/blob/main/main.tf Whenever we update the key pair name in keypair.tf, the key pair name changes, but the .private and .pub keys remain the same as earlier.

We need to change the .pub and .private keys for the EKS nodes.

locals {
  enabled = module.this.enabled
  public_key_filename = format(
    "%s/%s",
    var.ssh_public_key_path,
    coalesce(var.ssh_public_key_file, join("", [module.this.id, var.public_key_extension]))
  )

  private_key_filename = format(
    "%s/%s%s",
    var.ssh_public_key_path,
    module.this.id,
    var.private_key_extension
  )
}

resource "aws_key_pair" "imported" {
  count      = local.enabled && var.generate_ssh_key == false ? 1 : 0
  key_name   = module.this.id
  public_key = file(local.public_key_filename)
  tags       = module.this.tags
}

resource "tls_private_key" "default" {
  count     = local.enabled && var.generate_ssh_key == true ? 1 : 0
  algorithm = var.ssh_key_algorithm
}

resource "aws_key_pair" "generated" {
  count      = local.enabled && var.generate_ssh_key == true ? 1 : 0
  depends_on = [tls_private_key.default]
  key_name   = module.this.id
  public_key = tls_private_key.default[0].public_key_openssh
  tags       = module.this.tags
}

resource "local_file" "public_key_openssh" {
  count      = local.enabled && var.generate_ssh_key == true ? 1 : 0
  depends_on = [tls_private_key.default]
  content    = tls_private_key.default[0].public_key_openssh
  filename   = local.public_key_filename
}

resource "local_sensitive_file" "private_key_pem" {
  count           = local.enabled && var.generate_ssh_key == true ? 1 : 0
  depends_on      = [tls_private_key.default]
  content         = tls_private_key.default[0].private_key_pem
  filename        = local.private_key_filename
  file_permission = "0600"
}

resource "aws_ssm_parameter" "private_key" {
  count = local.enabled && var.generate_ssh_key && var.ssm_parameter_enabled == true ? 1 : 0
  name  = format("%s%s", var.ssm_parameter_path_prefix, module.this.id)
  type  = "SecureString"
  value = tls_private_key.default[0].private_key_pem
  tags  = module.this.tags
}

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Yonatan Koren @Andriy Knysh (Cloud Posse)


Prashant avatar
Prashant

Can you please provide an update on the request?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Prashant Thank you for reaching out! As a small team, we prioritize our paying clients, which means community support might take a bit longer. I’ll be sure to share your request with our engineers, but please bear with us as we work through our current projects. While we can’t provide a specific ETA, we’ll do our best to get back to you as soon as we can. Thank you for your understanding!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Prashant when you create an EKS cluster, you can specify a KMS key to encrypt the volumes, for example

https://github.com/cloudposse/terraform-aws-eks-cluster/blob/main/main.tf#L85

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you change the KMS key, it involves a lot of steps (some of them manual) to correctly update the cluster to use the new key

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
New Key Only Affects New Volumes:
• Updating the encryption key will not retroactively change the encryption on existing EBS volumes.
• Old volumes will still use the original KMS key unless you migrate them.

Backup and Migration:
• If you need to re-encrypt existing volumes with the new key, you'll have to create new encrypted volumes using the new key and migrate data.

Impact on Pods/Workloads:
• If you're using persistent volumes (PVs), updating encryption keys involves moving data, which may lead to downtime unless handled properly (e.g., by draining nodes).

IAM Role Changes:
• Ensure that the EKS node IAM roles and other services have appropriate permissions for both the old and new KMS keys during the migration.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

these steps need to be done

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are not supported by the https://github.com/cloudposse/terraform-aws-eks-cluster TF module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
https://github.com/cloudposse/terraform-aws-key-pair/blob/main/main.tf
Whenever we update the key pair name in keypair.tf, the key pair name changes, but the .private and .pub keys remain the same as earlier.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you provide more info on the above? (e.g. there is no keypair.tf file in the module)

Prashant avatar
Prashant

Thank you @Andriy Knysh (Cloud Posse) @Gabriela Campana (Cloud Posse) for reaching out to me.

@Andriy Knysh (Cloud Posse) To add context to this: the SSH keys (.pub and .private) are saved in Secrets Manager under the key pair name, which is set in keypair.tf, and the key pair is also stored in the Key Pairs service in the AWS console. When EC2 instances are created, the key pair is assigned to them automatically.

When we edit the key pair name ("aws-${var.env}-${var.app}-${random_id.uniquekeypair.hex}-keypair") in keypair.tf, the name changes, but the .private and .pub keys remain the same as before.

We need something different: the .pub and .private keys should also be regenerated, and the new keys stored in Secrets Manager.

resource "random_id" "uniquekeypair" {
  byte_length = 4
}

module "ssh_key_pair" {
  source                = "git::<https://github.com/cloudposse/terraform-aws-key-pair.git?ref=main>"
  name                  = "aws-${var.env}-${var.app}-${random_id.uniquekeypair.hex}-keypair"
  ssh_public_key_path   = "./secrets"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
  tags                  = local.core_tags
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The aws-key-pair module is deprecated because of the various ways it leaks the keys. You have run into one of several issues with this module. The easiest path forward is just to

terraform taint "...tls_private_key.default[0]"

where the ... must be replaced by whatever prefix you see when you run terraform state list.
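For example, assuming the module instantiation shown above (module "ssh_key_pair"), the address would typically look like this:

terraform state list | grep tls_private_key
# e.g. module.ssh_key_pair.tls_private_key.default[0]
terraform taint 'module.ssh_key_pair.tls_private_key.default[0]'

Tainting the key forces tls_private_key to be regenerated on the next apply, which in turn rotates the aws_key_pair, the local key files, and the SSM parameter that reference it. (On recent Terraform versions, terraform apply -replace=ADDRESS achieves the same in one step.)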

Prashant avatar
Prashant

@Jeremy G (Cloud Posse) Thanks. I will try this.

1
Release notes from terraform avatar
Release notes from terraform
02:33:30 PM

v1.9.8 1.9.8 (October 16, 2024) BUG FIXES:

init: Highlight missing subdirectories of registry modules in error message (#35848) init: Prevent crash when loading provider_meta blocks with invalid names (#35842)

Backport of init: detect and report invalid submodules into v1.9 by github-actions[bot] · Pull Request #35848 · hashicorp/terraform

Backport This PR is auto-generated from #35839 to be assessed for backporting due to the inclusion of the label 1.9-backport.

Warning automatic cherry-pick of commits failed. If the first commit…

provider meta: don't return providers with invalid names by liamcervante · Pull Request #35842 · hashicorp/terraform

This PR updates the parsing of the provider_meta blocks so that providers with invalid names aren't returned. This prevents a crash that occurs later when the names are parsed, and doesn't …

Rishav avatar

Say you have Terraform running as part of your provisioning workflow, and the plan output is returned as a PR comment.

For a large enough project, you’re likely to run into GitHub’s character limit for comments. In that situation, would you prefer:

• the first thousands of characters
• the last thousands of characters

Before rushing to the second option because the plan summary is stored within the last few lines, consider that the plan summary is pulled out at the top of the PR comment already.

So which “end” is more useful to you? And would the same “end” be useful from the apply command’s output as well?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s plan B

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Plan A is: don’t use GitHub comments. Use GitHub job summaries instead

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Job summaries can be much larger, I believe as large as 1MB of markdown text

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When it’s larger than that, I would just attach the logs as an artifact, and shame them for having a root module that is too large.

Rishav avatar

Boy am I glad you mentioned that, because outputting the TF command output to the workflow job summary is the default behaviour. The PR comment is for ease of reference within the conversation thread.

For context, I’m pulling together a reusable workflow for plan/applying Terraform+Tofu, so I have no say/judgement over how it gets used by the broader audience. I just want the dev-experience to be smooth, even in edge-cases where the plan output may be longer than character limits, and needs to be truncated.

For added support, both the PR comment and job summary includes a direct link to the view the workflow log in its entirety. Additionally, the last 1-line summary of the plan is pulled out to the very top, as that’s probably the most relevant piece of info you’re after.

akhan4u avatar
akhan4u

You could also add a summary of your terraform plan changes with something like tf-summarize (I particularly like this), and save your larger plan run as an artifact and point to it via a comment.

Rishav avatar

Thanks @akhan4u, tf-summarize is undeniably cool, and I’ve seen it taken even further with the likes of prettyplan, though it’s outside the formatting scope of a mere PR comment.

A rather simpler approach to outlining the plan I’d been considering was to extract the lines starting with # and apply some basic regex to generate a diff, like so.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The other option is to just to post an “anchor” comment in the PR, with the TL;DR, a link to the full job summary, and a link to the job.

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what vercel does

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I kind of like that. Not overwhelming in the PR. It’s convenient to quickly navigate.

Rishav avatar

In the same vein, when pushing multiple commits to a PR branch, would you prefer the Terraform plan output to:

update the existing comment (bonus revision/edit history)

delete the existing comment and create a new one (easier to scan chronologically).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it should update the top comment (same as vercel)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So close to

2024-10-17

2024-10-18

leonkatz avatar
leonkatz

I’m running into an issue when adding routes for peering connections. I fetch the existing routes for all subnets in a VPC and then try to use count to go through them and add a route. But it won’t let me use a data element to get the routes unless I comment out the aws_route that I’m trying to create; then, after it gets the data, I can uncomment it and run again. This is the error:

The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the
│ -target argument to first apply only the resources that the count depends on.
leonkatz avatar
leonkatz

I tried assigning the data request to a local var, but still got the error
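For context, this is the classic shape of the problem: count is derived from a data source attribute, which is unknown at plan time whenever the data source itself depends on unapplied changes. A minimal sketch; the resource and variable names are hypothetical:

data "aws_route_tables" "this" {
  vpc_id = var.vpc_id
}

resource "aws_route" "peering" {
  # Unknown until apply if the data source depends on pending changes
  count = length(data.aws_route_tables.this.ids)

  route_table_id            = data.aws_route_tables.this.ids[count.index]
  destination_cidr_block    = var.peer_cidr_block
  vpc_peering_connection_id = var.peering_connection_id
}

As the error suggests, the usual ways out are to apply with -target for whatever the data source depends on first, or to key the routes with for_each off values that are known at plan time.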

2024-10-19

2024-10-23

Release notes from terraform avatar
Release notes from terraform
01:23:30 PM

v1.10.0-alpha20241023 1.10.0-alpha20241023 (October 23, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ephemeral resources always produce ephemeral values. Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays…

Release v1.10.0-alpha20241023 · hashicorp/terraform

1.10.0-alpha20241023 (October 23, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ep…

Bob Berg avatar
Bob Berg

Hello, I previously submitted a pull request for Cloud Posse Terraform Datadog Platform, https://github.com/cloudposse/terraform-datadog-platform/pull/107. May I get some eyes on it, or be directed to where I may find an approval? It’s a simple one-line change that fixes Advanced Scheduling for Synthetics Tests.

#107 fix(synthetics): Adjust timeframes loop to include child objects

what

Change loop to use child objects instead of timeframes object.

why

Currently, the Synthetics Test module is only capable of setting an advanced schedule for one day. It is unable to accept or set multiple days. This change allows for multiple days to be set in an advanced schedule.

references

Closes #100

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Bob Berg Thank you for this PR.

At the moment, the automated tests are failing. The test failures do not appear to be related to this PR, but still, we do not like to accept/merge PRs when the tests are failing. I am looking into it; no further action is required on your part at this time. However, because the failure involves Datadog rejecting what was previously a valid test, it will take quite a while to figure out.

2024-10-24

SlackBot avatar
SlackBot
07:43:00 PM

This message was deleted.

1

2024-10-25

Roberts Jānis Sīklis avatar
Roberts Jānis Sīklis

Hi folks, fairly new to terraform and have a few questions on how to approach a problem.

I wish to create a web portal where it is possible to provision an entire project with different environments (dev, prod, etc.). The idea is to create a single internal module which would include all of my needed public modules, for example EC2, RDS, S3 and others. For the DEV and PROD environments I want default variables to already contain all the logic these environments need, and if needed the module caller can edit those values. The result could be, for example, that the module caller deploys an entire dev environment without providing any variables.

Question: is it a fine approach to create this single module this way? Any suggestions would help! Thank you!

Jan Costandius avatar
Jan Costandius

I did a quick read through, and it sounds like you are describing something like Atmos or Terragrunt.

cloudposse/atmos

Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Roberts Jānis Sīklis you can join the atmos channel

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and review the quick start https://atmos.tools/quick-start/mindset

Thinking Like Atmos | atmos

Atmos can change how you think about the Terraform code you write to build your infrastructure.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you are at Stage #1 of the Terraform Journey.

https://atmos.tools/introduction/why-atmos/

Why Does Atmos Exist? | atmos

Learn about the typical Terraform journey when not using Atmos.

Roberts Jānis Sīklis avatar
Roberts Jānis Sīklis

Thanks! Will take a look!

2024-10-29

Vitalii avatar
Vitalii

hello guys, I am playing with Atmos and GitHub for my project currently, and I am having a problem with posting comments to GitHub pull requests from atmos terraform plan <stack> -s #####: I can’t parse the output into a readable form, the way terraform -no-color allows. My question is: can I run atmos terraform plan -s ##### -no-color, or pass any other argument equivalent to -no-color? In the documentation I didn’t find anything about it. If you know any other way I can post comments to pull requests in a readable way via atmos, or a way to parse the output, please help; I appreciate any help

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best to use the atmos channel for this

1

2024-10-30

Brett Au avatar
Brett Au

I am having a problem with the elasticache-redis module. When I turn on the variable

allow_ingress_from_this_vpc

I get the following error

│ Error: Inconsistent conditional result types
│
│   on .terraform/modules/redis_clusters.redis.aws_security_group/normalize.tf line 81, in locals:
│   81:   all_ingress_rules = local.inline ? [for r in local.all_inline_rules : r if r.type == "ingress"] : []
│     ├────────────────
│     │ local.all_inline_rules is tuple with 3 elements
│     │ local.inline is false
│
│ The true and false result expressions must have consistent types. The 'true' tuple has length 2, but the 'false' tuple has length 0.
╵
exit status 1
Brett Au avatar
Brett Au

I chased it through the following files

starts here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/elasticache-redis/main.tf#L41-L43

Goes to this module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/elasticache-redis/modules/redis_cluster/main.tf#L12

Goes to this module https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/main/main.tf#L36

to here https://github.com/cloudposse/terraform-aws-security-group/blob/main/normalize.tf#L81

but it seems a variable called inline_rules_enabled defaults to false https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/1.4.1/security_group_inputs.tf#L158

And it never gets set to true by the component or the upstream modules. So the conditional on L81 will always be false, and the branch types appear to mismatch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB any clue?

1

2024-10-31

Release notes from terraform avatar
Release notes from terraform
04:53:29 PM

v1.10.0-beta1 1.10.0-beta1 (October 31, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ephemeral resources always produce ephemeral values. Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays ephemeral…

Release v1.10.0-beta1 · hashicorp/terraform

1.10.0-beta1 (October 31, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ephemeral …
