#terraform (2024-10)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-10-01

Julio Chana avatar
Julio Chana

Hi!

First, sorry if this is not the best place to ask; I’ll move it to a different channel if that’s better.

I’m running into this issue when deploying a helm chart with the module:

terraform {
  source = "git::https://github.com/cloudposse/terraform-aws-helm-release.git//?ref=0.10.1"
}

And I’m getting a constant drift for it in the metadata:

  # helm_release.this[0] will be updated in-place
  ~ resource "helm_release" "this" {
        id                         = "myapp"
      ~ metadata                   = [
          - {
              - app_version = "v2.8.6"
              - chart       = "myapp"
              - name        = "myapp"
              - namespace   = "myapp"
              - revision    = 16
              - values      = jsonencode(
                    {
...

Do you know what I can do so the metadata is properly understood and drift only shows up when there are real changes?

Thank you so much!

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I think that’s more about settings in the helm terraform provider. Be sure you don’t have any experiments enabled.
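
For reference, that setting lives in the provider block; a minimal sketch of what to check (the kubernetes block here is just a placeholder for however you already configure it):

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # placeholder; keep whatever you already use
  }

  # If this experiment is enabled, the provider stores the rendered manifest in
  # state and the plan output gets much noisier; make sure it is not turned on.
  # experiments {
  #   manifest = true
  # }
}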

Igor Rodionov avatar
Igor Rodionov

@Julio Chana this comes from the helm provider. In version 2.13.2, metadata was an output block: https://registry.terraform.io/providers/hashicorp/helm/2.13.2/docs/resources/release#metadata
In version 2.14.0 they changed it to a list of objects: https://registry.terraform.io/providers/hashicorp/helm/2.14.0/docs/resources/release#metadata
Try pinning the helm provider to 2.13.2 and check whether the drift is still there.
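
A minimal sketch of the suggested pin (adjust to however you already declare providers in your root module):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2" # version where metadata was still an output block
    }
  }
}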

Igor Rodionov avatar
Igor Rodionov
#1315 Suppress metadata changes at terraform plan?

Older helm provider versions didn’t show these metadata changes when terraform detected a change / ran plan, and the new helm provider versions post a wall of text of metadata changes which has no real value (to me) and just clogs up my tf plan output.

example doing a tf plan on a helm resource where we only updated the image tag var:

  # module.api-eks.helm_release.app will be updated in-place
  ~ resource "helm_release" "app" {
        id                         = "api"
      ~ metadata                   = [
          - {
              - app_version = "latest"
              - chart       = "api"
              - name        = "api"
              - namespace   = "production"
              - revision    = 1420
              - values      = jsonencode(
                    {
                      - appConfig          = {
                          - apiBackendReplicationTaskId       = "none"
                          - applicationMode                   = "none"
                          - baseApiUrl                        = "none"
                          - something = "else"
                          - foo = "bar"
                          - 
                          < it goes on for many >
                          < many >
                          < lines >
                          < and it's of no value >
                          < just noise on tf plan >
                          

Let’s say only the deploymentTimestamp field is updated. We’d rather see just the changed field in the terraform plan and suppress the whole metadata update; e.g. terraform plan should only show

  # module.api-eks.helm_release.app will be updated in-place
  ~ resource "helm_release" "app" {
        id                         = "api"
        [...]
        # (25 unchanged attributes hidden)

      + set {
          + name  = "deploymentTimestamp"
          + value = "19012024-225905"
        }

        # (62 unchanged blocks hidden)
        

This way the terraform plan is clear and concise, and more human (easier to read/follow) without the metadata removal diff. Does that make sense?

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: v1.6.5
Helm Provider version: v2.12.0 (same on v2.12.1)
Kubernetes version: v2.24.0

Terraform configuration

resource "helm_release" "app" {
  namespace       = var.namespace != "" ? var.namespace : terraform.workspace
  chart           = var.chart_name
  version         = var.chart_version
  name            = var.app_name
  timeout         = var.deployment_timeout
  cleanup_on_fail = var.helm_cleanup_on_fail
  atomic          = var.helm_atomic_creation
  max_history     = var.helm_max_history
  wait            = var.helm_wait_for_completion

  dynamic "set" {
    for_each = local.k8s_app

    content {
      name  = set.key
      value = set.value
    }
  }

  values = var.some_ingress_values
}

Question

Is there any way to suppress the metadata changes at terraform plan?

Igor Rodionov avatar
Igor Rodionov
#1344 `metadata` always recomputes, causing redeployment for every single plan-apply

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: v1.7.3
Provider version: v2.12.1
Kubernetes version: v1.29.2+k3s1

Affected Resource(s)

• helm_release

Terraform Configuration Files

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.12.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "this" {
  name             = "redis"
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "redis"
  namespace        = "cache"
  create_namespace = false
  version          = "19.x"
}

Debug Output

https://gist.github.com/meysam81/8b4c8805d7dcfa7dd116443c4cc42841

NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.

Panic Output

Steps to Reproduce

Either of the following:

terraform apply
terraform plan -out tfplan && terraform apply tfplan

Expected Behavior

Even when the latest chart version is the same as the running helm release, it still updates the metadata and tries to re-deploy the release. A frustrating experience, really. The Ansible helm module does a much better job in this regard when it comes to idempotency.

The funny thing is, if you pin the version to an exact version, e.g. 19.0.1, this won’t happen and “No changes” will be printed on the screen. But any wildcard versioning causes the release to recompute the metadata. :shrug:
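
A minimal sketch of that workaround, restating the exact pin the issue mentions (chart details as in the configuration above):

resource "helm_release" "this" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  namespace  = "cache"
  version    = "19.0.1" # exact pin; a wildcard like "19.x" triggers the metadata recompute on every plan
}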

Actual Behavior

It finds metadata changed for every single plan-apply.

Important Factoids

Nothing special. I have tried this in different clusters with different versions. All have the same outcome.

References

GH-1097
#1150

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Igor Rodionov avatar
Igor Rodionov
#1150 GH-1097 causes `metadata` to always be recomputed with some helm charts.

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.4.6
Provider version: v2.10.0
Kubernetes version: 1.23.17 (EKS)

Affected Resource(s)

• Helm Repository: https://dysnix.github.io/charts
• Helm Chart Version: 0.3.1
• Helm Chart: raw

Terraform Configuration Files

resource "helm_release" "filebeat" {
  chart      = "raw"
  name       = var.filebeat.name
  namespace  = var.filebeat.namespace
  repository = "https://dysnix.github.io/charts"
  version    = "0.3.1"

  values = [
    <<-EOF
    ${yamlencode({ resources = [local.filebeat] })}
    EOF
  ]
}

Debug Output

NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.

Panic Output

N/A

Steps to Reproduce

  1. Use the above-defined Helm chart to deploy something. You can really deploy any type of Kubernetes resource.
  2. Rerun terraform plan
  3. Observe that the metadata is going to be regenerated when it shouldn’t.

Downgrading from 2.10.0 to 2.9.0 causes the issue to go away.

Expected Behavior

I would expect that rerunning Terraform with no changes to the Helm values should not cause the metadata to be recomputed.

Actual Behavior

Observe that the metadata gets regenerated

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
Terraform will perform the following actions:
  # module.kubernetes_filebeat_autodiscovery_cluster_1.helm_release.filebeat will be updated in-place
  ~ resource "helm_release" "filebeat" {
        id                         = "autodiscover"
      ~ metadata                   = [
          - {
              - app_version = ""
              - chart       = "raw"
              - name        = "autodiscover"
              - namespace   = "elastic-monitors"
              - revision    = 3
              - values      = jsonencode(
                    {
                      - resources = [
                          - {
                              - apiVersion = "beat.k8s.elastic.co/v1beta1"
                              - kind       = "Beat"
                              - metadata   = {
                                  - labels      = null
                                  - name        = "autodiscover"
                                  - namespace   = "default"
                                }
                                ... <spec_removed>
                            },
                        ]
                    }
                )
              - version     = "v0.3.1"
            },
        ] -> (known after apply)
        name                       = "autodiscover"
        # (27 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.

Important Factoids

References

GH-1097

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Igor Rodionov avatar
Igor Rodionov

There are several issues with this problem ^

Julio Chana avatar
Julio Chana

Thank you so much! I’m testing this. I was using version “2.12.1”

1
Julio Chana avatar
Julio Chana

I’ve tried with both versions and it’s still happening. I’m still investigating how to fix this.

Is it also happening to you?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Julio Chana can you show me logs of your terraform plan?

Stan V avatar

Guys, would you be able to help me with issues I’m facing? I’m trying to deploy an AWS EKS cluster with the LB, but I’m getting this error.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Stan V you didn’t describe the error, so I don’t know where to start, but I would guess this question is more appropriate for the #kubernetes channel.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)
locals {
  cluster_name = var.cluster_name
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.15.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.29"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  cluster_addons = {
    aws-ebs-csi-driver = {
      service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = var.ami_type

  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.medium"]

      min_size     = var.min_size
      max_size     = var.max_size
      desired_size = var.desired_size
    }

    two = {
      name = "node-group-2"

      instance_types = ["t3.medium"]

      min_size     = var.min_size
      max_size     = var.max_size
      desired_size = var.desired_size
    }
  }
}

module "lb_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name                              = "shop_eks_lb"
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }

  depends_on = [
    module.eks
  ]
}

resource "kubernetes_service_account" "service-account" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name"      = "aws-load-balancer-controller"
      "app.kubernetes.io/component" = "controller"
    }
    annotations = {
      "eks.amazonaws.com/role-arn"               = module.lb_role.iam_role_arn
      "eks.amazonaws.com/sts-regional-endpoints" = "true"
    }
  }

  depends_on = [
    module.lb_role
  ]
}

resource "helm_release" "alb-controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "region"
    value = "eu-west-3"
  }

  set {
    name  = "vpcId"
    value = module.vpc.vpc_id
  }

  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  set {
    name  = "clusterName"
    value = local.cluster_name
  }

  depends_on = [
    kubernetes_service_account.service-account
  ]
}
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Ah, well, the issue here is that you cannot deploy resources to an EKS cluster in the same Terraform plan that creates the cluster. I mean, you can hack something that usually works, but it is not officially supported.

Best practice is to have one root module that creates the EKS cluster, and then additional modules that install things into the cluster.
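
A minimal sketch of that split, assuming the cluster is created elsewhere and only its name is passed into the second root module (resource and variable names here are illustrative):

# Second root module: installs things into an already-existing EKS cluster
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}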

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Stan V FYI

Stan V avatar

2024-10-02

Release notes from terraform avatar
Release notes from terraform
06:03:28 PM

v1.9.7 1.9.7 (October 2, 2024) BUG FIXES:

config generation: escape map keys with whitespaces (#35754)

config generation: escape map keys with whitespace by liamcervante · Pull Request #35754 · hashicorp/terraform

This PR updates the config generation package so map keys with whitespace are escaped with quotes. This is already handled automatically for the normal attribute generation, and nested blocks are a…
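
For context, HCL requires quoting map keys that contain whitespace, which is what the generated configuration was previously missing; a small illustrative example (names made up):

locals {
  tags = {
    "cost center" = "platform" # keys containing whitespace must be quoted
    team          = "sre"
  }
}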

2024-10-03

Mauricio Wyler avatar
Mauricio Wyler

Hi. I’m using Atmos for a multi-account, multi-environment project on AWS… and I love it! Now I need to deploy the same group of terraform components (about 20 components) multiple times (mostly in the same environment), changing only the name (I’m using the tenant context to achieve this). The idea is to have, for example, demo-a, demo-b, demo-c, etc…

So, I used GO templates and it works…

ecs/service/application{{ if .tenant }}/{{ .tenant }}{{ end }}:
  ...
ecs/service/api{{ if .tenant }}/{{ .tenant }}{{ end }}:
  ...
...

And then

atmos terraform apply ecs/service/application/demo-a -s uw2-dev
atmos terraform apply ecs/service/api/demo-a -s uw2-dev
...

But I have the feeling there must be a better (and probably easier) way to do this… Any ideas? Thanks.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @Mauricio Wyler

I think this is a good way to do it; we also use Go templates in the component names when we need to dynamically generate many Atmos components. If it works for you, then it’s fine.

Mauricio Wyler avatar
Mauricio Wyler

Thanks @Andriy Knysh (Cloud Posse) for your reply… Good to know I’m on the correct path!

1
Mauricio Wyler avatar
Mauricio Wyler

@Andriy Knysh (Cloud Posse) do workflows support Go templates? Env vars? Or something to help me make them dynamic?

In:

name: Stack workflows
description: |
  Deploy application stack

workflows:

  apply-resources:
    description: |
      Run 'terraform apply' on core resources for a given stack
    steps:
      - command: terraform apply ecs/service/application/demo-a -auto-approve
      - command: terraform apply ecs/service/api/demo-a -auto-approve

I would like pass demo-a as a parameter or env variable… to prevent me from creating repeated workflows for demo-b, demo-c, etc…

Thanks again!

Leo Przybylski avatar
Leo Przybylski

Is anyone familiar with setting up flink workspaces on confluent?

I noticed there are some flink resources available through the confluent provider. I am not familiar enough to know if this is the best pattern to follow. For example, would it be better to run flink on kubernetes or on self hosted AWS resources? If anyone has some experience with this and can give some insight, I would appreciate it.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse) @Andriy Knysh (Cloud Posse) @Matt Calhoun

1
RB avatar

Did anyone see this ai atlantis song made by a community member? We need more ai songs about our tools lol

https://youtu.be/fThdaeqLDPs

3

2024-10-04

Rishav avatar

Took a while to find the right combination of actions, but happy to share my guide on securing a cloud-provisioning pipeline with GitHub automation, which spans:

• “keyless” AWS authentication

• Terraform/Tofu IaC workflow

• deployment protections

(This is my first blog/article in years and I’m super keen for any feedback, from content to formatting and anything in between – thank you!)

Secure cloud provisioning pipeline with GitHub automation

Master best practices to secure cloud provisioning, automate pipelines, and deploy infrastructure-as-code in DevOps lifecycle.

setheryops avatar
setheryops

Anyone know if the Pluralith project is still alive? They haven’t had a new release since March of 2023, so I’m guessing not. It also looks like they are not responding to any issues either. If it is dead, does anyone know of a good alternative?

Pluralith - Visualize Terraform Infrastructure

Pluralith lets you visualise and document your Terraform infrastructure in a completely automated way.

Jeremy Albinet avatar
Jeremy Albinet

I can only recommend our solution: https://www.brainboard.co

Brainboard: Cloud Infrastructure Designer

Brainboard is an AI driven platform to visually design, generate terraform code and manage cloud infrastructure, collaboratively.

1

2024-10-07

tretinha avatar
tretinha

Hey, I’m trying Atmos for the first time and setting it up with OpenTofu. I have tofu available in my current path and a pretty straightforward (I guess) atmos.yaml file:

base_path: "./"

components:
  terraform:
    command: "tofu"
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  # excluded_paths:
  #   - "**/_defaults.yaml"
  name_pattern: "{stage}/{region}"

logs:
  file: "/dev/stderr"
  level: Debug

However, when I try something like atmos terraform init -s dev/us-east-1 or atmos terraform init -s dev or atmos terraform init, I get:

exec: "terraform": executable file not found in $PATH

Any ideas? I’m not entirely sure about what I’m missing to make this work. Thank you!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please make sure you don’t have command: in the YAML manifests, similar to this

components:
  terraform:
    vpc:
      command: "terraform"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure OpenTofu | atmos

Atmos natively supports OpenTofu, similar to the way it supports Terraform. It’s compatible with every version of OpenTofu, designed to work with multiple different versions of it concurrently, and can even work alongside HashiCorp Terraform.

tretinha avatar
tretinha

I don’t think I have any

tretinha avatar
tretinha

could this be related to faulty definitions in other places?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please run atmos describe component <component> -s <stack> and check what value is in the command field in the output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh wait

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a wrong command

atmos terraform init -s dev/us-east-1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it needs a component and a stack

tretinha avatar
tretinha

oh!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform init <component> -s dev/us-east-1
tretinha avatar
tretinha

ah! that was it. thanks a lot @Andriy Knysh (Cloud Posse)

2
tretinha avatar
tretinha

silly mistake

2024-10-08

tretinha avatar
tretinha

Hey, I’m trying to execute a plan and I’m getting the following output:

% atmos terraform plan keycloak_sg -s deploy/dev/us-east-1

Variables for the component 'keycloak_sg' in the stack 'deploy/dev/us-east-1':
aws_account_profile: [redacted]
cloud_provider: aws
environment: dev
region: us-east-1
team: [redacted]
tfstate_bucket: [redacted]
vpc_cidr_blocks:
    - 172.80.0.0/16
    - 172.81.0.0/16
vpc_id: [redacted]

Writing the variables to file:
components/terraform/sg/-keycloak_sg.terraform.tfvars.json

Using ENV vars:
TF_IN_AUTOMATION=true

Executing command:
/opt/homebrew/bin/tofu init -reconfigure

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.70.0

OpenTofu has been successfully initialized!

Command info:
Terraform binary: tofu
Terraform command: plan
Arguments and flags: []
Component: keycloak_sg
Terraform component: sg
Stack: deploy/dev/us-east-1
Working dir: components/terraform/sg

Executing command:
/opt/homebrew/bin/tofu workspace select -keycloak_sg
Usage: tofu [global options] workspace select NAME

  Select a different OpenTofu workspace.

Options:

    -or-create=false    Create the OpenTofu workspace if it doesn't exist.

    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg


Executing command:
/opt/homebrew/bin/tofu workspace new -keycloak_sg
Usage: tofu [global options] workspace new [OPTIONS] NAME

  Create a new OpenTofu workspace.

Options:

    -lock=false         Don't hold a state lock during the operation. This is
                        dangerous if others might concurrently run commands
                        against the same workspace.

    -lock-timeout=0s    Duration to retry a state lock.

    -state=path         Copy an existing state file into the new workspace.


    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg

exit status 1

goroutine 1 [running]:
runtime/debug.Stack()
        runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
        runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
github.com/cloudposse/atmos/cmd.init.func17(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/cloudposse/atmos/cmd/terraform.go:33 +0x150
github.com/spf13/cobra.(*Command).execute(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x10750ec80)
        github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
        github.com/cloudposse/atmos/cmd/root.go:88 +0x214
main.main()
        github.com/cloudposse/atmos/main.go:9 +0x1c

I’m not really sure what I can do since the error message suggests an underlying tofu/terraform error and not an atmos one. I bet my stack/component has something wrong but I’m not entirely sure why. The atmos.yaml is the same as in the previous message I sent here yesterday. I’d appreciate any pointers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(Please use atmos)

tretinha avatar
tretinha

sure!

2024-10-09

Release notes from terraform avatar
Release notes from terraform
04:43:29 PM

v1.10.0-alpha20241009 1.10.0-alpha20241009 (October 9, 2024) NEW FEATURES:

Ephemeral resources: Ephemeral resources are read anew during each phase of Terraform evaluation, and cannot be persisted to state storage. Ephemeral resources always produce ephemeral values.
Ephemeral values: Input variables and outputs can now be defined as ephemeral. Ephemeral values may only be used in certain contexts in Terraform configuration, and are not persisted to the plan or state files.

terraform output -json now displays…
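
A rough sketch of what the new ephemeral markers look like in configuration, based on this alpha announcement (syntax may still change before the final 1.10 release):

# An input variable marked as ephemeral is read each run and never persisted
# to the plan or state files.
variable "db_password" {
  type      = string
  ephemeral = true
}

# In a child module, an output can be marked ephemeral as well; ephemeral
# values may only flow into other ephemeral contexts.
output "db_password" {
  value     = var.db_password
  ephemeral = true
}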

2024-10-11

toka avatar

I was trying to use https://registry.terraform.io/modules/cloudposse/stack-config/yaml/1.6.0/submodules/remote-state and found out that it’s actually a submodule of the yaml-stack-config module.

Since every submodule of yaml-stack-config is using context (null-label), that got me thinking: if terraform-provider-context is meant to supersede null-label, should I even start using yaml-stack-config when starting my codebase pretty much from scratch?

RB avatar
paololazzari/terraform-repl

A terraform console wrapper for a better REPL experience

1
RB avatar

This is easier to use than creating a new directory of files with test code

Joe Perez avatar
Joe Perez

And it probably eliminates the need for troubleshooting with outputs; I’ll have to try it out

muhaha avatar

Ola :wave: I am struggling with a loop (over subnets) and this structure:

variable "vpcs" {
  description = "List of VPCs"
  type        = list(map(any))

  default = [
    {
      name    = "vpc-1"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-1"
          cidr = "10.0.1.0/24"
        },
        {
          name = "subnet-2"
          cidr = "10.0.2.0/24"
        }
      ]
    },
    {
      name    = "vpc-2"
      cidr    = "10.0.0.0/16"
      subnets = [
        {
          name = "subnet-3"
          cidr = "10.0.3.0/24"
        },
        {
          name = "subnet-4"
          cidr = "10.0.4.0/24"
        }
      ]
    }
  ]
}

any ideas? something like

for_each = { for v in var.vpcs, s in v.subnets : "${v.name}-${s.name}" => s }
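
One common way to express this (a sketch, not from the thread; the aws_vpc.this reference is hypothetical, and the variable type likely needs to be any or list(object({...})) rather than list(map(any)), since the elements are not uniform maps) is to flatten the nested lists into a single map first, because HCL for expressions don’t support two iterators in one clause:

locals {
  # Build a map keyed by "<vpc>-<subnet>" from the nested structure
  subnets = {
    for s in flatten([
      for v in var.vpcs : [
        for sn in v.subnets : {
          vpc_name    = v.name
          subnet_name = sn.name
          cidr        = sn.cidr
        }
      ]
    ]) : "${s.vpc_name}-${s.subnet_name}" => s
  }
}

resource "aws_subnet" "this" {
  for_each = local.subnets

  # hypothetical: assumes a companion aws_vpc.this resource keyed by VPC name
  vpc_id     = aws_vpc.this[each.value.vpc_name].id
  cidr_block = each.value.cidr

  tags = {
    Name = each.key
  }
}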