#terraform (2024-05)

#terraform: Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-05-01

Pradeepvarma Senguttuvan avatar
Pradeepvarma Senguttuvan

Hi Team, I am trying to create a read replica for DocumentDB with a different instance class than the primary

module "documentdb_cluster" {
  source                          = "cloudposse/documentdb-cluster/aws"

Since instance_class is a string, I cannot have a different instance class for my read replica. Any suggestions on this? How do I get a different instance class for my replica? (edited)
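One possible workaround, sketched below with hedged assumptions: DocumentDB allows each instance in a cluster to have its own class, so the replica can be declared as a standalone aws_docdb_cluster_instance outside the module and pointed at the module's cluster. The cluster_name output name is an assumption about the module's interface; check its actual outputs before relying on it.

```hcl
# Hypothetical sketch: a read replica with its own instance class,
# added alongside the instances the module manages.
resource "aws_docdb_cluster_instance" "replica" {
  identifier         = "docdb-replica-large"                  # hypothetical name
  cluster_identifier = module.documentdb_cluster.cluster_name # assumed output name
  instance_class     = "db.r6g.large"                         # differs from the primary
  promotion_tier     = 15                                     # de-prioritize for failover
}
```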

Release notes from terraform avatar
Release notes from terraform
12:23:30 AM

v1.9.0-alpha20240501 1.9.0-alpha20240501 (May 1, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily intended…

Release v1.9.0-alpha20240501 · hashicorp/terraform

1.9.0-alpha20240501 (May 1, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that…

terraform console: Multi-line entry support by apparentlymart · Pull Request #34822 · hashicorp/terraform

The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…
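As a rough illustration of the feature (an assumed session transcript, not taken from the release notes), the console now waits for the closing bracket instead of erroring on the first line:

```
$ terraform console
> [
    "one",
    "two",
  ]
[
  "one",
  "two",
]
```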

2024-05-02

loren avatar

Super cool experiment in the 1.9 alpha release, they’re looking for feedback if you want to give it a go… https://discuss.hashicorp.com/t/experiment-feedback-input-variable-validation-can-cross-reference-other-objects/66644

Experiment Feedback: Input Variable Validation can cross-reference other objects

Hi everyone, In yesterday’s Terraform CLI v1.9.0-alpha20240501 there is an experimental implementation of the long-requested feature of allowing input variable validation rules to refer to other values in the same module as the variable declaration. For example, it allows the validation rule of one variable to refer to another: terraform { # This experiment opt-in will be required as long # as this remains experimental. If the experiment # is successful then this won’t be needed in the …
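A minimal sketch of what the experiment enables, based only on the description above; the exact opt-in name is truncated in the quote, so it is left out here too:

```hcl
terraform {
  # The experiment opt-in goes here while this remains experimental;
  # its exact name is truncated in the announcement above, so it is
  # deliberately omitted from this sketch.
}

variable "storage_type" {
  type = string
}

variable "iops" {
  type = number

  validation {
    # Hypothetical rule: with the experiment enabled, a validation
    # can reference a *different* variable in the same module.
    condition     = var.storage_type != "io1" || var.iops >= 1000
    error_message = "When storage_type is \"io1\", iops must be at least 1000."
  }
}
```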

2024-05-03

dinodam avatar
dinodam

:wave: Hello, team!

I am having a slight issue with your terraform-aws-lambda-elasticsearch-cleanup module. It used to work fine, but since I upgraded the TF AWS provider to 5.47.0 from 4.20.1 and bumped the pinned module version to 0.14.0 from 0.12.3, I am getting the following error. I am using Terraform version 1.8.2.

 Error: External Program Execution Failed
│ 
│   with module.lambda-elasticsearch-cleanup.module.artifact.data.external.curl[0],
│   on .terraform/modules/lambda-elasticsearch-cleanup.artifact/main.tf line 3, in data "external" "curl":
│    3:   program    = concat(["curl"], var.curl_arguments, ["--write-out", "{\"success\": \"true\", \"filename_effective\": \"%%{filename_effective}\"}", "-o", local.output_file, local.url])
│ 
│ The data source received an unexpected error while attempting to execute
│ the program.
│ 
│ Program: /usr/bin/curl
│ Error Message: curl: (22) The requested URL returned error: 404
│ 
│ State: exit status 22

Have I missed an upgrade step somewhere or is there an issue with the file?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm… I have a theory. We recently rolled out some new workflows, maybe the artifact wasn’t produced.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov can you take a look?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Essentially, what the module does is a clever hack to download the artifact from S3 based on the commit SHA of the module version you are pulling

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For some reason, the commit SHA corresponding to that module version does not exist, which leads me to believe there’s a problem with the artifact and something wrong with the pipeline

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Gabriela Campana (Cloud Posse)

dinodam avatar
dinodam

Thanks for getting back to me. Would this not affect all users, then?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

….all users using it at that version

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For now, try an older version

dinodam avatar
dinodam

ok… standby

dinodam avatar
dinodam

The issue with an older version of the module is that AWS TF provider 5 has deprecated some calls:

source_json and override_json
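For reference, the provider replaced those arguments with list-valued equivalents (deprecated in v4, removed in v5). A minimal migration sketch, with the base/extra document names invented for illustration:

```hcl
data "aws_iam_policy_document" "combined" {
  # Pre-v5 style, no longer accepted:
  #   source_json   = data.aws_iam_policy_document.base.json
  #   override_json = data.aws_iam_policy_document.extra.json

  # v5 style: the replacements take lists of JSON documents.
  source_policy_documents   = [data.aws_iam_policy_document.base.json]
  override_policy_documents = [data.aws_iam_policy_document.extra.json]

  statement {
    sid       = "Example"
    actions   = ["s3:GetObject"]
    resources = ["*"]
  }
}
```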

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I gotcha… so this is a problem, especially if you manage terraform-aws-lambda-elasticsearch-cleanup in the same lifecycle as, say, your Elasticsearch cluster. While that’s not unreasonable, and probably what most are doing, it’s an example of why we like to break root modules out by lifecycle, reducing the tight coupling and dependencies on provider versions. That said, I totally get why this is a problem; just explaining why we (Cloud Posse) are less affected by these types of changes.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Components | atmos

Components are opinionated building blocks of infrastructure as code that solve one specific problem or use-case.

dinodam avatar
dinodam

Makes sense. I just grouped together things that belong together, and ended up in this situation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for understanding… We’ll get this fixed, just cannot commit to when that will be.

dinodam avatar
dinodam

No issues, this is just on my TF upgrade branch and not on master, so I am good for now.

2024-05-04

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Announcement: In support of using OpenTofu, starting with Geodesic v2.11.0, we are pre-installing package repos to allow you to easily install OpenTofu in your Dockerfile.

ARG OPEN_TOFU_VERSION=1.6.2
RUN apt-get update && apt-get install -y tofu=${OPEN_TOFU_VERSION}

2024-05-05

miko avatar

Guys, is this normal behavior? In AWS EKS I upgraded my nodes to t3.large from t3.medium. Before confirming “yes” I saw that Terraform would destroy the old nodes in order to proceed with the upgrade, but I didn’t expect it to delete the volumes as well. Good thing it only happened in our testing environment. My question is: is this normal behavior when I upgrade instance_types? I was hoping to be able to upgrade without affecting my persistent volumes.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

This is really more of a #kubernetes question, but I will take a crack at it here.

It seems to me you are confusing the EBS volumes attached to EC2 instances as root volumes, which provide ephemeral storage (e.g. emptyDir) for Kubernetes, with the EBS volumes backing PersistentVolumes. The former have lifecycles tied to the instances: when new instances are created (e.g. when the Auto Scaling Group scales up), new EBS volumes are created, and when the instances are deleted, so are those EBS volumes.

Kubernetes PersistentVolumes, which may be implemented as EBS volumes or something else, should persist until their PersistentVolumeClaims are deleted, and then only if the reclaim policy is set to “delete”.

miko avatar

Thanks @Jeremy G (Cloud Posse), though I’m using a StatefulSet for my deployment (I have the EBS CSI driver set up as well), so I thought this should be using EBS volumes that, from what I understood, should be independent of my EC2 lifecycle?

In my StatefulSet deployment I have defined volumeClaimTemplates that, from what I understood, should be using EBS volumes. Thank you for the answer, though; should I post this to #kubernetes (my bad for posting here, I was using Terraform to maintain our infra) and continue the discussion there? :o

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: authentication-postgres
  labels:
    app: authentication-postgres-app
  namespace: postgres
spec:
  serviceName: authentication-postgres-svc
  replicas: 1
  selector:
    matchLabels:
      app: authentication-postgres-app
  template:
    metadata:
      labels:
        app: authentication-postgres-app
    spec:
      containers:
        - name: authentication-postgres-container
          image: postgres:16.2-bullseye
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: authentication_db
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentication-postgres-secret
                  key: postgres-password
          volumeMounts:
            - name: data
              mountPath: /mnt/authentication-postgres-data
      imagePullSecrets:
      - name: docker-reg-cred
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "5Gi"
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

What I’m saying is that on upgrade, some EBS volumes will get deleted and some will not. What leads you to believe that your PersistentVolumes, with active PersistentVolumeClaims, are the ones being deleted?

miko avatar

Ohhh I see! I came to this conclusion because the PostgreSQL database lost its data after I upgraded the nodes, which means it was one of those EBS volumes that unfortunately got cleared? Is there a way for me to avoid that, so that the EBS volumes are safe when I upgrade nodes?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t know how you deployed PostgreSQL. It seems like you deployed it to use ephemeral storage rather than dedicated PersistentVolumes, despite having a volumeClaimTemplate, but this gets into the details of your PostgreSQL deployment, and maybe Helm chart. Which is why I directed you to #kubernetes.
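For completeness: PersistentVolumes backing active claims are not deleted when nodes are replaced; the reclaim policy only matters once the claim itself is deleted. If the goal is to keep EBS volumes even then, a StorageClass with a Retain policy can be managed from Terraform. This is a hedged sketch using the kubernetes provider; the class name and gp3 parameters are illustrative:

```hcl
# Sketch: PVs provisioned from this class keep their EBS volume
# (the PV goes to status "Released") even after the PVC is deleted.
resource "kubernetes_storage_class" "retained_gp3" {
  metadata {
    name = "gp3-retain" # hypothetical name
  }
  storage_provisioner    = "ebs.csi.aws.com" # EBS CSI driver
  reclaim_policy         = "Retain"          # do not delete the EBS volume
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = true
  parameters = {
    type = "gp3"
  }
}
```

The StatefulSet's volumeClaimTemplates would then set storageClassName: gp3-retain.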

2024-05-06

2024-05-07

Pradeepvarma Senguttuvan avatar
Pradeepvarma Senguttuvan
09:12:11 AM

Any luck on this?

Hi Team, I am trying to create a read replica for DocumentDB with a different instance class than the primary

module "documentdb_cluster" {
  source                          = "cloudposse/documentdb-cluster/aws"

Since instance_class is a string, I cannot have a different instance class for my read replica. Any suggestions on this? How do I get a different instance class for my replica? (edited)

Ercan Ermis avatar
Ercan Ermis

Hello all,

This is my first message here in Slack! I found a little bug in the memcached module. An issue is open: https://github.com/cloudposse/terraform-aws-elasticache-memcached/issues/78 Can someone check and help me send a PR? My changes are ready locally. Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you are able to open a PR, post it in #pr-reviews and someone will review it promptly

Ercan Ermis avatar
Ercan Ermis

PR sent. thank you so much @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t see one from you in #pr-reviews

Ercan Ermis avatar
Ercan Ermis

Yep, because I was a little bit sleepy last night. Approvals are welcome.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#79 fix: elasticache_subnet_group creation

what

• If we pass elasticache_subnet_group_name, the aws_elasticache_subnet_group.default[0] won’t be created anymore

why

• No one needs a new elasticache_subnet_group when one was already created before and we just want to pass its name

references

• Check issue #78
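The fix described in the PR follows the common “create unless a name is passed” pattern. A generic sketch of that pattern (not the module’s actual code; variable names are assumptions):

```hcl
variable "elasticache_subnet_group_name" {
  type        = string
  description = "Existing subnet group to reuse; leave empty to create one."
  default     = ""
}

locals {
  create_subnet_group = var.elasticache_subnet_group_name == ""
  subnet_group_name   = local.create_subnet_group ? join("", aws_elasticache_subnet_group.default[*].name) : var.elasticache_subnet_group_name
}

resource "aws_elasticache_subnet_group" "default" {
  count      = local.create_subnet_group ? 1 : 0
  name       = "memcached-subnet-group" # hypothetical
  subnet_ids = var.subnet_ids           # assumed variable
}
```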

2024-05-08

Release notes from terraform avatar
Release notes from terraform
09:13:31 AM

v1.8.3 1.8.3 (May 8, 2024) BUG FIXES:

terraform test: Providers configured within an overridden module could panic. (#35110) core: Fix crash when a provider incorrectly plans a nested object when the configuration is null (#35090)…

Don't evaluate providers within overridden modules by jbardin · Pull Request #35110 · hashicorp/terraform

While we don’t normally encounter providers within modules, they are technically still supported, and could exist within a module which has been overridden for testing. Since the module is not bein…

core: prevent panics with null objects in nested attrs by jbardin · Pull Request #35090 · hashicorp/terraform

When descending into structural attributes, don’t try to extract attributes from null objects. Unlike with blocks, nested attributes allow the possibility of assigning null values which could be ov…

2024-05-10

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, not sure if this is an issue, but I’m having cycles every time I try to destroy a service, and after a lot of work I discovered that it’s related to the security groups. If I manually remove the service SG and rules, the cycles are gone. This is related to the ecs alb service module.

Julien Bonnier avatar
Julien Bonnier

I have that problem a lot when using a security group created in another backend.

i.e.:

Backend A: contains a security group
Backend B: uses the security group

One pattern I’ve been using to avoid problems is:

Backend A: contains a security group
Backend B: attaches rules to the security group and uses those.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule

Don’t know if that helps ¯\_(ツ)_/¯
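A sketch of that pattern, with remote-state output names invented for illustration: the group itself stays in backend A, and backend B only attaches standalone rules, so destroying B never needs to destroy the group and the dependency cycle goes away.

```hcl
# Backend B: attach a rule to a security group owned by backend A.
resource "aws_security_group_rule" "service_ingress" {
  type      = "ingress"
  from_port = 443
  to_port   = 443
  protocol  = "tcp"
  # Assumed remote-state layout and output name:
  security_group_id        = data.terraform_remote_state.net.outputs.shared_sg_id
  # Hypothetical SG created in this backend:
  source_security_group_id = aws_security_group.service.id
}
```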

Juan Pablo Lorier avatar
Juan Pablo Lorier

Thanks! I will look into it. I manage all the resources in the same backend, but I have little control over creation/destruction, as the Cloud Posse modules are the ones managing the resources.

2024-05-13

susie-h avatar
susie-h

This might have been discussed in an earlier thread already, but is anyone else seeing weird behavior in their editor within the terragrunt-cache download of the terraform-null-label module? VSCode is throwing an error in the cache folder; when I drill down it takes me to /examples/autoscalinggroup/main.tf line 28:

# terraform-null-label example used here: Set tags on everything that can be tagged
  tag_specifications {
    for_each = ["instance", "volume", "elastic-gpu", "spot-instance-request"]

with the error message “Unexpected attribute: An attribute named “for_each” is not expected here. Terraform”
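For context, for_each is only valid on resources, modules, and dynamic blocks; a plain nested block such as tag_specifications cannot take it directly, which is exactly what the editor is flagging. The example presumably intends a dynamic block, roughly like this (the launch-template details are assumptions):

```hcl
resource "aws_launch_template" "example" {
  name_prefix = "example-"
  image_id    = var.ami_id # hypothetical variable

  # Emit one tag_specifications block per taggable resource type.
  dynamic "tag_specifications" {
    for_each = ["instance", "volume", "elastic-gpu", "spot-instance-request"]
    content {
      resource_type = tag_specifications.value
      tags          = module.label.tags # assumed terraform-null-label output
    }
  }
}
```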

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe post a link to this message in #terragrunt

susie-h avatar
susie-h

Ok, just to cross-post? Or am I in the wrong channel?

2024-05-14

Veerapandian M avatar
Veerapandian M

Hi Experts,

I would like to learn IaC (Terraform with Terragrunt), but I have no experience with it. If possible, please help me figure out the next steps.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hi @Veerapandian M welcome to the community!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Definitely feel free to ask pointed questions as you continue your journey.

Prasanna avatar
Prasanna

Hello Team, I am a beginner with Terraform. I want to set up the environment specified in https://github.com/cloudposse/terraform-datadog-platform. Can someone help point me to the documentation? I know it’s a stupid question.

Kindly provide the basic flow, installation, and setup. I am referring to this solution so that I can customize it to read a swagger.json file and convert it to synthetic tests automatically. That’s the end goal; I want to build a solution for it.

cloudposse/terraform-datadog-platform

Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML configuration, complete with automated tests.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy White (Cloud Posse)

cloudposse/terraform-datadog-platform

Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML configuration, complete with automated tests.

2024-05-15

Sergio avatar

Hello everyone! I hope you’re all doing well. I’m currently facing an issue with creating simple infrastructure using Terragrunt as a wrapper for Terraform. The goal is to create 2 subnetworks in 2 different zones and create 3 VMs in each subnetwork. The subnetworks are created without issues; the problem arises when I try to create the 3 VMs in each subnetwork.

Project has the following structure

├── environmentsLive
│   ├── dev
│   │   ├── net
│   │   │   └── terragrunt.hcl
│   │   ├── vms
│   │   │   └── terragrunt.hcl
│   │   └── terragrunt.hcl
└── modules
    ├── network
    │   ├── main.tf
    │   ├── outputs.tf
    │   ├── variables.tf
    │   └── versions.tf
    └── vm
        ├── main.tf
        ├── outputs.tf
        ├── variables.tf
        └── versions.tf

I am running terragrunt run-all apply inside the dev folder in order to have a state file for each module specified in the dev folder, and it works. The problem is that for the “vms” module I need to iterate over the “subnet_id” output variable of the “net” module, which is:

subnet_id = [
  "projects/playground-s-11-59f50f2a/regions/us-central1/subnetworks/dev-subnet-us-central1",
  "projects/playground-s-11-59f50f2a/regions/us-east1/subnetworks/dev-subnet-us-east1",
]

But the inputs {} block of the vms module’s “terragrunt.hcl” file expects only one value per variable.

The content of “terragrunt.hcl” file of vms module is:

include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "/home/app/terr/Terraform/src/ModulesBySeperateState/modules/vm"
}

dependency "vpc" {
  config_path = "/home/app/terr/Terraform/src/ModulesBySeperateState/environmentsLive/dev/net"

}


inputs = {
  subnet_id = dependency.vpc.outputs.subnet_id
  first_zone_per_region = dependency.vpc.outputs.first_zone_per_region
  regions = dependency.vpc.outputs.regions
}

The main.tf for vm module looks like this:

resource "google_compute_instance" "vm" {
  for_each     = var.names
  name         = "${each.value.name}-${var.environment}-${var.first_zone_per_region[var.regions]}"
  machine_type = each.value.type
  zone         = var.first_zone_per_region[var.regions]
  network_interface {
    subnetwork = var.subnet_id
  }
}

I’ve tried to create a wrapper module for vms to iterate over subnet_id and provide output for the vms.

module   "wrapvms" {
  source = "./emptyVmModuleForWrap"
  environment           = var.environment
  count                 = length(var.subnet_id)
  region                = var.regions[count.index]
  subnet_id             = subnet_id[count.index]
  first_zone_per_region = var.first_zone_per_region
  names                 = var.names

}

But due to my lack of experience it doesn’t work. Could someone please offer some assistance or guidance? Any help would be greatly appreciated. Thank you in advance!
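One way to avoid the wrapper entirely is to fan out inside the vm module itself, pairing every VM definition with every subnet via setproduct and driving a single for_each from that map. The hedged sketch below reuses the variable names from the snippets above but guesses at their shapes (names as a map of {name, type} objects; subnet_id and regions as parallel lists):

```hcl
locals {
  # One instance per (VM definition, subnet) combination.
  vm_subnet_pairs = {
    for pair in setproduct(keys(var.names), range(length(var.subnet_id))) :
    "${pair[0]}-${pair[1]}" => {
      vm     = var.names[pair[0]]
      subnet = var.subnet_id[pair[1]]
      region = var.regions[pair[1]]
    }
  }
}

resource "google_compute_instance" "vm" {
  for_each     = local.vm_subnet_pairs
  name         = "${each.value.vm.name}-${var.environment}-${var.first_zone_per_region[each.value.region]}"
  machine_type = each.value.vm.type
  zone         = var.first_zone_per_region[each.value.region]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12" # hypothetical image
    }
  }

  network_interface {
    subnetwork = each.value.subnet
  }
}
```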

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please use #terragrunt

2024-05-16

Release notes from terraform avatar
Release notes from terraform
02:13:33 PM

v1.9.0-alpha20240516 1.9.0-alpha20240516 (May 16, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily…

Release v1.9.0-alpha20240516 · hashicorp/terraform

1.9.0-alpha20240516 (May 16, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc th…

terraform console: Multi-line entry support by apparentlymart · Pull Request #34822 · hashicorp/terraform

The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…
