#terraform (2022-10)

Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2022-10-03

José avatar

I was looking for https://github.com/cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms but it returns a 404 Not Found. Is this expected?

Sudhakar Isireddy avatar
Sudhakar Isireddy
cloudposse/terraform-aws-sns-cloudwatch-sns-alarms

Terraform module that configures CloudWatch SNS alerts for SNS

RB avatar
cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms

Terraform module to create CloudWatch Alarms on ECS Service level metrics.

José avatar

Yes. Mostly because I found the RDS CloudWatch module that already ships with some alarms, and I was wondering whether there was an EC2 module with pre-defined alarms, thresholds and so on. Neither of the previous ones has that. Looks like the module is now private?

José avatar

Also wondering if I can collaborate somehow to do a PR and add some more event_categories in https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms, since I am looking to add/use:

• availability

• configuration change

• deletion

• read replica

Since the ones currently available are:

• recovery

• failure

• maintenance

• notification

• failover

• low storage

cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic
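
For context, those categories correspond to the event_categories of an aws_db_event_subscription; a minimal standalone sketch of what the expanded list could look like (the SNS topic and DB instance names are hypothetical, and this is not necessarily how the module wires it internally):

resource "aws_db_event_subscription" "example" {
  name        = "rds-alarms-example"
  sns_topic   = aws_sns_topic.alarms.arn # hypothetical topic
  source_type = "db-instance"
  source_ids  = [aws_db_instance.example.identifier]

  # existing categories plus the proposed additions
  event_categories = [
    "recovery",
    "failure",
    "maintenance",
    "notification",
    "failover",
    "low storage",
    "availability",
    "configuration change",
    "deletion",
    "read replica",
  ]
}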

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB

cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic

Tom Vaughan avatar
Tom Vaughan

Having an issue with the terraform-aws-tfstate-backend module and the s3 bucket. We put our state files in the same s3 bucket under different keys. For example: state-file-bucket/vpc, state-file-bucket/rds.

I am using the latest release of the module and it is throwing an error saying that the s3 bucket already exists, which it does b/c it was created previously. Using the module this way wasn’t a problem with previous versions. Is there something I need to set for it to work? I tried setting bucket_enabled to false, but that ends up clearing the bucket value in backend.tf, which then throws another error. So I am going in circles here. When using version 0.37.0 I don’t have this issue.

RB avatar

Please post the plan

RB avatar

Maybe it’s the log bucket from the 0.38.0 release? Just guessing tho.

https://github.com/cloudposse/terraform-aws-tfstate-backend/releases

Tom Vaughan avatar
Tom Vaughan

Plan with bucket_enabled = true:

Terraform will perform the following actions:

  # module.tfstate-backend.aws_s3_bucket.default[0] will be created
  + resource "aws_s3_bucket" "default" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "XXXXXXXX-terraform"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "s3:PutObject"
                      + Condition = {
                          + StringNotEquals = {
                              + "s3:x-amz-server-side-encryption" = [
                                  + "AES256",
                                  + "aws:kms",
                                ]
                            }
                        }
                      + Effect    = "Deny"
                      + Principal = {
                          + AWS = "*"
                        }
                      + Resource  = "arn:aws:s3:::XXXXXXXX-terraform/*"
                      + Sid       = "DenyIncorrectEncryptionHeader"
                    },
                  + {
                      + Action    = "s3:PutObject"
                      + Condition = {
                          + Null = {
                              + "s3:x-amz-server-side-encryption" = "true"
                            }
                        }
                      + Effect    = "Deny"
                      + Principal = {
                          + AWS = "*"
                        }
                      + Resource  = "arn:aws:s3:::XXXXXXXXX-terraform/*"
                      + Sid       = "DenyUnEncryptedObjectUploads"
                    },
                  + {
                      + Action    = "s3:*"
                      + Condition = {
                          + Bool = {
                              + "aws:SecureTransport" = "false"
                            }
                        }
                      + Effect    = "Deny"
                      + Principal = {
                          + AWS = "*"
                        }
                      + Resource  = [
                          + "arn:aws:s3:::XXXXXXX-terraform/*",
                          + "arn:aws:s3:::XXXXXXX-terraform",
                        ]
                      + Sid       = "EnforceTlsRequestsOnly"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Attributes" = "state"
          + "Name"       = "XXXXXXXX-terraform"
          + "Namespace"  = "fp"
          + "Stage"      = "dev"
        }
      + tags_all                    = {
          + "Attributes" = "state"
          + "Name"       = "XXXXXXXX-terraform"
          + "Namespace"  = "fp"
          + "Stage"      = "dev"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + cors_rule {
          + allowed_headers = (known after apply)
          + allowed_methods = (known after apply)
          + allowed_origins = (known after apply)
          + expose_headers  = (known after apply)
          + max_age_seconds = (known after apply)
        }

      + grant {
          + id          = (known after apply)
          + permissions = (known after apply)
          + type        = (known after apply)
          + uri         = (known after apply)
        }

      + lifecycle_rule {
          + abort_incomplete_multipart_upload_days = (known after apply)
          + enabled                                = (known after apply)
          + id                                     = (known after apply)
          + prefix                                 = (known after apply)
          + tags                                   = (known after apply)

          + expiration {
              + date                         = (known after apply)
              + days                         = (known after apply)
              + expired_object_delete_marker = (known after apply)
            }

          + noncurrent_version_expiration {
              + days = (known after apply)
            }

          + noncurrent_version_transition {
              + days          = (known after apply)
              + storage_class = (known after apply)
            }

          + transition {
              + date          = (known after apply)
              + days          = (known after apply)
              + storage_class = (known after apply)
            }
        }

      + logging {
          + target_bucket = (known after apply)
          + target_prefix = (known after apply)
        }

      + object_lock_configuration {
          + object_lock_enabled = (known after apply)

          + rule {
              + default_retention {
                  + days  = (known after apply)
                  + mode  = (known after apply)
                  + years = (known after apply)
                }
            }
        }

      + replication_configuration {
          + role = (known after apply)

          + rules {
              + delete_marker_replication_status = (known after apply)
              + id                               = (known after apply)
              + prefix                           = (known after apply)
              + priority                         = (known after apply)
              + status                           = (known after apply)

              + destination {
                  + account_id         = (known after apply)
                  + bucket             = (known after apply)
                  + replica_kms_key_id = (known after apply)
                  + storage_class      = (known after apply)

                  + access_control_translation {
                      + owner = (known after apply)
                    }

                  + metrics {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }

                  + replication_time {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }
                }

              + filter {
                  + prefix = (known after apply)
                  + tags   = (known after apply)
                }

              + source_selection_criteria {
                  + sse_kms_encrypted_objects {
                      + enabled = (known after apply)
                    }
                }
            }
        }

      + server_side_encryption_configuration {
          + rule {
              + apply_server_side_encryption_by_default {
                  + sse_algorithm = "AES256"
                }
            }
        }

      + versioning {
          + enabled    = true
          + mfa_delete = false
        }

      + website {
          + error_document           = (known after apply)
          + index_document           = (known after apply)
          + redirect_all_requests_to = (known after apply)
          + routing_rules            = (known after apply)
        }
    }

  # module.tfstate-backend.aws_s3_bucket_public_access_block.default[0] will be created
  + resource "aws_s3_bucket_public_access_block" "default" {
      + block_public_acls       = true
      + block_public_policy     = true
      + bucket                  = (known after apply)
      + id                      = (known after apply)
      + ignore_public_acls      = true
      + restrict_public_buckets = true
    }

  # module.tfstate-backend.local_file.terraform_backend_config[0] will be created
  + resource "local_file" "terraform_backend_config" {
      + content              = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = "./backend.tf"
      + id                   = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Plan with bucket_enabled = false

Terraform will perform the following actions:

  # module.tfstate-backend.local_file.terraform_backend_config[0] will be created
  + resource "local_file" "terraform_backend_config" {
      + content              = <<-EOT
            terraform {
              required_version = ">= 0.12.2"

              backend "s3" {
                region         = "us-east-1"
                bucket         = ""
                key            = "rds/terraform.tfstate"
                dynamodb_table = "XXXXXX-rds-state-lock"
                profile        = "XXXXXX-dev"
                role_arn       = ""
                encrypt        = "true"
              }
            }
        EOT
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = "./backend.tf"
      + id                   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Probably not related but I am getting this deprecation warning:

│ Warning: Argument is deprecated
│
│   with module.tfstate-backend.aws_s3_bucket.default,
│   on .terraform\modules\tfstate-backend\main.tf line 156, in resource "aws_s3_bucket" "default":
│  156: resource "aws_s3_bucket" "default" {
│
│ Use the aws_s3_bucket_server_side_encryption_configuration resource instead
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

interesting, as we’ve used the terraform-aws-tfstate-backend module ourselves and I don’t recall an issue… You said you’re using the same bucket but with different keys… are you trying to call the module twice?

Tom Vaughan avatar
Tom Vaughan

no, not calling the module twice.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

in my case we have 2 instances of terraform-aws-tfstate-backend… deployed in 2 different accounts. Then our key is specified in our deployments via partial backend configurations. Each deployment stores its statefile in a subdirectory/folder of the bucket
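
For anyone following along, a partial backend configuration means the shared settings live in code while the key is supplied at init time; roughly (the bucket and table names here are illustrative):

# backend.tf -- the key is deliberately omitted
terraform {
  backend "s3" {
    bucket         = "state-file-bucket"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

# then each deployment picks its own key at init time:
#   terraform init -backend-config="key=vpc/terraform.tfstate"
#   terraform init -backend-config="key=rds/terraform.tfstate"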

2022-10-04

Allan Swanepoel avatar
Allan Swanepoel

Description

AWS Just introduced the capability to enable IMDSv2 when an EC2 instance is created from specific AMIs:
https://aws.amazon.com/about-aws/whats-new/2022/10/amazon-machine-images-support-instance-metadata-service-version-2-default/

Affected Resource(s) and/or Data Source(s)

aws_ami, aws_ami_copy, aws_ami_from_instance

Potential Terraform Configuration

resource "aws_ami" "example" {
  name                = "terraform-example"
  virtualization_type = "hvm"
  root_device_name    = "/dev/xvda"
  imds_support        = "v2.0"

  ebs_block_device {
    device_name = "/dev/xvda"
    snapshot_id = "snap-xxxxxxxx"
    volume_size = 8
  }
}

References

https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RegisterImage.html#API_RegisterImage_RequestParameters (see parameter ImdsSupport)

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-new-instances.html#configure-IMDS-new-instances-ami-configuration

Would you like to implement a fix?

Yes

Jonas Steinberg avatar
Jonas Steinberg

Hey everyone,

Looking to have a bit of a debate on the topic of monitoring as code and whether or not *it actually matters*. More specifically: whether monitors, dashboards, service level objectives and the like actually need to be backed by IaC and managed within a GitOps workflow.

Many of us have monitoring products like Datadog or CloudWatch in which the vast majority of monitors, dashboards, SLOs and the like have been clickops’d. For example, at my current shop there are about 350 dashboards, almost none are in IaC, and what’s more we don’t really know which ones are critical and which ones can be deleted. And the same goes for monitors and SLOs.

Now imagine that you used Terraformer (or equivalent, if there even is such a thing for CloudFormation) to get all these things into Terraform and into all the appropriate repos. And then you even took that a step further and developed a system to do this continuously and also to clean up your monitoring product along the way, e.g. delete any dashboard not labeled critical or something.

My questions to the community are:

• so what? All of those clickops’d dashboards are backed up by the CSP or 3rd party; if they have a catastrophic event they’ll probably be able to get them back to you?

• and do we really want to be writing dashboards as code? It gets fairly ridiculous.

• and as for labeling them and then automating their cleanup: will it be that much of a feng shui or cost improvement?

Curious about people’s thoughts regarding this topic, because now that I have everything in IaC and a potential solution for automating parity and cleanup, I find myself asking, “Who cares?” And of course if there are other reasons for storing monitors, dashboards, SLOs and the like as code, please bring those up, as I’m always interested in learning how other people are solving problems!

Jonas Steinberg avatar
Jonas Steinberg

@jsreed hey thanks mind placing that in a thread response to my post?

jsreed avatar

Sure

jsreed avatar

AGREE! Doing things for the sake of doing them seems to be popular when doing any kind of IaC. Putting something into version-controlled code is busy work for tools like Datadog that are SaaS services anyways. Everything-as-code is a dev mindset; while there are great efficiencies to be had from it, there are places where it doesn’t make sense. Pick the right tool for the job, spend the money on it if it’s commercial, avoid writing your own. The more home-brewed tooling you make, the worse it is to maintain when the brain trust that made it moves on, and they WILL move on; it creates massive tech debt and added cost down the road.

Jonas Steinberg avatar
Jonas Steinberg

So code reuse is one “pro” I didn’t cover that a friend brought up so I figured I’d add it here.

Jonas Steinberg avatar
Jonas Steinberg

In other words if everything is in IaC then when new services come online they can basically just copy-pasta their way into fast-and-easy IaC.

Jonas Steinberg avatar
Jonas Steinberg

I can definitely see this being useful for making large dashboards. Although they can just be cloned in the UI!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes; management at scale is an important thing. Also, different platforms make it harder or easier to enforce roles and policies. By having it in code, it is easier to enforce general guardrails and empower teams to be autonomous without needing to provide direct admin access

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

atmos has added json schema and opa for this reason

jsreed avatar

I find that pattern more common when teams make their own tooling and leverage Grafana etc… and you’re totally right, copy-paste is a big plus with that kind of deployment… but with commercial tooling it’s not necessary. I always try to keep focus on what the business “does”. Is it developing IT tooling? Great, then making this stuff makes sense. If you’re running an online quilting supply store, it doesn’t make sense to waste money/cycles on developing and maintaining tooling.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Something like the terraformer approach described above only addresses BCP/DR, not how to manage monitoring at scale. The most common pattern for that is some kind of abstraction that works for your business.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Lets discuss on #office-hours tomorrow

Jonas Steinberg avatar
Jonas Steinberg

@Erik Osterman (Cloud Posse) yeah I’ll be there. Curious to get people’s feedback on this because now that I’m staring at it I’m not sure I really care. But maybe I just don’t see the bigger picture.

I think in my case too the SCIM portion is not relevant here because we have an IT team that handles all that via Okta clickops.

Although maybe they wouldn’t if we offered them a TF approach? Not sure – all their SCIM is of course backed by AD.

Anyway yeah I’ll bring it up at office hours.

Jonas Steinberg avatar
Jonas Steinberg

Also yeah Erik fyi my focus is not at all about BCP/DR. It’s about managing observability at scale: just like you put it.

Basically: so everyone’s dashboards are in datadog and browser memory only: who cares? I mean do we really need to lock down who can change what in a monitor or dashboard? Are we really going to make datadog read only and push all changes through IaC? Who cares – no one really goes into dashboards and screws them up. I can’t think of one time where that’s happened. I suppose it happens from time to time with metrics, monitors, SLOs etc. but meh – not that much.

Ray Botha avatar
Ray Botha

I’m also keen to hear thoughts on this in #office-hours. Maybe one benefit to the IaC approach is the ability to document and structure what’s configured with more detail, close to the infrastructure and services that are being monitored. It’s easy to end up with 300 dashboards and monitors where it isn’t immediately clear what they’re for, and afaik Datadog doesn’t have many good options for this (though I haven’t been using it recently).

Jonas Steinberg avatar
Jonas Steinberg

Awesome.

Yeah so far the main arguments are:

• reusability

• auditing

• easier search, i.e. easier to search GitHub? dubious claim lol

So basically nothing new there from a devops best practices perspective.

Chris Dobbyn avatar
Chris Dobbyn

We use IaC for monitoring/alerting when developing our various service offerings. This ensures consistency between various resources.

We clickops’d this for a long time and just had really big rules covering multiple resources. Over time this resulted in less granularity/visibility into problems in our environment as it grew to multiple different companies.

We found the initial setup painful, but over the long term, because we also leverage GitOps, it resulted in a significant reduction in hands-on time for monitoring/alerting setup and upkeep.
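
To make the monitoring-as-code idea concrete, a monitor definition with the Datadog Terraform provider looks roughly like this (a generic sketch; the query, thresholds, and tags are placeholders, not anyone’s production config):

resource "datadog_monitor" "high_cpu" {
  name    = "High CPU on {{host.name}}"
  type    = "metric alert"
  message = "CPU usage above threshold. Notify: @slack-alerts"
  query   = "avg(last_5m):avg:system.cpu.user{env:prod} by {host} > 90"

  monitor_thresholds {
    warning  = 80
    critical = 90
  }

  # tagging makes the "which ones are critical" question answerable
  tags = ["team:platform", "severity:critical", "managed-by:terraform"]
}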

Chris Dobbyn avatar
Chris Dobbyn

I will advise one caution though: if your monitoring provider decides to significantly change their monitoring/alerting (see New Relic), it’s painful.

Jonas Steinberg avatar
Jonas Steinberg

Awesome, thanks @Chris Dobbyn. Yeah I guess if one needed to roll a ton of monitors and dashboards maybe that would be useful.

Jonas Steinberg avatar
Jonas Steinberg

One thing I’m starting to understand is that not many people have experience coding up a ton of observability and really staying on top of that, at least not as far as I can tell. So there isn’t a ton of data on how useful it is compared to other potential work.

For example, this article by the head of product at Dynatrace is utterly uninspiring:

https://thenewstack.io/why-developers-need-observability-as-code/

Why Developers Need Observability-as-Codeattachment image

The goal of Observability-as-Code is to track every function and request in the full context of the stack, and generate the most comprehensive and actionable insights, which correspond with intelligence from across teams.

david.gregory_slack avatar
david.gregory_slack

Few of the reasons I’ve always imagined I want devs to define dashboards in code:

• same dashboards (and alarms) deployed across stages (dev/prod parity) - seen lots of things in prod that could have been caught in dev (but maybe I’m kidding myself that they would have been)

• sense of devs owning the dashboards that they ‘hand over’ to the rest of the business as part of the ‘this thing is done now, here’s how we know it’s working’

• Keep the dashboard definition with the app code

If I ever get to the maturity level where this is happening, I’ll let you know whether it worked

Jonas Steinberg avatar
Jonas Steinberg

Thank you @david.gregory_slack. I’ll be broaching this topic during office hours today in case you’re interested.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone at the HashiConf Global mixer tonight?

managedkaos avatar
managedkaos

Was planning to attend but gonna miss it! Will try to catch you tomorrow! :wave:

Sebastian Maniak avatar
Sebastian Maniak

i wish.. couldn’t make it.. next year

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@managedkaos let me know if you make it here today

managedkaos avatar
managedkaos

I’m here! Was wondering where you would broadcast from. I’ll be on zoom for sure.

2022-10-05

2022-10-06

jim carl avatar
jim carl

Hello everyone!

jim carl avatar
jim carl

Is anybody planning to take the Terraform exam this month or soon?

Release notes from terraform avatar
Release notes from terraform
05:13:33 PM

v1.3.2 1.3.2 (October 06, 2022) BUG FIXES: Fixed a crash caused by Terraform incorrectly re-registering output value preconditions during the apply phase (rather than just reusing the already-planned checks from the plan phase). (#31890) Prevent errors when the provider reports that a deposed instance no longer exists (…)

core: Don't re-register checkable outputs during the apply step by apparentlymart · Pull Request #31890 · hashicorp/terraformattachment image

Once again we’re caught out by sharing the same output value node type between the plan phase and the apply phase. To allow for some slight variation between plan and apply without drastic refactor…

Kyrylo avatar

:wave: Hello, team!

Is it possible to create two eks_node_groups for one eks_cluster? I’ve seen that for such a scenario the documentation recommends using eks_workers, but I need to use eks_node_group instead.

When I tried to create two eks_node_groups I got an error that the default IAM role already exists, and since there’s no documentation on how to replace it I’m stuck. Could you give me a hint?

https://github.com/cloudposse/terraform-aws-eks-node-group

cloudposse/terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group

Chris Dobbyn avatar
Chris Dobbyn

Set the node_role_arn variable. You should be able to get this from the eks_node_group_role_arn output of your first node group.

cloudposse/terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group
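
Sketched out, that suggestion looks something like the following (most inputs trimmed; check the module’s variables.tf for the exact type of node_role_arn — shown here as a list, per the module version I last looked at):

module "node_group_1" {
  source = "cloudposse/eks-node-group/aws"
  # ... cluster, subnet, and sizing inputs ...
}

module "node_group_2" {
  source = "cloudposse/eks-node-group/aws"

  # reuse the IAM role created by the first node group
  # instead of letting the module create a second default role
  node_role_arn = [module.node_group_1.eks_node_group_role_arn]
  # ... cluster, subnet, and sizing inputs ...
}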

2022-10-07

2022-10-10

José avatar

Hello Team. Question related to the RDS repo and the parameter_group_name and option_group_name vars. It looks like every time a new plan is deployed, a new option_group and parameter_group is created with the namespace-environment-stage-name format.

What if we want to use the default ones? I don’t see any way to do that currently. Because of this, some limits are being reached, which is not the expected result; we’d rather use the defaults to keep things simple.

Any idea?

Allan Swanepoel avatar
Allan Swanepoel

Personally, my advice is to copy the defaults into new groups at any rate, thereby reducing the need to move off the defaults should you have to change these in the future, especially when you get to db performance optimisations

Alex Jurkiewicz avatar
Alex Jurkiewicz

when you need to tweak the parameters of one database at 3am during an outage, you will appreciate having a dedicated set of parameter groups for each cluster. Request the limit increase and be happy you are successful enough to need limit increases

José avatar

Many thanks. Then I will proceed.

2022-10-11

Herman Smith avatar
Herman Smith

Can a provider configuration depend upon a module output, or do providers need to be fully initialized before any modules can be applied?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmm… we definitely source provider configurations from the output of other modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  profile = module.iam_roles.profiles_enabled ? coalesce(var.import_profile_name, module.iam_roles.terraform_profile_name) : null
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…that’s an example

Herman Smith avatar
Herman Smith

Hey @Erik Osterman (Cloud Posse), so I think a key constraint in my use-case was that the modules (say: A and B) belonged to the same state, which I believe renders it impossible in my case: plans would become impossible, given that the provider for module B depends upon post-apply data from module A

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that begs the question then: is this the right architecture for root modules?

Herman Smith avatar
Herman Smith

I’m considering two different states: one containing a module named db, and the other containing a module db_config. db creates the generic underlying data store, and db_config applies additional configuration on top of it (e.g. schemas). This enables the provider passed to db_config to successfully obtain post-apply data for the user store (i.e. hostname) for connecting to it and planning its provisioning operations, upon plan of db_config (given that db will already be successfully applied by this stage)

Is that the kind of architecture you might suggest here?
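
Roughly, the db_config root module could then configure its provider from the db state’s outputs; a sketch with hypothetical names, using the community postgresql provider purely as an example:

data "terraform_remote_state" "db" {
  backend = "s3"
  config = {
    bucket = "state-file-bucket"
    key    = "db/terraform.tfstate"
    region = "us-east-1"
  }
}

# the hostname is known at plan time because the db state is already applied
provider "postgresql" {
  host     = data.terraform_remote_state.db.outputs.hostname
  username = var.db_admin_user
  password = var.db_admin_password
}

module "db_config" {
  source = "./modules/db_config"
  # schemas, grants, etc.
}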

Adam Panzer avatar
Adam Panzer

Hiya!

I’m using the https://github.com/cloudposse/terraform-aws-acm-request-certificate module and I have this setup:

Domain: bar.com
Subdomain: foo.baz.bar.com

In v0.16.2 this apply worked just fine.

module "acm_request_certificate_east_coast" {
  source = "cloudposse/acm-request-certificate/aws"

  version         = "0.16.2"

  domain_name                       = "foo.baz.bar.com"
  process_domain_validation_options = true
  ttl                               = "300"
  subject_alternative_names         = ["*.foo.baz.bar.com", "*.bar.com"]

  providers = {
    aws = aws.use1
  }
}

Now in v0.17.0 it gives me an error saying it can’t find a zone for baz.bar.com, which is weird cuz I don’t need that zone.

Adam Panzer avatar
Adam Panzer

The issue appears to be related to work done recently to support SANs and complex deployments: https://github.com/cloudposse/terraform-aws-acm-request-certificate/compare/0.16.2...0.17.0

Adam Panzer avatar
Adam Panzer

Ah found nitrocode @RB

RB avatar

@Adam Panzer we probably need a new input to verify the zone id using the data source

Adam Panzer avatar
Adam Panzer

Lemme look and see if I can figure this out

RB avatar

The problem is that we need to be able to get the r53 zone id to create the necessary r53 records

RB avatar

And how do we distinguish between one zone id and another if the SANs are in diff subdomains?

Adam Panzer avatar
Adam Panzer

oh, it’s the split

Adam Panzer avatar
Adam Panzer

two sans in two different zones

Adam Panzer avatar
Adam Panzer

it’s always dns

RB avatar

Feel free to put in a pr if you have a good fix and we can continue the discussion there

Adam Panzer avatar
Adam Panzer

yup. thanks for the fast response

Adam Panzer avatar
Adam Panzer

Question: Instead of guessing / trying to figure out which zone to use, why not just have the person give it to you, be explicit:

subject_alternative_names = [
  {
    zone_to_lookup = "foo.baz.com",
    names          = ["a.foo.baz.com", "b.foo.baz.com"]
  },
  {
    zone_to_lookup = "*.baz.com",
    names          = ["bob.baz.com", "alice.baz.com"]
  }
]
Adam Panzer avatar
Adam Panzer

The iterating might be … messy tho

RB avatar

@Adam Panzer could you create a ticket for now in the upstream module

RB avatar

you can also set var.process_domain_validation_options=false to disable appending the acm cert verification record to each route53 zone to get around the failure

Adam Panzer avatar
Adam Panzer

Yeah. Will do. Which upstream module though? It’s just https://github.com/cloudposse/terraform-aws-acm-request-certificate

RB avatar
cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation

Adam Panzer avatar
Adam Panzer

will do

Adam Panzer avatar
Adam Panzer

Describe the Bug

When you have two SANs that belong to different zones, the module tries to add validation records to the incorrect zone.

Expected Behavior

It should add validation records to zones:

foo.baz.bar.com and bar.com

Steps to Reproduce

Steps to reproduce the behavior:
Say you have these two domains:

Domain: bar.com
Subdomain: foo.baz.bar.com

apply this:

module "acm_request_certificate_east_coast" {
  source = "cloudposse/acm-request-certificate/aws"

  domain_name                       = "foo.baz.bar.com"
  process_domain_validation_options = true
  ttl                               = "300"
  subject_alternative_names         = ["*.foo.baz.bar.com", "*.bar.com"]

  providers = {
    aws = aws.use1
  }
}

Environment (please complete the following information):

Mac OS

RB avatar

thank you

RB avatar

please mention if the var.process_domain_validation_options=false workaround works for you

Adam Panzer avatar
Adam Panzer

That probably would work but I’ve pinned it to the previous version for now too, which worked fine

Adam Panzer avatar
Adam Panzer

I had other modules on that version anyways

Adam Panzer avatar
Adam Panzer

(other copies of that mod)

RB avatar

could you also explain which zone each of your domain and subdomains are in

RB avatar

and could you show which zones it tried to add the records to?

RB avatar

any additional info that can help understand the issue would be helpful

RB avatar

including your proposal in this slack thread

RB avatar

please and thank you.

Adam Panzer avatar
Adam Panzer

I will say, I’m slightly worried I’m just not understanding DNS that well

Adam Panzer avatar
Adam Panzer

<haiku_its_dns.jpg>

RB avatar

I think you’re correct. I think the best way is to specify the zone id or zone name to look up for each san.

Adam Panzer avatar
Adam Panzer

@RB wait, is it trying to traverse one level back from the domain? Should the domain actually be bar.com, not the subdomain?

RB avatar

depends on where you want the record. i think right now, it assumes you want it in a hosted zone one level up from the domain.

e.g. a foo.baz.bar.com domain would be in the baz.bar.com zone

Adam Panzer avatar
Adam Panzer

Yeah, and if the Domain were bar.com it wouldn’t be possible to go “one up”

Adam Panzer avatar
Adam Panzer

I guess it’s going to do the same for the SANs too?

RB avatar

@Adam Panzer another workaround: could you specify the zone_name or zone_id inputs and see if that works as expected for you?

Adam Panzer avatar
Adam Panzer

Yup, gimme a few

RB avatar

Did it work?

Adam Panzer avatar
Adam Panzer

Sorry. Startup life is chaotic. I’ll get to this today!

Adam Panzer avatar
Adam Panzer

I’m going to pull this up now

mrwacky avatar
mrwacky

Is there a way to use TF to attach Control Tower guardrails to OUs? Best I can see is to recreate with SCPs manually

mrwacky avatar
mrwacky

Closes #26612
Dependent on #26975, #26978

References Output from Acceptance Testing

$ TF_ACC=1 make testacc TESTS=TestAccControlTowerControl_basic PKG=controltower
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./internal/service/controltower/... -v -count 1 -parallel 20 -run='TestAccControlTowerControl_basic'  -timeout 180m
=== RUN   TestAccControlTowerControl_basic
=== PAUSE TestAccControlTowerControl_basic
=== CONT  TestAccControlTowerControl_basic
--- PASS: TestAccControlTowerControl_basic (207.35s)
PASS
ok  	github.com/hashicorp/terraform-provider-aws/internal/service/controltower	210.019s

...

2022-10-12

taskiner avatar
taskiner

Hi all! I wonder if anyone has experienced weird cycle errors when trying to disable a module with count. I drew the cycles out, but it’s not readable. Any hints for debugging these are much appreciated! Error cycle in thread.

taskiner avatar
taskiner
│ Error: Cycle: module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_subnet.default["subnet-XXXXXXXX"] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_iam_role.cloudwatch_delivery[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_vpc.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_kms_key.cloudtrail (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_security_group.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_subnet.default["subnet-0247c69d71dbc51dd"] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_cloudwatch_log_group.default_vpc_flow_logs[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.securityhub_baseline_eu-west-1[0].aws_securityhub_finding_aggregator.main[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.securityhub_baseline_eu-west-1[0].aws_securityhub_standards_subscription.aws_foundational[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.securityhub_baseline_eu-west-1[0].aws_securityhub_account.main (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_route_table.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_network_acl.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.route_table_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.unauthorized_api_calls[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.security_group_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.console_signin_failures[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.cloudtrail_cfg_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_cloudwatch_log_group.cloudtrail_events[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_default_subnet.default["subnet-09dc30b5e67ab1b4e"] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.aws_config_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.network_gw_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.iam_baseline[0].aws_iam_account_password_policy.default[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.s3_bucket_policy_changes[0] (destroy), 
module.awesome_project_dev4_config.module.secure_baseline[0].module.s3_baseline[0].aws_s3_account_public_access_block.this (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_acl.access_log (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_public_access_block.access_log (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_server_side_encryption_configuration.access_log (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket.access_log (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.iam_baseline[0].aws_iam_role_policy_attachment.support_policy[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.iam_baseline[0].aws_iam_role.support[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.root_usage[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.organizations_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.console_signin_failures[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.s3_bucket_policy_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.organizations_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.route_table_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.cloudtrail_cfg_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.network_gw_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.root_usage[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.security_group_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.aws_config_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.unauthorized_api_calls[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_sns_topic.alarms (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_sns_topic_policy.alarms (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.iam_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.iam_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.disable_or_delete_cmk[0] (destroy), 
module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.disable_or_delete_cmk[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.analyzer_baseline_eu-west-1[0].aws_accessanalyzer_analyzer.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.vpc_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.vpc_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.guardduty_baseline_eu-west-1[0].aws_guardduty_detector.default (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.nacl_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.nacl_changes[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_policy.access_log_policy (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_metric_alarm.no_mfa_console_signin[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.alarm_baseline[0].aws_cloudwatch_log_metric_filter.no_mfa_console_signin[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.securityhub_baseline_eu-west-1[0].aws_securityhub_standards_subscription.cis[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].aws_iam_role_policy.flow_logs_publish_policy[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.ebs_baseline_eu-west-1[0].aws_ebs_encryption_by_default.this (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.vpc_baseline_eu-west-1[0].aws_flow_log.default_vpc_flow_logs[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_server_side_encryption_configuration.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_versioning.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].aws_s3_bucket_policy.audit_log[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_logging.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_public_access_block.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket_acl.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.audit_log_bucket[0].aws_s3_bucket.content (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_sns_topic_policy.local-account-cloudtrail[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_cloudtrail.global (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_sns_topic.cloudtrail-sns-topic[0] (destroy), module.awesome_project_dev4_config.module.secure_baseline[0].module.cloudtrail_baseline[0].aws_iam_role_policy.cloudwatch_delivery_policy[0] (destroy), 
module.awesome_project_dev4.aws_organizations_account.account, module.awesome_project_dev4.output.account_admin_role_arn (expand), provider["registry.terraform.io/hashicorp/aws"].eu-west-1, module.awesome_project_dev4_config.module.secure_baseline[0].aws_iam_role.flow_logs_publisher[0] (destroy)
taskiner avatar
taskiner

more context, I am trying to make a wrapper module for https://github.com/nozaq/terraform-aws-secure-baseline/

taskiner avatar
taskiner

I think I have found the answer in the documentation

A module intended to be called by one or more other modules must not contain any provider blocks. A module containing its own provider configurations is not compatible with the for_each, count, and depends_on arguments that were introduced in Terraform v0.13
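
i.e. the provider configurations have to live in the calling root module and be passed down, which does work with count; a trimmed sketch of the pattern (module path and variable names are made up):

provider "aws" {
  alias  = "eu_west_1"
  region = "eu-west-1"
}

module "wrapped" {
  # the wrapped module must not declare its own provider blocks
  source = "./modules/secure_baseline_wrapper"
  count  = var.baseline_enabled ? 1 : 0

  providers = {
    aws = aws.eu_west_1
  }
}
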
Stephen Bennett avatar
Stephen Bennett

Hi, is there a new way of writing data or resource references in objects to stop the tflint interpolation-only error?

i.e. this code:

resource "aws_s3_bucket_policy" "opensearch-backup" {
  bucket = aws_s3_bucket.opensearch-backup.id
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "BUCKET-POLICY"
    Statement = [
      {
        Sid       = "EnforceTls"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          "${aws_s3_bucket.opensearch-backup.arn}/*",
          "${aws_s3_bucket.opensearch-backup.arn}"
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      }
    ]
  })
}

Gets warnings:

Warning: Missing version constraint for provider "aws" in "required_providers" (terraform_required_providers)

  on s3-snapshot-bucket.tf line 72:
  72: resource "aws_s3_bucket_policy" "opensearch-backup" {

Reference: https://github.com/terraform-linters/tflint-ruleset-terraform/blob/v0.1.1/docs/rules/terraform_required_providers.md

Warning: Interpolation-only expressions are deprecated in Terraform v0.12.14 (terraform_deprecated_interpolation)

  on s3-snapshot-bucket.tf line 85:
  85:           "${aws_s3_bucket.opensearch-backup.arn}"

Reference: https://github.com/terraform-linters/tflint-ruleset-terraform/blob/v0.1.1/docs/rules/terraform_deprecated_interpolation.md
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Replace "${aws_s3_bucket.opensearch-backup.arn}" with aws_s3_bucket.opensearch-backup.arn. Isn’t that all you need to do?

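That is, only the bare reference needs changing; the first entry legitimately interpolates because it appends a suffix:

Resource = [
  "${aws_s3_bucket.opensearch-backup.arn}/*", # interpolation needed: appends "/*"
  aws_s3_bucket.opensearch-backup.arn         # bare reference, no wrapper
]
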
Manoj Kumar avatar
Manoj Kumar

:wave: Hello, team!

Manoj Kumar avatar
Manoj Kumar

I am trying to use this module: https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account

Can someone please post example Terraform code for the IAM role that should be used with this module

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers

Max avatar
arn:aws:iam::123456789012:role/S3Access
cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers
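
A minimal shape for such a role, assuming account 111111111111 runs Terraform and the role lives in the peer account (the names are illustrative, and the EC2 actions should be scoped down for real use):

resource "aws_iam_role" "cross_account_role" {
  name = "cross_account_role"

  # trust the account that runs Terraform
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:root" }
    }]
  })
}

resource "aws_iam_role_policy" "vpc_peering" {
  name = "vpc-peering"
  role = aws_iam_role.cross_account_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ec2:CreateVpcPeeringConnection",
        "ec2:AcceptVpcPeeringConnection",
        "ec2:DeleteVpcPeeringConnection",
        "ec2:DescribeVpcPeeringConnections",
        "ec2:DescribeVpcs",
        "ec2:DescribeRouteTables",
        "ec2:CreateRoute",
        "ec2:DeleteRoute"
      ]
      Resource = "*"
    }]
  })
}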

Manoj Kumar avatar
Manoj Kumar

Hi,

I am having problems with the module assuming the roles:

Error: Error refreshing state: 2 error(s) occurred:

* module.vpc_peering_cross_account.provider.aws.accepter: The role "arn:aws:iam::YYYYYYYYYYYYY:role/cross_account_role" cannot be assumed.

  There are a number of possible causes of this - the most common are:
    * The credentials used in order to assume the role are invalid
    * The credentials do not have appropriate permission to assume the role
    * The role ARN is not valid
* module.vpc_peering_cross_account.provider.aws.requester: The role "arn:aws:iam::XXXXXXXXXXX:role/cross_account_role" cannot be assumed.

  There are a number of possible causes of this - the most common are:
    * The credentials used in order to assume the role are invalid
    * The credentials do not have appropriate permission to assume the role
    * The role ARN is not valid

Using the same credentials and roles, I can assume the roles using the AWS CLI.

Any idea what can cause it?

Thanks

2022-10-13

Manoj Kumar avatar
Manoj Kumar

The Terraform aws_instance resource has an argument ip_address. Does anyone have an example of how to use it?

Denis avatar

Which aws provider version is this? I don’t see aws_instance having an ip_address argument. The example in the docs is:

resource "aws_network_interface" "foo" {
  subnet_id   = aws_subnet.my_subnet.id
  private_ips = ["172.16.10.100"]

  tags = {
    Name = "primary_network_interface"
  }
}

resource "aws_instance" "foo" {
  ami           = "ami-005e54dee72cc1d00" # us-west-2
  instance_type = "t2.micro"

  network_interface {
    network_interface_id = aws_network_interface.foo.id
    device_index         = 0
  }

  credit_specification {
    cpu_credits = "unlimited"
  }
}
Gabriel Zabal avatar
Gabriel Zabal

I used to use

private_ip = "x.x.x.x"
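
e.g. a trimmed sketch (private_ip is an argument on aws_instance, and the address must fall inside the subnet’s CIDR):

resource "aws_instance" "example" {
  ami           = "ami-005e54dee72cc1d00" # us-west-2
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.my_subnet.id
  private_ip    = "172.16.10.100" # must be inside the subnet's CIDR
}
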
Mallikarjuna M avatar
Mallikarjuna M

Hi all, is it possible to send mails using Terraform?

Mallikarjuna M avatar
Mallikarjuna M

After creating AWS IAM users, we want to send the credentials via Terraform. Is there any possibility?

Denis avatar

You can use local-exec.
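
For example, a rough local-exec sketch, assuming a sendmail-style mail CLI exists on the machine running Terraform (note that emailing raw credentials is insecure; prefer a PGP-encrypted password and a forced reset):

resource "aws_iam_user_login_profile" "example" {
  user    = aws_iam_user.example.name
  pgp_key = "keybase:some_user" # encrypts the generated password
}

resource "null_resource" "notify" {
  provisioner "local-exec" {
    # hypothetical mail command; substitute whatever MTA/CLI you actually have
    command = "echo 'Your IAM user ${aws_iam_user.example.name} is ready' | mail -s 'AWS access' user@example.com"
  }
}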

Joe Niland avatar
Joe Niland
cloudposse/terraform-null-smtp-mail

Terraform module to send transactional emails via an SMTP server (e.g. mailgun)

Chris Gray avatar
Chris Gray

I’m 90% sure this issue is just because I’ve been looking at it too long, but here’s hoping someone can help. I’m looking at creating AWS SSO users, and ideally I want to do something like the following, where I define a user in a local, e.g.

locals {
  users = {
    user1 = {
      userdetail = "some info"
      groups = [
        "admin",
        "developer"
      ]
   }
  }
}

Then I would create the group membership based on what is set in the groups list, but right now I’m just unable to grok the logic I need. Has anyone done something similar? I would just use the group list directly, but aws_identitystore_group_membership seems to need a string
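
For the record, the usual trick here is to flatten the user→groups map into one entry per (user, group) pair and for_each over that; a sketch assuming hypothetical aws_identitystore_user/group resources keyed by name:

locals {
  # "user1/admin" => { user = "user1", group = "admin" }, ...
  memberships = merge([
    for user, cfg in local.users : {
      for group in cfg.groups :
      "${user}/${group}" => { user = user, group = group }
    }
  ]...)
}

resource "aws_identitystore_group_membership" "this" {
  for_each = local.memberships

  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]
  group_id          = aws_identitystore_group.this[each.value.group].group_id
  member_id         = aws_identitystore_user.this[each.value.user].user_id
}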

Joe Perez avatar
Joe Perez

maybe the join function can help here?

> join(",", ["group1", "group2", "group3"])
"group1,group2,group3"
Chris Gray avatar
Chris Gray

I actually think I’m probably overbuilding this, pivoted my solution now, cheers anyway!

2022-10-14

tennisonyu avatar
tennisonyu

Hey folks, out of curiosity, has anyone seen this error before when making a simple update like changing the instance type? I am using this module: https://github.com/cloudposse/terraform-aws-eks-node-group

│Error: error creating EKS Node Group: ResourceInUseException: NodeGroup already exists with name xxxx and cluster name yyyy
cloudposse/terraform-aws-eks-node-group

Terraform module to provision a fully managed AWS EKS Node Group

tennisonyu avatar
tennisonyu

Is the recommended way to destroy and recreate the cluster?

2022-10-15

Konrad Bloor avatar
Konrad Bloor

Hey! A sanity check amongst you experts would be great. Say I have a java based lambda. The source is in github. A github action creates the jar file in each release. I have another github action to, on each release, deploy to a specific environment (using the serverless framework).

Terraform creates this environment inside github, with the github provider.

I’d like terraform, when setting up a bit of infrastructure, to also trigger a deploy (of the latest release, using a github actions workflow) into the environment it created, in order to fully set up that part of the system.

Is this a good way of doing things? Should I be doing things differently?

2022-10-16

Muhammad Taqi avatar
Muhammad Taqi

Hey folks, I’m trying to create a VPC and dynamic subnets using the full example, but I get this error again and again. Is this because of the latest versions?

RB avatar

Hi Muhammad. It’s difficult to see what is causing the error without all of the inputs to the module. Please share them when you have time

2022-10-17

Herman Smith avatar
Herman Smith

With a resource for_each’d over a set of strings, I can see that adding a new item to that set (and thus creating a new resource, without affecting the old ones) leads to data sources (similarly for_each’d on the keys() of those resources) considering unchanged resources (with their original keys, no less) as changed, thus forcing data source reads and “known after apply” for everything unnecessarily. Is this a known issue?

I’d have expected the “resource changed” check to apply to the specific key of that resource, not to trigger if any key associated with that resource is changed!

Edit: corrected description at https://sweetops.slack.com/archives/CB6GHNLG0/p1666024029538429?thread_ts=1666022676.252819&cid=CB6GHNLG0

I was slightly incorrect in my description: I have a data source with for_each = my_resource and that resource has a for_each = var.my_map. The keys match, given the use of maps - action_reason is read_because_dependency_pending on the data source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is expected behavior - when you use a list/set in for_each, TF creates a list of resources with the same indexes as the list - when the list indexes are changed (removed, moved, etc.), TF has to recreate all the resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use maps in for_each - this way, TF creates a map of resources, and since the keys in the map do not change, TF will not recreate them

Herman Smith avatar
Herman Smith

Interesting, I expected a set to function similarly (as the abstract notion of a set doesn’t have indices)

Herman Smith avatar
Herman Smith

I was slightly incorrect in my description: I have a data source with for_each = my_resource and that resource has a for_each = var.my_map. The keys match, given the use of maps - action_reason is read_because_dependency_pending on the data source

Herman Smith avatar
Herman Smith

The data source refers to each.value.some_attribute - so given that the keys align, and my_resource["the_key"] isn’t being changed in the plan - each.value.some_attribute should be fetching a known value of that unchanged attribute already in the state

Herman Smith avatar
Herman Smith

I can manually construct the some_attribute value inside of the data source, relying only upon the key (from for_each = my_resource), but to not be able to take advantage of depending upon some_attribute when the resource isn’t even changed is bizarre

Herman Smith avatar
Herman Smith

Sort-of solved, but a mystery with the AWS provider remains.

Concretely, the data source was an aws_iam_policy_document. Removing that data source and jsonencode()‘ing a similar document within the referring AWS resource worked fine.

Interestingly, the attributes of that data source all appear to be known except json and id (from observing terraform show -json on the plan).

How json couldn’t be known, yet all the attributes that make it up are known, is a mystery to me. Well, avoiding that data source and its strange behavior is a workaround..!

Chris Dobbyn avatar
Chris Dobbyn

I’ve seen this before. It often manifests where AWS changes your applied document. It could appear as an added or removed newline, or extra spaces. It’s quite frustrating when it appears.

Sometimes this can be fixed prior to apply, but your workaround works as well.

Alex Jurkiewicz avatar
Alex Jurkiewicz


this is expected behavior - when you use a list/set in for_each, TF creates a list of resources with the same indexes as the list - when the list indexes are changed (removed, moved, etc.), TF has to recreate all the resources
I don’t think this is correct @Andriy Knysh (Cloud Posse).

With a simple configuration like

resource "null_resource" "this" {
  for_each = toset(["a", "b"])
}

if you apply this, then remove a and add c, it won’t touch b. The resources are indexed by item value.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, you are correct, I’ve mixed that up with lists/sets in count

1

2022-10-18

Zach B avatar

When separating configuration from code, such as in the case of what atmos aims to accomplish, I’ve seen a lot of “stacks” or “components” implementations. The same ideas can be applied to terragrunt projects.

What I haven’t yet figured out is this: in every single example where these “components” are used/referenced, they take on a singular noun, and it seems almost impossible to create multiples of the same component in the same stack (and therefore multiples of the same underlying resources) unless you customize that component specifically.

i.e. If I wanted to deploy 14 CloudFront distributions to a single account and region, would you recommend I:

  1. Create a single component using for_each and allow input variables to determine how many are created?
  2. Create a separate component and explicitly define these 14 distributions inline?
  3. Create 14 different stacks that reference the same component?
If option #1, then I have trouble understanding why it is not a “standard” approach to apply for_each on all components.
Zach B avatar

From this atmos example, if I wanted to create multiple VPCs from the same component in this stack, how would I accomplish that?:

https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/prod/us-east-2.yaml

import:
  - mixins/region/us-east-2
  - orgs/cp/tenant1/prod/_defaults
  - catalog/terraform/top-level-component1
  - catalog/terraform/test-component
  - catalog/terraform/test-component-override
  - catalog/terraform/test-component-override-2
  - catalog/terraform/test-component-override-3
  - catalog/terraform/vpc
  - catalog/helmfile/echo-server
  - catalog/helmfile/infra-server
  - catalog/helmfile/infra-server-override

components:
  terraform:
    "infra/vpc":
      vars:
        cidr_block: 10.8.0.0/18
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:
  - catalog/terraform/vpc/defaults

components:
  terraform:
    vpc1:
      metadata:
        component: vpc  # point to the terraform component
        inherits:
          - vpc/defaults # inherit the default vars from the component described in catalog/terraform/vpc/defaults
      vars:
        cidr_block: 10.8.0.0/18
        name: vpc1

    vpc2:
      metadata:
        component: vpc  # point to the terraform component
        inherits:
          - vpc/defaults # inherit the default vars from the component described in catalog/terraform/vpc/defaults
      vars:
        cidr_block: 10.9.0.0/18
        name: vpc2
1
Zach B avatar

Of course, thank you.

What’s your opinion of something like atmos vs a well-configured terragrunt project?

I’m using modules / components in terragrunt and I feel forced to use for_each on all of my components because you cannot use for_each in terragrunt.hcl. Nor is there another method that I know of like atmos uses to reference the same component multiple times in a stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

our opinion is to use atmos, but we are biased since we created it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need help with atmos and with configuring components and stacks

Zach B avatar

@Andriy Knysh (Cloud Posse) That would be great. I’d like to evaluate atmos and Spacelift. Would you have the time to join a video call?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sorry not right now (having a bunch of meetings). @Linda Pham (Cloud Posse) please schedule a call, maybe tomorrow

Zach B avatar

No worries. Thank you. Let me know what time works best for you @Linda Pham (Cloud Posse)

Linda Pham (Cloud Posse) avatar
Linda Pham (Cloud Posse)

@Zach B, can you send out the invite to @Andriy Knysh (Cloud Posse)? He’s available between 8a-1p PST. I’ll DM you his email

Zach B avatar

@Linda Pham (Cloud Posse) Yes of course

José avatar

Hello Team. Looking at the elastic-beanstalk repo, how is it possible to customize the user data in the launch configuration for the autoscaling group?

• I already use ebextensions, which work flawlessly at deploy time, but after a scaling event the ebextension customization is lost (weird).

• I’ve already tried spinning up a new environment and deploying, then manually terminating the instance and waiting for the ASG to spin up a new one; the internal customization is gone. Any advice would be nice. Thanks

2022-10-19

Release notes from terraform avatar
Release notes from terraform
06:03:33 PM

v1.3.3 1.3.3 (October 19, 2022) BUG FIXES: Fix error when removing a resource from configuration which, according to the provider, has already been deleted. (#31850) Fix error when setting empty collections into variables with collections of nested objects with default values. (#32033)…

“The graph node for … has no configuration attached to it” while destroying · Issue #31850 · hashicorp/terraform

Terraform Version 1.3.0 Terraform Configuration Files resource "aws_ssoadmin_managed_policy_attachment" "org_admin_service_catalog" { instance_arn = tolist(data.aws_ssoadmin_ins…

Update go-cty to latest version by liamcervante · Pull Request #32033 · hashicorp/terraform

Fixes #31924 Target Release 1.3.3 Draft CHANGELOG entry BUG FIXES Fixed setting empty collections into variables with collections of nested objects with default values.

Allan Swanepoel avatar
Allan Swanepoel
cdktf/cdktf-provider-template

Prebuilt Terraform CDK (cdktf) provider for template.

Karina Titov avatar
Karina Titov

hi. i have this pr for the aws-iam-role module; what i’m trying to do here is to have the ability to provide a custom name for the IAM role policy that is being created https://github.com/cloudposse/terraform-aws-iam-role/pull/50

have a chance to configure the name of the policy

what

• With this change i want to have an ability to provide a custom name for the policy

why

• the resources i’m working with were not created in the same way this module assumes
• to have a chance to configure the name of the policy

Chris Dobbyn avatar
Chris Dobbyn

You should post this in #pr-reviews

Karina Titov avatar
Karina Titov

alright, thanks

2022-10-20

Berjan B avatar
Berjan B

Hello Folks,

Berjan B avatar
Berjan B

I am creating an S3 bucket with a CDN using Terraform, but where do I define the IAM policy and role?

Gary Cuga-Moylan avatar
Gary Cuga-Moylan

My experience is that a policy using OAI is set up by default when using the cloudfront-s3-cdn module.

Looks something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3GetObjectForCloudFront",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity **************"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::**************/***************"
        },
        {
            "Sid": "S3ListBucketForCloudFront",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity **************"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::**************"
        },
        {
            "Sid": "ForceSSLOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::**************/*",
                "arn:aws:s3:::**************"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
Berjan B avatar
Berjan B
provider "aws" {
  region = "us-east-1"
  alias  = "us-east-1"
}

# create acm and explicitly set it to us-east-1 provider
module "acm_request_certificate" {
  source = "cloudposse/acm-request-certificate/aws"
  providers = {
    aws = aws.us-east-1
  }
  # Cloud Posse recommends pinning every module to a specific version
  domain_name                       = "cdn.xxx.mn"
  process_domain_validation_options = true
  ttl                               = "300"
}

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"

  namespace         = "tbf"
  stage             = "prod"
  name              = "cdn-bucket"
  aliases           = ["cdn.xxxxx.mn"]
  dns_alias_enabled = true
  # parent_zone_name  = "xxxxxx.mn"
  parent_zone_id	= var.aws_route53_hosted_zone_id
  cloudfront_access_logging_enabled	= false
  acm_certificate_arn = module.acm_request_certificate.arn
  # depends_on = [module.acm_request_certificate]
}
Fair Deal Home Buyer avatar
Fair Deal Home Buyer

anybody familiar with the Datadog Lambda forwarder?

managedkaos avatar
managedkaos

I can’t say that i’m familiar with the DD lambda forwarder per se, however: i just tried to set up a DD->AWS integration yesterday using the DD provided CloudFormation template and the stack deployment failed because the lambda forwarder did not install properly. any chance you are having the same problem?

2022-10-21

Omar Hountondji avatar
Omar Hountondji

Hello guys,

Omar Hountondji avatar
Omar Hountondji

I am new to this group; basically I am looking for ways to create multiple performance alerts for Azure VMs in Terraform. Can anybody point me to the right resources/repos for that?
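
One possible direction (a sketch assuming the azurerm provider; the names, metrics, and thresholds are placeholders) is a single azurerm_monitor_metric_alert resource for_each‘d over a map of alert definitions:

locals {
  vm_alerts = {
    cpu_high  = { metric = "Percentage CPU", threshold = 80 }
    disk_read = { metric = "Disk Read Bytes", threshold = 100000000 }
  }
}

resource "azurerm_monitor_metric_alert" "vm" {
  for_each            = local.vm_alerts
  name                = "vm-${each.key}"
  resource_group_name = var.resource_group_name # hypothetical input
  scopes              = [var.vm_id]             # hypothetical VM resource ID

  criteria {
    metric_namespace = "Microsoft.Compute/virtualMachines"
    metric_name      = each.value.metric
    aggregation      = "Average"
    operator         = "GreaterThan"
    threshold        = each.value.threshold
  }

  action {
    action_group_id = var.action_group_id # hypothetical action group
  }
}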

Gary Cuga-Moylan avatar
Gary Cuga-Moylan

Posted this in #aws before I realized there was a terraform channel : https://sweetops.slack.com/archives/CCT1E7JJY/p1666364196926439

Hello. Anyone know how to modify an existing S3 policy using the cloudfront-s3-cdn module?

I’m trying to use the cloudfront-s3-cdn module to create two CloudFront distros - pointing at different directories in the same S3 bucket.

I have successfully created the two CF distros, and have them pointing at the correct origins, and can see that the Response Header Policies are working correctly. The problem I am running into is I cannot figure out how to modify the existing S3 policy to allow the second CF distro access.

When I set override_origin_bucket_policy to true and run terraform plan it looks like the existing policy will be wiped out and automatically replaced (which would break the integration between the first CF distro and the bucket).

When I set additional_bucket_policy and run terraform plan it appears to have no effect.

See example code in thread

2022-10-24

OliverS avatar
OliverS

I’m getting this error by terraform plan :

│ Error: AccessDenied: Access Denied
│ 	status code: 403, request id: XXX, host id: YYY

Setting TF_LOG_PROVIDER=trace does not give additional info (eg the 403 does not appear anywhere in the trace output)

Cloudtrail does not show an access denied operation even after several minutes (I’ve used it many times before for this type of issue, so I’m pretty sure, although no guarantee, that I’m querying it correctly)

Any ideas?

OliverS avatar
OliverS

On TF_LOG_CORE I’m getting a 403, looking into this

OliverS avatar
OliverS

Turns out it was a really tricky mistake: I had forgotten to set the bucket name variable value for a remote state data source, and the default value was an existing bucket in *another* aws account.

1
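
A sketch of the shape of that mistake (hypothetical names): a remote state data source whose bucket variable has a default pointing at a bucket in another account:

variable "state_bucket" {
  type    = string
  default = "bucket-in-another-account" # the forgotten default
}

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = var.state_bucket # left unset, so the default (wrong) bucket was queried
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

Making such a variable required (no default) turns this mistake into a loud error instead of a silent cross-account 403.
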
OliverS avatar
OliverS

This explains why cloudtrail did not pick anything up in the account where we were running terraform: the request was going to the other account! (luckily that account was also my client’s, otherwise some org might have been wondering who kept trying to access their state bucket!)

OliverS avatar
OliverS

Also, the access denied error appeared in the core logs rather than the provider logs… actually there is a lot of aws-provider output in the core logs, wtf? This greatly decreases the usefulness of the recently introduced TF_LOG_PROVIDER; I’ll continue to use TF_LOG in the future.

RB avatar

If TF_LOG_PROVIDER is a recent new feature, perhaps providers need time to utilize this new env var over the old TF_LOG one?

Glad you were able to resolve your issue!

OliverS avatar
OliverS

Yeah @RB I suspect you’re right

Soren Jensen avatar
Soren Jensen

Is it possible to do this in a tfvars file? I want to check if my environment variable env.ENV_PREFIX is prod. If env.ENV_PREFIX is prod, set a variable production = true; if env.ENV_PREFIX is dev, set production = false.

Soren Jensen avatar
Soren Jensen
if env.ENV_PREFIX = prod
  production = true
else
  production = false
RB avatar

You could pass the env value as a tfvar

RB avatar
export TF_VAR_env=prod
RB avatar

Then you can use it in terraform

locals {
  production = var.env == "prod"
}
Soren Jensen avatar
Soren Jensen

I need to set production to either true or false

Soren Jensen avatar
Soren Jensen

It’s part of a set of tags, so in case the env var tells us we are deploying to prod, we tag the resource with production: true

Soren Jensen avatar
Soren Jensen
production = "{{ env.ENV_PREFIX }}" == "prod" ? true : false

Got this in my tfvars

RB avatar
  tags = {
    production = local.production
  }
Soren Jensen avatar
Soren Jensen

Ahh I see, that’s going to work.. Thanks a million

1
OliverS avatar
OliverS


I need to set production to either true or false
Ronak’s statement does that. The local.production will be a boolean:

locals {
  production = var.env == "prod"
}

There is no need for a more complicated expression. Moreover you can set TF_VAR_env in the shell to a value, and terraform will set var.env to that value automatically.

1

2022-10-25

2022-10-26

Nitin avatar

The terraform-aws-vpc-peering-multi-account module does not create IPv6 internal routes?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have not done too much with IPv6 yet, but would probably accept PRs extending it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Erik Osterman (Cloud Posse) I’m not sure we want to continue to support this module. Either way, this is better handled by @Andriy Knysh (Cloud Posse).

Nitin avatar

@Jeremy G (Cloud Posse) so is there any other module in cloudposse that does the same?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Nitin I was confusing terraform-aws-vpc-peering-multi-account with terraform-aws-vpc-peering-multi-account-request. I think we probably should continue to support terraform-aws-vpc-peering-multi-account, and adding IPv6 support is a natural extension. Unfortunately, this module is not something we can test via our current automated test suite (because it requires 2 accounts), which makes modifying it more dangerous. This means we need to take extra care, and are likely to take a very long time responding to PRs from the community.

What you could do, if you are inspired, is open a PR and then use the PR branch in your own environment and report on how it goes. That is the approach we are taking with this change to terraform-aws-eks-cluster that is hard to test for other reasons.

what

• Use newer kubernetes_config_map_v1_data to force management of the config map from a single place and have field ownership
• Implement self reference as described here: #155 (comment) in order to detect if any iam roles were removed from terraform config
• Preserve any existing config map setting that was added outside terraform; basically terraform will manage only what is passed in variables

why

Mainly the reasons from #155.

references

*closes #155
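
For context, a sketch of the resource named in that PR (the kubernetes provider’s kubernetes_config_map_v1_data; the variable is made up): it manages only the listed keys of an existing ConfigMap and takes field ownership of them:

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  # only these keys are managed; settings added outside terraform are preserved
  data = {
    mapRoles = yamlencode(var.map_roles) # hypothetical variable
  }

  force = true # take ownership of these fields from previous managers
}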

Nitin avatar

what

• Allow IPv6 communication between VPCs over VPC peering

why

• An application hosted in VPC-1 wants to access resources hosted in a private subnet of VPC-2 over IPv6.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Nitin Thank you for the PR. I’m really resistant to using regexes to distinguish IPv4 from IPv6, but at the moment I don’t see a better way, so I guess I will have to live with it.

I’m more confused/concerned about why you had to go beyond cidr_block_associations to find the IPv6 CIDR when it seemed to be all that was needed to find the IPv4 CIDR.

Nitin avatar

@Jeremy G (Cloud Posse) we want to communicate between two IPv6 resources over vpc peering

Nitin avatar

as both resources, or the destination, might be in a private subnet

jose.amengual avatar
jose.amengual

We are looking for companies that want to share their experience and success stories with Atlantis, and that might want to add their logo to the Atlantis page to showcase companies using Atlantis. I’m one of the contributors to the Atlantis project; please PM me

2022-10-27

Herman Smith avatar
Herman Smith

This returns all values held within some_list of the various some_map entries:

value = toset([for k, v in var.some_object.some_map : v.some_list])

But this returns an error - This map does not have an element with the key "some_list" :

value = toset(var.some_object.some_map[*].some_list)

Should the splat expression not be equivalent to the first?
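
(For what it’s worth: splat expressions apply only to lists, sets, and tuples, not maps, which is why the second form errors. Wrapping the map in values() gives an equivalent splat, a sketch:)

value = toset(values(var.some_object.some_map)[*].some_list)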

Mallikarjuna M avatar
Mallikarjuna M

Hello Everyone, does anyone know the best and easiest way to configure a VPN?

Michael Galey avatar
Michael Galey

I added a simple tailscale setup for just a few users, but this has been amazing for us. Very easy and performant.

Mallikarjuna M avatar
Mallikarjuna M

Can you please share the resource link with me?

Michael Galey avatar
Michael Galey
Tailscale

Tailscale is a zero config VPN for building secure networks. Install on any device in minutes. Remote access from any network or physical location.

Mallikarjuna M avatar
Mallikarjuna M

Is it free?

Mallikarjuna M avatar
Mallikarjuna M

Thanks.

Michael Galey avatar
Michael Galey
Tailscale

Tailscale makes connecting your team and devices easy. It’s free to get started and use by yourself, and scales with you as your needs grow.

Allan Swanepoel avatar
Allan Swanepoel

Tailscale is awesome; they’re using WireGuard. If you want to roll your own, I would suggest looking at Pritunl, which supports both OpenVPN and WireGuard

Herman Smith avatar
Herman Smith

Is required_providers required even in root terraform modules which don’t directly use resources, but simply rely on modules (which themselves have required_providers)? Seems redundant

loren avatar

not required, but can serve a different purpose. in reusable modules, the version constraints are typically a min version. in a root module, the constraints tend to be much tighter since they are specific to a given deployment
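
A sketch of that convention (version numbers illustrative):

# in a reusable module: declare a minimum version
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
  }
}

# in a root module: pin much tighter for the specific deployment
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.34"
    }
  }
}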

Herman Smith avatar
Herman Smith

Thanks. Pet peeve with terraform at the moment is that it actually allows modules to define providers

loren avatar

holdover from old versions, kept for compatibility reasons

Herman Smith avatar
Herman Smith

Yeah, I gathered as much. I wish everybody else writing terraform got the memo

loren avatar

aye. don’t use provider blocks in reusable modules

Herman Smith avatar
Herman Smith

If a reusable module uses two providers (A & B), and the providers are configured in the root module, can provider B be configured with outputs of that reusable module (from provider A operations), before running its own operations in that same reusable module?

Herman Smith avatar
Herman Smith

I can try playing that one out tomorrow, bit wary of this refactor becoming a rabbit-hole with dead-ends etc.

loren avatar

like this, you mean? if so, yes no problem

module "a" {
  providers = {
    aws = aws.a
  }
}

module "b" {
  providers = {
    aws = aws.b
  }

  input = module.a.output
}
loren avatar

the source of each module is rather immaterial to that use case. it can be the same source (and so the same reusable module), or different

Herman Smith avatar
Herman Smith

One provider is AWS, the other is Kubernetes

The kubernetes provider requires a hostname to connect to, which is produced after an AWS provider operation.

The kubernetes provider config block refers to an aws data source to find that hostname (as part of initializing the provider itself)

Where both providers are within the same module, the Kubernetes provider has visibility on that data source to fetch the value it needs to initialize

However, if I initialize the providers outside of the module, that Kubernetes provider will still need to get that hostname to initialize itself

If that data source value was exposed as an output, I was wondering if I could refer to module.foo.my_output in the root module’s Kubernetes provider config block

Herman Smith avatar
Herman Smith

This would mean that module foo would have to do the AWS stuff, which would be referred to by an output, and then the Kubernetes provider configured in the root module could get that output value to initialize itself, and the foo module continues on with its Kubernetes stuff

Herman Smith avatar
Herman Smith

Wasn’t sure if all operations would have to complete in that module before any outputs are exposed, basically

loren avatar

oh, no, the output is available as soon as its dependencies are met

loren avatar

if you want to force an output to wait, you can use depends_on in the output block
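
A sketch of the flow being described (all names hypothetical): the module exposes the hostname as an output as soon as the AWS-side resource exists, and the root module’s kubernetes provider consumes it:

# modules/foo/outputs.tf
output "cluster_endpoint" {
  value = aws_eks_cluster.this.endpoint
  # depends_on = [...] here would force the output to wait on more than
  # its own expression's dependencies
}

# root module
provider "kubernetes" {
  host                   = module.foo.cluster_endpoint
  cluster_ca_certificate = base64decode(module.foo.cluster_ca) # hypothetical output
}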

Herman Smith avatar
Herman Smith

Brilliant, that sounds like it permits a sufficiently spaghetti flow for my needs!

loren avatar

yeah, the terraform graph is pretty excellent. sometimes so much that it is confusing!

Herman Smith avatar
Herman Smith


not required, but can serve a different purpose.
Ah, actually it is required when using a provider that isn’t authored by HashiCorp, I’ve found

1
loren avatar

well there is that

Vincent Sheffer avatar
Vincent Sheffer

Trying to just add instance types to an existing cluster using the eks_node_group module, but it doesn’t seem to support that.

Vincent Sheffer avatar
Vincent Sheffer

More specifics: I’m using cloudposse/eks-cluster/aws 2.5.0 and eks_node_group 2.4.0. Adding instance types to the list of instance_types causes all of the existing ones to be destroyed and then recreated, which in, well, any environment seems bad. I did add another instance of the eks_node_group module but got an error where the iam “default” policy is being recreated. Dug into the module code and I don’t see any way to prevent that from happening.

It’s really important to be able to add new node groups to an existing cluster without disruption and it just isn’t clear how to do that in the documentation. Help is very much appreciated.

2022-10-28

Herman Smith avatar
Herman Smith

Has anyone successfully used moved blocks to refactor within multiple levels of module nesting?

For example: module a wraps module b (a adds no new resources, it just takes variables and passes them through to b’s) - I’d like to eliminate module a , being the useless wrapper that it is.

In the root module, I’ve changed module "a" { to module "b" { , and added the following moved block below:

moved {
  from = module.a.module.b
  to = module.b
}

Yet I get The argument "from" is required, but no definition was found., which seems a bizarre error!

Herman Smith avatar
Herman Smith

Findings about this subject online seem a little contradictory and ambiguous/unspecific, but there does seem to be a suggestion that it’s possible, given that module boundaries can be crossed with moved

loren avatar

are you using the latest terraform, 1.3? i don’t recognize the error, but cross-module moves require 1.3…

Herman Smith avatar
Herman Smith

The very latest and greatest (1.3.3), I did check the changelogs too

1
Herman Smith avatar
Herman Smith

It looks to me like specifying a nested module address in from completely confuses the parser into thinking there isn’t even a to

loren avatar

yeah, my use cases have all been cross-module moves, but i’m still at 1.2, so haven’t been able to get any real working experience with moved blocks just yet

Herman Smith avatar
Herman Smith

I was a bit slow on the uptake of trying out moved since it was released, perhaps not slow enough

1
loren avatar

state mv to the rescue!

Herman Smith avatar
Herman Smith

loren avatar

perhaps open an issue and see what comes back? also, if you’re in hangops slack, the primary developer of the moved feature is active there. he might chime in if you post the question there

Herman Smith avatar
Herman Smith

I totally forgot about hangops, thanks for the reminder!

1
Herman Smith avatar
Herman Smith


The from and to addresses both use a special addressing syntax that allows selecting modules, resources, and resources inside child modules.
I retract my earlier statement, this from the docs themselves is pretty unambiguous about it being positively supported

1
Herman Smith avatar
Herman Smith

@loren I’m embarrassed to say it was uh, a rather obvious error on my part.

But I’ll confirm that it works now, so feel free to use it for your cross-module moves when you upgrade

1
loren avatar

lol c’mon now, spill the beans

Herman Smith avatar
Herman Smith

Um. I read through changelogs, docs, and everything. But I had a completely empty moved { } at the bottom of the file, and must have neglected to verify the error’s line number

loren avatar

aye! i think we’ve all been there. good reminder for us all to check our assumptions

1
loren avatar

i’ve probably done this my whole career, but i just learned about rubber ducking as a real thing… https://en.wikipedia.org/wiki/Rubber_duck_debugging

Rubber duck debugging

In software engineering, rubber duck debugging (or rubberducking) is a method of debugging code by articulating a problem in spoken or written natural language. The name is a reference to a story in the book The Pragmatic Programmer in which a programmer would carry around a rubber duck and debug their code by forcing themselves to explain it, line-by-line, to the duck. Many other terms exist for this technique, often involving different (usually) inanimate objects, or pets such as a dog or a cat. Many programmers have had the experience of explaining a problem to someone else, possibly even to someone who knows nothing about programming, and then hitting upon the solution in the process of explaining the problem. In describing what the code is supposed to do and observing what it actually does, any incongruity between these two becomes apparent. More generally, teaching a subject forces its evaluation from different perspectives and can provide a deeper understanding. By using an inanimate object, the programmer can try to accomplish this without having to interrupt anyone else. This approach has been taught in computer science and software engineering courses.

Herman Smith avatar
Herman Smith

Great for working in total isolation.

I guarantee that my significant other in the house would start responding, thinking I’m calling out, and throw me way out of the zone

1
loren avatar

haha yeah, often the duck ends up being a coworker or slack workspace

Amrutha Sunkara avatar
Amrutha Sunkara

Hello Folks, is there a terraform module that any of you know of/use to create a tunnel via SSM?

Jonas Steinberg avatar
Jonas Steinberg

idk but there’s a nifty python module that will do it

there has to be a terraform module to handle this; just google it, there are probably 10

Amrutha Sunkara avatar
Amrutha Sunkara

Actually, google came up empty, which is why I posted here

Jonas Steinberg avatar
Jonas Steinberg

So – terraform cloud input variables…

Are they only returned as strings?

I’m having a helluva time using them.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

how are you consuming them?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think they may be just TF_VAR_ envs in the end, which support JSON encoding
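
For example (a sketch; the variable name is made up), a list-typed variable can be supplied through the environment and parsed as HCL/JSON rather than a plain string:

variable "allowed_cidrs" {
  type = list(string)
}

# export TF_VAR_allowed_cidrs='["10.0.0.0/8", "192.168.0.0/16"]'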

Jonas Steinberg avatar
Jonas Steinberg

can’t quite remember how I solved this but calling terraform variables across modules vs live directories in general is annoying

2022-10-29

steenhoven avatar
steenhoven

Hi, I’m using cdktf with python and I want to loop over resource creation, with each resource stored in its own CloudBackend. Is that even possible? It seems that I can define only one CloudBackend per TerraformStack?

This is what I get when I run a loop over the CloudBackend within a TerraformStack (with various NamedCloudWorkspaces):

jsii.errors.JSIIError: There is already a Construct with name 'backend' in TerraformStack [stackname]
loren avatar

Sounds like you’ll need to loop over the stack, creating one per backend

steenhoven avatar
steenhoven

Thanks, will try that!

steenhoven avatar
steenhoven

Works, thanks for the hint!

1
loren avatar

This is a core feature of terragrunt, which I use a lot, so it felt pretty natural to me

steenhoven avatar
steenhoven

Hehe, thanks. You mean the generate feature?

loren avatar

More just lots of stacks, with a backend per stack

1
loren avatar

But generate and file templating could get you there also

steenhoven avatar
steenhoven

Another one then: how do I loop over a module and store the state in the same workspace/state?

loren avatar

well you have options using cdktf. you can loop over the module in cdktf, giving each one a unique label. or you can use terraform-native for_each on the module itself.
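
The second option looks something like this (module path hypothetical):

module "bucket" {
  source   = "./modules/bucket" # any reusable module
  for_each = toset(["alpha", "beta"])

  name = each.key # one instance per key, all recorded in the same state
}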

Carlos Reyna (Infrascension) avatar
Carlos Reyna (Infrascension)

Don’t bother with cdktf. It will cause you more heartache than success.
