#terraform-aws-modules (2024-09)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2024-09-10

Dale

hi! with the terraform-aws-api-gateway module, is there a way to enable caching? if I enable it via the console, when I next deploy my TF it deletes the provisioned cache cluster. I have had a browse through the module code and I don’t think it’s possible to use caching with it, but thought I’d check here to make sure I’m not missing something!

Gabriela Campana (Cloud Posse)

@Matt Calhoun

Gabriela Campana (Cloud Posse)

@matt

Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy White (Cloud Posse)

Gabriela Campana (Cloud Posse)

Hi @Dale, following up on this. Do you still have that question?

Gabriela Campana (Cloud Posse)

Hi @Dale, bumping this up

Dale

yes I still have the question

Gabriela Campana (Cloud Posse)

@Matt Calhoun @matt @Andriy Knysh (Cloud Posse) @Jeremy White (Cloud Posse)

Matt Calhoun

I just took a look at the module’s source code and I don’t see any caching support. I also took a quick peek at the available Terraform resources for API Gateway and didn’t see any mention of caching. Could you show a screenshot of specifically what you’re enabling in the console? Maybe I can find it in Terraform (and maybe it’s a quick fix to add it to the module).
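[Editor’s note] For reference, stage-level caching does appear to be exposed by the plain AWS provider, outside this module, via `aws_api_gateway_stage` (`cache_cluster_enabled`, `cache_cluster_size`) and `aws_api_gateway_method_settings` (`caching_enabled`). A minimal sketch, assuming an existing REST API and deployment (the resource names and stage name here are hypothetical):

```hcl
# Hypothetical sketch: enabling an API Gateway cache cluster with raw
# AWS provider resources rather than the terraform-aws-api-gateway module.
resource "aws_api_gateway_stage" "this" {
  rest_api_id   = aws_api_gateway_rest_api.this.id
  deployment_id = aws_api_gateway_deployment.this.id
  stage_name    = "prod"

  # Provision the stage-level cache cluster (this is what the console toggle creates)
  cache_cluster_enabled = true
  cache_cluster_size    = "0.5" # size in GB; must be one of the provider-documented values
}

resource "aws_api_gateway_method_settings" "all" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  stage_name  = aws_api_gateway_stage.this.stage_name
  method_path = "*/*" # apply settings to all methods

  settings {
    caching_enabled      = true
    cache_ttl_in_seconds = 300
  }
}
```

If the module only manages the stage internally, managing caching this way may conflict with it, which would explain Terraform deleting the console-provisioned cache cluster on the next apply.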

2024-09-11

2024-09-24

2024-09-26

djk29a

Wanted to get a quick sanity check before filing a GitHub issue: the most recent release of terraform-aws-vpc-peering-multi-account seems to have caused a regression against my TF state. I’d like to verify whether the issue is on my end, due to state inconsistencies for example. I have state created under 0.20 that, after `terraform init -upgrade` to the 0.20.1 module, gives me this during the plan:

...accepter.tf line 128, in resource "aws_route" "accepter_ipv6":
  128:   destination_ipv6_cidr_block = local.requester_ipv6_cidr_block_associations[count.index % local.requester_ipv6_cidr_block_associations_count]["cidr_block"]
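[Editor’s note] One way to confirm whether 0.20.1 itself introduced the regression is to pin the module back to the last known-good release and re-plan. A sketch, assuming the module is consumed from the Cloud Posse registry namespace (the module label and source address are inferred from the repository name):

```hcl
# Hypothetical sketch: pin back to 0.20.0 to isolate the regression.
module "vpc_peering" {
  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "0.20.0" # last release that produced a clean plan

  # ... existing inputs unchanged ...
}
```

After pinning, run `terraform init -upgrade` again and `terraform plan`; if the error disappears on 0.20.0 and reappears on 0.20.1, that points at the release rather than local state drift.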
Gabriela Campana (Cloud Posse)

@Igor Rodionov

2024-09-27

2024-09-30

Erik Parawell

Hi, has anyone come across this issue when setting up or updating the datadog-integration component?

╷
│ Error: error getting AWS integration from /api/v1/integration/aws: 403 Forbidden: {"errors":["Forbidden"]}
│ 
│   with module.datadog_integration.datadog_integration_aws.integration[0],
│   on .terraform/modules/datadog_integration/main.tf line 18, in resource "datadog_integration_aws" "integration":
│   18: resource "datadog_integration_aws" "integration" {
│ 
╵
exit status 1

I have tried updating to a new API key and have followed the guide at https://docs.cloudposse.com/layers/monitoring/datadog/setup/. As an aside, I am also seeing the same issue crop up on our existing deployments via our “atmos tf diff” GHA jobs.

Setup Datadog | The Cloud Posse Reference Architecture

Provision Datadog monitoring with Terraform

Gabriela Campana (Cloud Posse)

@Ben Smith (Cloud Posse)


Erik Parawell

I solved this issue. I had a bad config that I hadn’t noticed in one of my gbl stack YAML files; it was overriding the good config and supplying a bad API key.
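[Editor’s note] For anyone hitting the same 403: because Atmos deep-merges stack YAML in import order, a later gbl-level file can silently override a valid key. A hypothetical illustration of the failure mode (the file path and variable name here are made up, not taken from the thread):

```yaml
# stacks/orgs/acme/gbl/monitoring.yaml (hypothetical)
# Imported later, so its vars win the deep-merge and shadow the valid key.
components:
  terraform:
    datadog-integration:
      vars:
        datadog_api_key: "stale-or-revoked-key"   # the bad override
```

Running `atmos describe component datadog-integration -s <stack>` shows the final merged configuration, which makes this kind of shadowed override much easier to spot.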
