#terraform-aws-modules (2024-09)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2024-09-10
hi! With the terraform-aws-api-gateway module, is there a way to enable caching? If I enable it via the console, the next time I deploy my TF it deletes the provisioned cache cluster. I've had a browse through the module code and I don't think it's possible to use caching with it, but thought I'd check here to make sure I'm not missing something!
@Matt Calhoun
@matt
@Andriy Knysh (Cloud Posse) @Jeremy White (Cloud Posse)
Hi @Dale Following on this. Do you still have that question?
Hi @Dale Bumping this up
yes I still have the question
@Matt Calhoun @matt @Andriy Knysh (Cloud Posse) @Jeremy White (Cloud Posse)
I just took a look at the module’s source code and I don’t see any caching support. I also took a quick peek at the available Terraform resources for API Gateway and didn’t see any mention of caching. Could you show a screenshot of specifically what you’re enabling in the console and maybe I can find it in Terraform (and maybe it’s a quick fix to add it to the module)?
I mean using these settings @Matt Calhoun
I just took a look again and confirmed that the module doesn’t support caching, but it looks like it would be a fairly easy PR to open if you’re up for it.
resource "aws_api_gateway_stage" "this" {
It looks like you’d just need to add the cache_cluster_enabled and cache_cluster_size variables and pass them into the aws_api_gateway_stage resource that I pointed to just above.
If you want to take a shot at that and tag me here, then I can review your PR and get it merged for you.
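For reference, a sketch of what that PR might look like. The `cache_cluster_enabled` and `cache_cluster_size` arguments are real arguments on the `aws_api_gateway_stage` resource, but the variable names and defaults below are my assumptions, not the module's actual conventions:

```hcl
# Hypothetical additions to the module's variables.tf
variable "cache_cluster_enabled" {
  type        = bool
  default     = false
  description = "Whether to provision a cache cluster for the API Gateway stage"
}

variable "cache_cluster_size" {
  type        = string
  default     = "0.5"
  description = "Cache cluster size in GB (valid values include 0.5, 1.6, 6.1, 13.5, 28.4, 58.2, 118, 237)"
}

# Then wire them into the existing stage resource
resource "aws_api_gateway_stage" "this" {
  # ... existing arguments unchanged ...
  cache_cluster_enabled = var.cache_cluster_enabled
  cache_cluster_size    = var.cache_cluster_size
}
```

Note that `cache_cluster_size` only takes effect when `cache_cluster_enabled` is true, and AWS bills for the cache cluster hourly while it exists.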
2024-09-11
2024-09-24
2024-09-26
Wanted to get a quick sanity check before filing a GitHub issue, but the most recent release of terraform-aws-vpc-peering-multi-account seems to have caused a regression in my TF state. I’d like to verify whether the issue is on my end due to state inconsistencies, for example. I have a state created under 0.20 that, after terraform init -upgrade to the 0.20.1 module, gives me this during the plan:
...accepter.tf line 128, in resource "aws_route" "accepter_ipv6":
  128: destination_ipv6_cidr_block = local.requester_ipv6_cidr_block_associations[count.index % local.requester_ipv6_cidr_block_associations_count]["cidr_block"]
@Igor Rodionov
2024-09-27
2024-09-30
Hi, I wanted to know if anyone has come across this issue setting up / updating the datadog-integration component?
╷
│ Error: error getting AWS integration from /api/v1/integration/aws: 403 Forbidden: {"errors":["Forbidden"]}
│
│ with module.datadog_integration.datadog_integration_aws.integration[0],
│ on .terraform/modules/datadog_integration/main.tf line 18, in resource "datadog_integration_aws" "integration":
│ 18: resource "datadog_integration_aws" "integration" {
│
╵
exit status 1
I have tried updating to a new API key and have followed the guide at https://docs.cloudposse.com/layers/monitoring/datadog/setup/. As an aside, I am also seeing the same issue crop up on our existing deployments via our “atmos tf diff” GHA jobs.
Provision Datadog monitoring with Terraform
@Ben Smith (Cloud Posse)
I solved this issue. I had a bad config that I didn’t see in one of my gbl stack YAML files; it was overriding the good config and thus supplying a bad API key.
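For anyone who hits the same 403: it generally points at the Datadog provider credentials rather than the resource configuration itself. A minimal sketch of the provider block, assuming the key is passed in via variables (the Cloud Posse component wires credentials differently, so names here are illustrative):

```hcl
# Hypothetical wiring; a stale or overridden api_key here is enough
# to produce "403 Forbidden" on any datadog_* resource or data source.
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}
```

Since Atmos deep-merges stack YAML files, it is worth grepping all imported gbl stacks for the key variable to confirm which file wins the merge.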