#terraform (2024-02)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-02-01

Boris Dyga avatar
Boris Dyga

Hi! Could anyone have a look at the PR? It’s been hanging around for some time. CC @Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-config/pull/83

#83 The access token is now passed in an HTTP header

This is done to avoid exposing the token in the logs via data.http.id (which contains the URL).

Added the macOS .DS_Store files to .gitignore

what

• The access token is now passed in an HTTP header • Added the macOS .DS_Store files to .gitignore

why

• This is done to avoid exposing the token in the logs via data.http.id (which contains the URL).
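For context, a minimal sketch of the change being described, assuming the hashicorp/http data source (the endpoint and variable names are hypothetical): historically, data.http’s id contained the request URL, so a token embedded in the URL could leak into logs; sending it as a request header avoids that.

data "http" "example" {
  # Hypothetical endpoint; the token now travels in a request header instead
  # of the URL, so it does not appear in data.http.id (which is the URL)
  url = "https://api.example.com/v1/config"

  request_headers = {
    Authorization = "Bearer ${var.access_token}"
  }
}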

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

please use #pr-reviews


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Gabriela Campana (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Boris Dyga the PR is approved, thanks

2024-02-02

RB avatar

Any chance Cloud Posse would be interested in maintaining a fork of terraform-docs to unblock PRs?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would need to discuss with the team

RB avatar

Thanks Erik. I also opened a thread with opentofu to see if they would be interested.

https://opentofucommunity.slack.com/archives/C05R0FD0VRU/p1706989540737409?thread_ts=1706989540.737409&cid=C05R0FD0VRU

RB avatar


Hey :wave:.

Currently, the OpenTofu Core Maintenance team does not intend to fork and maintain 3rd-party tooling that is not necessary for the Core OpenTofu project.
In the future, if there’s high demand for such a tool being in high maintenance, we will reconsider this decision. However, right now we do not have the capacity to support a terraform-docs fork alongside the main OpenTofu project.

So OpenTofu will not fork.

RB avatar

Hi @Erik Osterman (Cloud Posse) , just a friendly ping, did you get a chance to chat with the team?

2024-02-03

pjf719 avatar

Could use some help here if anybody has a moment to take a look, thanks https://sweetops.slack.com/archives/CDYGZCLDQ/p1707013439249969

Hi all - I’m trying to use this Beanstalk module to spin up some infra: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment. My requirement is that the Beanstalk environment attaches itself to port 443 so that it’s on SSL.

Here is my configuration:

module "alb" {
  source  = "cloudposse/alb/aws"
  version = "1.10.0"

  namespace          = "tpo"
  name               = "elastic-beanstalk"
  vpc_id             = data.aws_vpc.default.id
  subnet_ids         = data.aws_subnets.private.ids
  internal           = true
  certificate_arn    = data.aws_acm_certificate.cert.arn
  security_group_ids = [module.security_groups.alb_sg]

  http_enabled                            = false
  https_enabled                           = true

  enabled = true
  
  stage                                   = "prod"
  access_logs_enabled                     = true
  access_logs_prefix                      = "tpo-prod"
  alb_access_logs_s3_bucket_force_destroy = true

  # This additional attribute is required since both the `alb` module and `elastic_beanstalk_environment` module
  # create Security Groups with the names derived from the context (this would conflict without this additional attribute)
  attributes = ["shared"]

}


module "elastic_beanstalk_application" {
  source  = "cloudposse/elastic-beanstalk-application/aws"
  version = "0.11.1"
  enabled = true

  for_each = toset(var.EB_APPS)

  name = each.value

}

module "elastic_beanstalk_environment" {
  source   = "cloudposse/elastic-beanstalk-environment/aws"
  for_each = toset(var.EB_APPS)
  enabled = true

  region = var.REGION

  elastic_beanstalk_application_name = each.value
  name                               = "prod-${each.value}-tpo"
  environment_type                   = "LoadBalanced"
  loadbalancer_type                  = "application"
  loadbalancer_is_shared             = true
  shared_loadbalancer_arn            = module.alb.alb_arn
  loadbalancer_certificate_arn       = data.aws_acm_certificate.cert.arn

  tier          = "WebServer"
  force_destroy = true

  instance_type = "t4g.xlarge"

  vpc_id               = data.aws_vpc.default.id
  loadbalancer_subnets = data.aws_subnets.private.ids
  application_subnets  = data.aws_subnets.private.ids
  application_port = 443
  allow_all_egress = true

  additional_security_group_rules = [
    {
      type                     = "ingress"
      from_port                = 0
      to_port                  = 65535
      protocol                 = "-1"
      source_security_group_id = data.aws_security_group.vpc_default.id
      description              = "Allow all inbound traffic from trusted Security Groups"
    }
  ]
  solution_stack_name = "64bit Amazon Linux 2 v5.8.10 running Node.js 14"

  additional_settings = [
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "NODE_ENV"
      value     = "prod"
    },
    {
      namespace = "aws:elbv2:listenerrule:${each.value}"
      name      = "HostHeaders"
      value     = "prod-${each.value}-taxdev.io"
    }
  ]
  env_vars = {
    "NODE_ENV" = "prod"
  }

  enable_stream_logs = true
  extended_ec2_policy_document = data.aws_iam_policy_document.minimal_s3_permissions.json
  prefer_legacy_ssm_policy     = false
  prefer_legacy_service_policy = false

}

2024-02-05

aj_baller23 avatar
aj_baller23

Hi all, I wonder if I can get some feedback/advice on the best approach for using Terraform and Ansible together. Is there an Ansible provider that facilitates the configuration of my servers? Thanks in advance!

Serdar Dalgic avatar
Serdar Dalgic

It depends on your use case. This article might also be useful: https://spacelift.io/blog/using-terraform-and-ansible-together

Using Terraform and Ansible Together

In this tutorial, you’ll learn how to use Terraform and Ansible together. See how Spacelift can greatly simplify and elevate your workflow for both tools.


2024-02-06

pjf719 avatar

AWS/Terraform question: Given multiple elastic beanstalk environments that all utilize a shared ALB, does anybody know how to add custom ALB rules to the listener so that each rule maps each environment’s custom DNS to its designated beanstalk target group?

What is happening right now is that Beanstalk is creating a listener rule that uses the Beanstalk DNS app-name.random-characters.us-east-1.elasticbeanstalk.com, but I need this rule to have the proper host headers for the app, like *.app-domain.com
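For anyone hitting the same wall, a hedged alternative sketch: manage the rule directly with an aws_lb_listener_rule on the shared listener rather than through Beanstalk option settings. The output and data-source names here are hypothetical:

resource "aws_lb_listener_rule" "host_header" {
  # Hypothetical output name from the shared ALB module
  listener_arn = module.alb.https_listener_arn
  priority     = 100

  action {
    type             = "forward"
    # Hypothetical data source for the target group Beanstalk created
    target_group_arn = data.aws_lb_target_group.beanstalk.arn
  }

  condition {
    host_header {
      values = ["*.app-domain.com"]
    }
  }
}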

pjf719 avatar

I tried this setting but it doesn’t appear to have any effect on the listener at all…

{
      namespace: "aws:elbv2:listenerrule:${each.value}",
      name: "HostHeaders",
      value: "${each.value}.${aws_route53_zone.private.name}",
    },
Michael avatar
Michael

Does anyone know why the homebrew formula for terraform says not to bump to v1.6 because of the license change? Curious why brew is affected

loren avatar

the homebrew-core tap only includes open source packages. you can use the hashicorp tap if you want newer versions.

brew tap hashicorp/tap
brew install hashicorp/tap/terraform
Michael avatar
Michael

That makes sense! Thank you!

setheryops avatar
setheryops

I use tfenv, but I hear mention that it’s not regularly maintained anymore or something… haven’t looked into it, but it still works for me.

Joe Perez avatar
Joe Perez

tfswitch is another option

Serdar Dalgic avatar
Serdar Dalgic

tfenv is probably dead, here is a discussion thread about it https://github.com/tfutils/tfenv/issues/399

#399 Is tfenv dead?

There is no new commit since Oct 1, 2022.

Serdar Dalgic avatar
Serdar Dalgic

In that same discussion, it’s mentioned that the tofuutils crew are working on a new tool called tenv, an OpenTofu / Terraform / Terragrunt version manager. It might be worth checking out.

setheryops avatar
setheryops

Cool, thx

Michael avatar
Michael

Anyone use the asdf Hashicorp plugin for it?

Elad Levi avatar
Elad Levi

A few questions about the MSK module:

  1. Can you use SASL/SCRAM or SASL/IAM with an MSK cluster running Kafka 2.5.1? How can I tell whether a version is too old to support a given authentication method?
  2. When you create a new MSK cluster with only the unauthenticated option enabled (via the variable client_allow_unauthenticated), everything is fine. But what happens when you change the client_authentication methods and enable SASL/SCRAM and/or SASL/IAM? Will the Terraform module handle that correctly?

I’m asking because in our own MSK module we also use the Kafka provider to create the Kafka topics within the MSK cluster. I think that because both are combined into one module, there’s some mismatch between the MSK/AWS provider and the Kafka provider: we see weird behavior when trying to update an existing cluster’s auth method from unauthenticated. Even though the Kafka provider has bootstrap_servers set, we get this error when trying to apply changes:

No bootstrap_servers provided 
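One possibility, offered as a hedged guess rather than a diagnosis: the Kafka provider can report "No bootstrap_servers provided" when its configuration is unknown at plan time, which can happen if it is wired from MSK module outputs that change along with the cluster’s auth methods. A sketch of that wiring, with a hypothetical output name:

provider "kafka" {
  # If this module output becomes unknown during plan (e.g., while the
  # cluster's auth methods are changing), the provider can end up with no
  # brokers configured.
  bootstrap_servers = split(",", module.msk_cluster.bootstrap_brokers_sasl_scram)
}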

2024-02-07

Release notes from terraform avatar
Release notes from terraform
02:03:29 AM

v1.7.3 1.7.3 (February 7, 2024) BUG FIXES:

terraform test: Fix crash when dynamic-typed attributes are not assigned values in mocks. (#34610) provisioners/file: Fix panic when source is null. (#34621)…

Release v1.7.3 · hashicorp/terraform

1.7.3 (February 7, 2024) BUG FIXES:

terraform test: Fix crash when dynamic-typed attributes are not assigned values in mocks. (#34610) provisioners/file: Fix panic when source is null. (#34621) im…

backend/s3: Ignore default workspace prefix errors by gdavison · Pull Request #34511 · hashicorp/terraform

In versions prior to v1.6, the S3 backend ignored all errors other than NoSuchBucket when listing workspaces. This allowed cases where the user did not have access to the default workspace prefix e…

if file provisioners source is null throw error instead of panic by DanielMSchmidt · Pull Request #34621 · hashicorp/terraform

Instead of a panic return an error

Fixes #34454 Target Release

1.7.x Draft CHANGELOG entry

BUG FIXES

don’t panic when file provisioner source is null

2024-02-08

2024-02-12

Alex Atkinson avatar
Alex Atkinson

Prisma Cloud is killing the free Checkov VS Code extension by paywalling the API key. https://github.com/bridgecrewio/checkov-vscode/issues/141

#141 Checkov Integration Setup Documentation Is Out of Date

Since the implementation of the redirect from bridgecrew.cloud to prismacloud, the documentation for setting up an API token is incorrect. Please update to include the new direction on acquiring an API token.

Documentation: https://marketplace.visualstudio.com/items?itemName=Bridgecrew.checkov
Broken Integration link: https://www.bridgecrew.cloud/integrations/api-token

It’s not immediately clear whether or not Checkov is now paywalled (beyond the 7-day free trial), and it’s excruciating to have to wait for a “Palo Alto Networks specialist to reach out to me” before I can get an account and API token set up. I hope this isn’t the case.

Thanks,
Alex


2024-02-13

Rustam avatar

I have an interesting challenge. There’s a very large Terraform codebase (thousands of resources), and currently the team relies on cloud resource names (e.g., S3 bucket name) to find those resources in the Terraform code. Literally searching the bucket name in the codebase.

We want to introduce terraform-null-label for consistency. However, it will break the current way of finding resources in the tf codebase.

We tried to use yor, but it adds too much complexity and also introduces an extra step: to find a resource in the code, you first need to look up tags (e.g., S3 bucket tags) and then use those meta-tag values to search the codebase. Ideally, we want to avoid that.

There’s also no access to state. Only tf codebase.

Question: Has anyone tried to pre-generate null-label resource names and add them to terraform files as comments? Any other ideas how to map cloud resource names to terraform files in a large codebase?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Probably a good extended conversation to have on #office-hours if you can make it tomorrow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I feel like yor is an anti-pattern, although I could understand it for legacy code. But everything new should use a null-label pattern (or a similar implementation).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Has anyone tried to pre-generate null-label resource names and add them to terraform files as comments?
Can you provide a code snippet of the hypothetical example of the comment and code?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, “in a perfect world”, would you be able to look up a Terraform resource name and get its AWS resource name? …that name being “unconventional” (non-label generated).

Rustam avatar

Sure. Here’s an example. Let’s say I have a customer who has a problem with the eg-prod-bastion-public instance and raised an internal support ticket with the instance name. I can simply search for label_id:eg-prod-bastion-public in my codebase. Without it, however, it’s a needle-in-a-haystack problem.

module "bastion_label" {
  source   = "cloudposse/label/null"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace  = "eg"
  stage      = "prod"
  name       = "bastion"
  attributes = ["public"]
  delimiter  = "-"

  tags = {
    "BusinessUnit" = "XYZ",
    "Snapshot"     = "true"
  }
}

# label_id:eg-prod-bastion-public
resource "aws_instance" "bastion" {
  instance_type = "t1.micro"
  tags          = module.bastion_label.tags
}
Rustam avatar


also, “in a perfect world”, would you be able to look up a Terraform resource name and get its AWS resource name?
That would be a bonus, but it’s less of a problem for my scenario. The main challenge is finding where a particular resource is defined in the codebase.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Without it, however, it’s a needle-in-a-haystack problem.
Aha, I see what you mean.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically you need annotations that tie deployed infrastructure back to where it is on disk.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If those annotations were in the code, you could search for it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since Cloud Posse deploys the same root modules (and modules) thousands of times, with configuration handled strictly via atmos, code-style comment annotations are not something we’ve considered. E.g., to deploy 20 VPCs, we still have exactly one VPC root module (component), then 20 configurations for how to deploy it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


There’s a very large Terraform codebase (thousands of resources) and currently the team relies on cloud resource names (e.g., S3 bucket name) to find those resources in Terraform code. Literally searching the bucket name in the codebase.
So, I think there are 2 challenges.

  1. What to do with your existing code base
  2. What to aim for in new projects delivered
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In new projects delivered, I think https://atmos.tools solves the attribution problem very easily, because you can enforce tag conventions that pass through to all resources deployed. So everything is tagged with its Stack identifier.

Introduction to Atmos | atmos

Atmos is the Ultimate Terraform Environment Configuration and Orchestration Tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For existing Terraform, would have to take a closer look.

Rustam avatar

Thanks @Erik Osterman (Cloud Posse), appreciate your feedback. Rolling out something like Atmos would be very hard in a large org.

I’ll update if we find something interesting.

tnt avatar

hi all :wave:, new to the group…

I’m facing an issue creating a Datadog monitor. I’m getting this error saying the module datadog_monitors is not found, all of a sudden starting this morning:

╷
│ Error: Module not found
│ 
│ Module "datadog_monitors" (from main.tf:14) cannot be found in the module
│ registry at registry.terraform.io.
╵

it used to work with below version:

module "datadog_monitors" {
  source  = "cloudposse/monitor/datadog"
  version = "1.3.0"

started seeing the issue this morning…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) may know the answer.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know he’s been working on this and we released a new version today.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, that version is super outdated, and we stopped publishing it under that name. It is still available as cloudposse/platform/datadog, but we recommend you upgrade to the dedicated submodule cloudposse/platform/datadog//modules/monitors

tnt avatar

thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We just rolled out a big update to our atmos docs. As part of that, we published a write-up of the limitations we encountered with standalone Terraform that led to atmos, or have seen customers encounter. We break it into 10 stages, with the 10th stage being Terraform bankruptcy.

https://atmos.tools/reference/terraform-limitations

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

Gabriel avatar
Gabriel

What do you do when you have one resource/module that you want to deploy in multiple accounts, and it needs to be the same in all accounts? Is there any way other than copy/pasting the resource/module x times with different providers?

What I’d like to do is something like this

module "example" {
  source = "source"
  providers = {
    aws = aws.one
    aws = aws.two
    aws = aws.three
  }
  some_var = aws.alias
  # rest of config is the same
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is the problem we solve with atmos. What you’re describing is a configuration management and orchestration problem. You would define your root module one time. Then you define your configuration for that root module as part of a Stack configuration.

See https://atmos.tools/core-concepts/stacks/catalogs#organizations

Stack Catalogs | atmos

Catalogs are how to organize all Stack configurations for easy imports.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


copy/pasting the resource/module x times with different providers?
Sadly, this is what many companies do through the abuse of root modules.

With atmos, there’s no copy/pasting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anti-patterns to avoid:

• more than 2 providers in a root module. Only use 2 providers if it’s a hub/spoke relationship.

• Generally use 1 provider.

• Do not use 1 provider per region (e.g. you have poor DR resiliency if terraform root modules expect all regions online)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You are on this journey, @Gabriel. Let us know if you need help on this or with how to start using Atmos.

Gabriel avatar
Gabriel

thanks, I will take a look at it and try to understand how I could solve my problem with atmos.

Gabriel avatar
Gabriel

what I did for the moment is this

module "one" {
  source = "source"
  providers = {
    aws = aws.one
  }
  some_var = "one"
  # rest of config is the same
}

module "two" {
  source = "source"
  providers = {
    aws = aws.two
  }
  some_var = "two"
  # rest of config is the same
}

...
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is what we call “stage 3” (in that link Andriy shared)

2024-02-14

Release notes from terraform avatar
Release notes from terraform
06:13:28 PM

v1.8.0-alpha20240214 1.8.0-alpha20240214 (February 14, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introduced for #34567…

Release v1.8.0-alpha20240214 · hashicorp/terraform

1.8.0-alpha20240214 (February 14, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introdu…

apply schema marks to returned instance values by jbardin · Pull Request #34567 · hashicorp/terraform

The original sensitivity handling implementation applied the marks from a resource schema only when decoding values for evaluation. This appeared to work in most cases, since the resource value cou…

TechHippie avatar
TechHippie

Hi - Is there a way to merge multiple policy statements using Terraform? I have a bunch of JSON files, one for each service. I want to create an IAM policy based on user input of services.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region = var.region
}

module "iam_policy" {
  source = "../../"

  iam_policy         = var.iam_policy
  iam_policy_enabled = false

  context = module.this.context
}

module "iam_policy_two" {
  source = "../../"

  iam_policy         = var.iam_policy_two
  iam_policy_enabled = false

  context = module.this.context
}

module "iam_policy_three" {
  source = "../../"

  iam_source_policy_documents = [module.iam_policy_two.json]
  iam_policy_enabled          = false

  context = module.this.context
}

module "iam_policy_statements_map" {
  source = "../../"

  iam_policy_statements = var.iam_policy_statements_map
  iam_policy_enabled    = false

  context = module.this.context
}

module "iam_policy_statements_list" {
  source = "../../"

  iam_policy_statements = var.iam_policy_statements_list
  iam_policy_enabled    = false

  context = module.this.context
}

module "iam_url_policy" {
  source = "../../"

  iam_source_json_url = var.iam_source_json_url

  iam_policy_enabled = false

  context = module.this.context
}


data "aws_iam_policy_document" "assume_role" {
  count = module.this.enabled ? 1 : 0

  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "default" {
  count = module.this.enabled ? 1 : 0

  name               = module.this.id
  assume_role_policy = one(data.aws_iam_policy_document.assume_role[*].json)

  inline_policy {
    name = "test_policy"

    policy = module.iam_policy.json
  }
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it merges policies together using source_policy_documents
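To connect this back to the original question (a bunch of per-service JSON files plus user input), here is a minimal plain-Terraform sketch of the same source_policy_documents mechanism; the file layout and names are hypothetical:

variable "services" {
  type    = list(string)
  default = ["s3", "sqs"]
}

data "aws_iam_policy_document" "merged" {
  # Each file holds a complete IAM policy JSON document for one service
  source_policy_documents = [
    for svc in var.services : file("${path.module}/policies/${svc}.json")
  ]
}

resource "aws_iam_policy" "merged" {
  name   = "merged-service-policy"
  policy = data.aws_iam_policy_document.merged.json
}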

TechHippie avatar
TechHippie

Thank you Andriy. I did look at merging documents using source policy documents, but I will look at the module you shared. Thank you very much.

2024-02-15

susie-h avatar
susie-h

Hello! I’m working with your API Gateway module. I wanted to know how I’m supposed to get custom access logs working.

I have the following variables related to logging, but as the screenshot shows, custom access logging is not turned on.

  xray_tracing_enabled = true #X-Ray tracing
  metrics_enabled = true #Detailed metrics
  logging_level = "INFO"
  #log_group_arn = "arn:aws:logs:us-east-1:829505554415:log-group:blue-parakeet"
  access_log_format = <redacted for simplicity>

I’ve already run the account-settings module once per region

jose.amengual avatar
jose.amengual

if you run account-settings, that should be enabled

jose.amengual avatar
jose.amengual

so maybe you ran it but did not pass true to enable it?

susie-h avatar
susie-h

what do you mean by “pass true”? how can I do that now?

susie-h avatar
susie-h

By the way, I determined that the account-settings module ran because I see the CloudWatch log role ARN in API Gateway -> APIs -> Settings.

jose.amengual avatar
jose.amengual

enabled = true

susie-h avatar
susie-h

Can I add that as an input? I tried doing that and it didn’t work.

susie-h avatar
susie-h
  enabled = true
  xray_tracing_enabled = true #X-Ray tracing
  metrics_enabled = true #Detailed metrics
  logging_level = "INFO"
susie-h avatar
susie-h

^ when I do this and run the code, custom access logging is still inactive

jose.amengual avatar
jose.amengual
  create_log_group       = local.enabled && var.logging_level != "OFF"
jose.amengual avatar
jose.amengual

do you have that enabled?

jose.amengual avatar
jose.amengual

var.logging_level

susie-h avatar
susie-h

I have logging_level = INFO

susie-h avatar
susie-h

enabled isn’t seeing itself as true

jose.amengual avatar
jose.amengual

then you should have a log group created for your api

susie-h avatar
susie-h

Both execution logs and access logs are disabled in the stage details

jose.amengual avatar
jose.amengual

you will have to check the plan and see

susie-h avatar
susie-h

It looks like I manually created the CloudWatch role instead of using the account-settings module that came with the API GW module (https://github.com/cloudposse/terraform-aws-api-gateway/tree/main/examples/account-settings)

susie-h avatar
susie-h

is that why “enabled” isn’t true?

susie-h avatar
susie-h

account-settings is run once per region. I run the api-gw module many times to create new API GWs. How can the api-gw enabled variable rely on the account-settings module to pass enabled=true if one is run once and the other is run many times?

jose.amengual avatar
jose.amengual

no, that is not how it works

jose.amengual avatar
jose.amengual

the account-settings is run once per region

jose.amengual avatar
jose.amengual

then you go and deploy your API gateways with logging_level = XXX

jose.amengual avatar
jose.amengual

that is all you need

susie-h avatar
susie-h

and the “enabled=true” follows the future API GWs that get deployed using the module?

jose.amengual avatar
jose.amengual

no

jose.amengual avatar
jose.amengual

the account-settings is a submodule

jose.amengual avatar
jose.amengual

look at the examples folder

susie-h avatar
susie-h

I did. None of the examples show logging enabled.

susie-h avatar
susie-h

I don’t understand how to get “local.enabled” to be true along with logging_level. It looks like both need to be true for create_log_group, as mentioned in your code snippet:

create_log_group       = local.enabled && var.logging_level != "OFF"

I have logging_level set to “INFO”.

jose.amengual avatar
jose.amengual

just pass enabled = true to your module instantiation

jose.amengual avatar
jose.amengual

the value comes from the context.tf file

susie-h avatar
susie-h

I pass these variables to the api gw module and it does not create the cloudwatch logs:

  enabled = true
  xray_tracing_enabled = true #X-Ray tracing
  metrics_enabled = true #Detailed metrics
  logging_level = "INFO"
jose.amengual avatar
jose.amengual

what about your plan? does it show the cloudwatch group at all?

jose.amengual avatar
jose.amengual

what version of the module are you using?

susie-h avatar
susie-h

No, it creates aws_api_gateway_method_settings and aws_api_gateway_stage

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_api_gateway_method_settings.all[0] will be created
  + resource "aws_api_gateway_method_settings" "all" {
      + id          = (known after apply)
      + method_path = "*/*"
      + rest_api_id = "vu1a6hqjxk"
      + stage_name  = "io"

      + settings {
          + cache_data_encrypted                       = (known after apply)
          + cache_ttl_in_seconds                       = (known after apply)
          + caching_enabled                            = (known after apply)
          + data_trace_enabled                         = (known after apply)
          + logging_level                              = "OFF"
          + metrics_enabled                            = true
          + require_authorization_for_cache_control    = (known after apply)
          + throttling_burst_limit                     = -1
          + throttling_rate_limit                      = -1
          + unauthorized_cache_control_header_strategy = (known after apply)
        }
    }

  # aws_api_gateway_stage.this[0] will be created
  + resource "aws_api_gateway_stage" "this" {
      + arn                  = (known after apply)
      + deployment_id        = "naxrtf"
      + execution_arn        = (known after apply)
      + id                   = (known after apply)
      + invoke_url           = (known after apply)
      + rest_api_id          = "vu1a6hqjxk"
      + stage_name           = "io"
      + tags                 = {
          + "Name"  = "green-parakeet-io"
          + "Stage" = "io"
        }
      + tags_all             = {
          + "Name"      = "green-parakeet-io"
          + "Stage"     = "io"
          + "Terraform" = "true"
        }
      + web_acl_arn          = (known after apply)
      + xray_tracing_enabled = true
    }

Plan: 2 to add, 0 to change, 0 to destroy.
jose.amengual avatar
jose.amengual

that is old….

jose.amengual avatar
jose.amengual

` + logging_level = "OFF"`

susie-h avatar
susie-h

+ logging_level = "OFF" ??? what do you mean by this?

jose.amengual avatar
jose.amengual

that is on your plan

jose.amengual avatar
jose.amengual

so it’s not reading your value for some reason

jose.amengual avatar
jose.amengual

you have logging_level = "INFO"

jose.amengual avatar
jose.amengual

and that should be on the settings

susie-h avatar
susie-h

I see that now. OK, thanks for pointing that out. I’ll look into that and then update to the newer module.

I’m using Terragrunt to pass logging_level as an input

susie-h avatar
susie-h

Thank you @jose.amengual, it is working now

susie-h avatar
susie-h

Do you know if logging_level will be updated to support the selected category in the screenshot?

jose.amengual avatar
jose.amengual

create a PR, we can review it

susie-h avatar
susie-h

ok

susie-h avatar
susie-h

It’s a Terraform issue. I don’t think your code can add something that isn’t there yet in the original resource. I opened an issue with Terraform to hopefully get it added. https://github.com/hashicorp/terraform-provider-aws/issues/35863

#35863 [Enhancement]: additional logging_level options for aws api gw to match aws gui options

Description

AWS API Gateways offer 4 logging levels in the stage settings:

  1. Off
  2. Errors only
  3. Errors and Info Logs
  4. Full request and response logs

Currently, the resource api_gateway_method_settings has options for the first 3 but not the 4th. I’m requesting that the option be added to select “Full request and response logs” in the logging_level argument for the api_gateway_method_settings resource.

Affected Resource(s) and/or Data Source(s)

api_gateway_method_settings

Potential Terraform Configuration

Currently "The available levels are OFF, ERROR, and INFO" for logging_level. I propose "FULL" be added to configure "Full request and response logs".

References

Terraform resource:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_method_settings

Setting in API GW: (screenshot)

Would you like to implement a fix?

None

susie-h avatar
susie-h

Terraform responded. I don’t see the variable in your module. I’ll create a PR.

susie-h avatar
susie-h

I submitted a PR

jose.amengual avatar
jose.amengual

link?

susie-h avatar
susie-h
#36 Added variable for cloudwatch Full Request and Response Logs

what

• Added variable data_trace_enabled to the aws_api_gateway_method_settings resource • This change allows for configuration of CloudWatch logging setting “Full Request and Response Logs” available in the AWS UI. [2]

why

The variable logging_level controls the CloudWatch log setting in the AWS UI for OFF, INFO, and ERROR, but doesn’t include an option for “Full Request and Response Logs”. In the AWS UI for API GW, there’s an additional option, “Full Request and Response Logs”, as shown in the screenshot:

(screenshot)

According to the Terraform documentation, the variable data_trace_enabled = true is required in conjunction with logging_level = "INFO" to enable “Full Request and Response Logs”. This is added to the aws_api_gateway_method_settings resource in the settings code block [2]:

settings {
    logging_level      = "INFO"
    metrics_enabled    = true
    data_trace_enabled = true
  }

references

[1] Terraform resource:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_method_settings

[2] Closed Issue with Terraform citing solution
hashicorp/terraform-provider-aws#35863 (comment)

jose.amengual avatar
jose.amengual

please check the comments on the PR

jose.amengual avatar
jose.amengual

run make precommit/terraform and commit the changes

susie-h avatar
susie-h

ok

2024-02-16

Release notes from terraform avatar
Release notes from terraform
11:23:31 PM

v1.8.0-alpha20240216 1.8.0-alpha20240216 (February 16, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introduced for #34567…

Release v1.8.0-alpha20240216 · hashicorp/terraform

1.8.0-alpha20240216 (February 16, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introdu…

apply schema marks to returned instance values by jbardin · Pull Request #34567 · hashicorp/terraform

The original sensitivity handling implementation applied the marks from a resource schema only when decoding values for evaluation. This appeared to work in most cases, since the resource value cou…

2024-02-19

Mannan Bhuiyan avatar
Mannan Bhuiyan

@Everyone Hi all, friends. Can anyone refer me to a Terraform module where the root module calls a child module as a source and creates the required resources from locals, like:

locals {
}

module "rds" {
  for_each = { for k, v in local.rds : k => v if try(v.create, true) }
  source   = "./modules/rds"
}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For this ChatGPT will be your friend


2024-02-20

2024-02-21

jswc avatar

Hi folks! I am trying to create a “composite” DataDog monitor (https://registry.terraform.io/providers/DataDog/datadog/latest/docs/guides/monitors#composite-monitors).

We have some alarms defined in .yaml, like these: https://github.com/cloudposse/terraform-datadog-platform/blob/main/catalog/monitors/k8s.yaml.

I couldn’t find an example composite alarm in the repo, and am struggling to piece together a query that DataDog validates successfully. Using some existing monitors here for a simple composite alarm example, should something like this work?

k8s-high-cpu-usage:
...
k8s-high-disk-usage:
...

k8s-cpu-disk-composite:
  name: "(k8s) High CPU and High Disk Usage Detected"
  type: composite
  query: |
    datadog_monitor.k8s-high-cpu-usage.id || datadog_monitor.k8s-high-disk-usage.id 
...

I tested a query of 123456789 || 987654321, and that works OK. So it just seems to be a problem of grabbing those IDs. I also tried k8s-high-cpu-usage.id || k8s-high-disk-usage.id, but that also had validation issues.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Ben Smith (Cloud Posse) @Jeremy G (Cloud Posse) @Jeremy White (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@jswc I do not know the direct answer to your question, but I have some tips for you.

• The monitors in our catalog are old and the format outdated. (We are working on updating them.)
• If you can update to the latest version of our monitors module, then you can define the monitor in JSON. The advantage of defining the monitor in JSON is that you can use the Datadog website/console to create the monitor, using all the built-in help and shortcuts, and then export the final monitor as JSON and import it into Terraform. See our examples for what that looks like.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@jswc Oh, sorry, I misunderstood your question. No, we do not have a way for you to insert a monitor ID from a monitor created in Terraform into another monitor query. Terraform in general does not let you use outputs as inputs in the same cycle (plan/apply). Probably the best thing to do is create monitors in one component and then use that component’s outputs to create either composite monitors or synthetic tests in another component, using the output of the first component as input. We do that via Terraform state right now, and have support for that if you are using atmos and its stacks.
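A hedged sketch of that two-component flow using plain terraform_remote_state (backend settings and output names are hypothetical; with atmos stacks the remote-state lookup would be handled for you):

data "terraform_remote_state" "monitors" {
  backend = "s3"

  config = {
    bucket = "example-tfstate"                    # hypothetical
    key    = "datadog/monitors/terraform.tfstate" # hypothetical
    region = "us-east-1"
  }
}

resource "datadog_monitor" "composite" {
  name    = "(k8s) High CPU or High Disk Usage Detected"
  type    = "composite"
  message = "One of the underlying monitors is alerting"
  # Monitor IDs exported as outputs by the first component
  query   = "${data.terraform_remote_state.monitors.outputs.cpu_monitor_id} || ${data.terraform_remote_state.monitors.outputs.disk_monitor_id}"
}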

jswc avatar

Hi Jeremy, thanks for your replies.

Do I understand correctly that your recommendation around components would be like:

# from <https://registry.terraform.io/providers/DataDog/datadog/latest/docs/guides/monitors>

resource "datadog_monitor" "bar" {
  name    = "Composite Monitor"
  type    = "composite"
  message = "This is a message"
  query   = "${datadog_monitor.metric1.id} || ${datadog_monitor.metric2.id}"
}

# me adding some example monitor whose ID is used above
resource "datadog_monitor" "metric1" {
  name    = "metric 1 monitor"
  type    = "metric alert"
  message = "..."
  query   = "${someMetric} > 10"
}
...

i.e. writing plainer Terraform, not using the CloudPosse terraform-datadog-platform way?

Using the IDs like this reminds me of aws policies/policy_attachments, like

# <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment#example-usage>

...
resource "aws_iam_policy" "policy" {
  name        = "test-policy"
  description = "A test policy"
  policy      = data.aws_iam_policy_document.policy.json
}

resource "aws_iam_policy_attachment" "test-attach" {
  name       = "test-attachment"
  users      = [aws_iam_user.user.name]
  roles      = [aws_iam_role.role.name]
  groups     = [aws_iam_group.group.name]
  policy_arn = aws_iam_policy.policy.arn
}

I’m pretty confused about how doing it this way could be OK, but when we go via YAML the outputs/IDs are not accessible.

I may be way off the mark, but my instinct is that:

• .yaml could have ${parameter1} for use with cloudposse/config/yaml, but could also leave some ${} uninterpolated
  ◦ so later, the composite query could still see ${datadog_monitor.foo.id}; but that doesn’t really consider what’s actually happening, so it’s probably nonsense

jswc avatar

Looked around more and I think I understand your recommendation better:

• keep the setup as is, but
  ◦ use the output datadog_monitors (https://registry.terraform.io/modules/cloudposse/platform/datadog/latest?tab=outputs) in another module
    ▪︎ filter irrelevant monitors, pair up the alarms needed for the composite
• create the composite monitor using those pairs

I’ll try and read more about the structure of datadog_monitors; that approach may be fair.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, you need one component (root module) to do terraform apply to create the monitors, then another one to take the IDs output from the first one and use them to make a composite monitor.

jswc avatar

Nice, thanks Jeremy. That could be a nice example in the repo, as it’s quite different from how it’s done with just TF.

Release notes from terraform avatar
Release notes from terraform
08:33:30 PM

v1.7.4 1.7.4 (February 21, 2024) BUG FIXES:

terraform test: Fix automatic loading of variable files within the test directory on windows platforms. (#34666) plan renderer: Very large numbers (> 2^63) will no longer be truncated in the human-readable plan. (#34702)…

terraform test: use platform independent path functions by liamcervante · Pull Request #34666 · hashicorp/terraform

This PR updates the testing framework to use the platform independent filepath package instead of the linux-only path package. This fixes loading and processing variable values loaded from automate…

plan rendering: fix truncation of very large numbers by liamcervante · Pull Request #34702 · hashicorp/terraform

When unmarshalling the JSON plan, the renderer now uses json.Number to represent numbers instead of float64. This means that very large numbers (> 2^63) will no longer be truncated down to 2^63.

2024-02-22

2024-02-23

2024-02-25

2024-02-26

Andrew Miskell avatar
Andrew Miskell

Hi guys, I’m having an issue I can’t seem to figure out. I’m working on a few Terraform modules for our application, and one of them is an AWS Transfer Family module. Everything so far appears to be working; however, I can’t figure out why it’s not picking up the EIP allocation IDs for the endpoint details. Everything else in the endpoint details works.

The specific area I’m having trouble with is below. Everything else works (like the similar security_group_ids line above it), and I’ve verified the EIPs are created using terraform state show. The weird thing is, if I change “address_allocation_ids” in the lookup to anything else, like “foo”, it picks up the EIPs the module created and works.

  dynamic "endpoint_details" {
    for_each = var.transfer_server_type == "VPC" || var.transfer_server_type == "VPC_ENDPOINT" ? ["enabled"] : []
    content {
      vpc_id                 = lookup(var.endpoint_details, "vpc_id", null)
      vpc_endpoint_id        = lookup(var.endpoint_details, "vpc_endpoint_id", null)
      subnet_ids             = lookup(var.endpoint_details, "subnet_ids", null)
      security_group_ids     = lookup(var.endpoint_details, "security_group_ids", aws_security_group.this[*].id)
      address_allocation_ids = lookup(var.endpoint_details, "address_allocation_ids", aws_eip.this[*].allocation_id)
    }
  }
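A possible explanation (a guess consistent with the symptom): lookup() only falls back to its default when the key is absent. If endpoint_details contains an address_allocation_ids key whose value is null, lookup returns that null and the EIP default is never consulted, while an unused key like "foo" does fall through to the default. A minimal illustration with hypothetical values:

locals {
  # Stand-in for var.endpoint_details with the key present but null
  endpoint_details = {
    address_allocation_ids = null
  }

  # Key present: returns null; the default is ignored
  a = lookup(local.endpoint_details, "address_allocation_ids", ["eipalloc-0abc"])

  # Key absent: returns the default
  b = lookup(local.endpoint_details, "foo", ["eipalloc-0abc"])
}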

2024-02-27

marcelo.eguino avatar
marcelo.eguino

Hi, I have a question regarding Permission Sets in SSO. Currently I’m working with Permission Sets without problems. I’ve tried to add a new one that has customer_managed_policy_attachments, but I’m not able to make it work.

I followed up the documentation from the components but I always get this error:

Error: waiting for SSO Permission Set (arn:aws:sso:::permissionSet/ssoins-7223c730732a9a98/ps-ffa7195776e1bd0d) provision: unexpected state 'FAILED', wanted target 'SUCCEEDED'. last error: Received a 404 status error: Not supported policy arn:aws:iam::999999999999:policy/RedshiftManagement.

What I saw, and I don’t know if this is the issue, is that the customer-managed policy RedshiftManagement is created under the root account 111111111111, but it needs to be attached to the Permission Set at account 999999999999. Here is the complete code of the Permission Set:

locals {
  red_shift_access_permission_set = [{
    name             = "RedshiftAccess",
    description      = "Allow access to Redshift",
    relay_state      = "",
    session_duration = "",
    tags             = {},
    inline_policy    = data.aws_iam_policy_document.SqlWorkbench.json,
    policy_attachments = [
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftDataFullAccess",
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftQueryEditorV2FullAccess",
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftFullAccess"
    ]
    customer_managed_policy_attachments = [
      {
        name = aws_iam_policy.RedshiftManagement.name
        path = aws_iam_policy.RedshiftManagement.path
      }
    ]
  }]
}

resource "aws_iam_policy" "RedshiftManagement" {
  name   = "RedshiftManagement"
  path   = "/"
#  policy = aws_iam_policy_document.RedshiftManagement.json
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect   = "Allow",
        Action   = "s3:GetObject",
        Resource = "*",
      },
    ],
  })
}

data "aws_iam_policy_document" "SqlWorkbench" {
  statement {
    sid       = "SqlWorkbenchAccess"
    effect    = "Allow"
    actions   = ["sqlworkbench:*"]
    resources = ["*"]
  }
  statement {
    sid    = "s3Actions"
    effect = "Allow"
    actions = [
      "s3:PutObject",
      "s3:Get*",
      "s3:List*"
    ]
    resources = [
      "arn:aws:s3:::aaaaaa-redshift",
      "arn:aws:s3:::aaaaaa-redshift/*"
    ]
  }
}

Here RedshiftManagement has a simple action to test the policy attachment.

jose.amengual avatar
jose.amengual

this is using the cloudposse aws-sso, account, and account-map components

marcelo.eguino avatar
marcelo.eguino

Yes, it’s using all the components mentioned

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

if you’re using the root account as the identity administrator with AWS Identity Center, then that’s where the Permission Sets should be as well. Then you can attach a Permission Set to any account under that Organization

jose.amengual avatar
jose.amengual

he is using identity as the account for role chaining

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Alternatively, you could attach the policy doc directly if you merge SqlWorkbench:

locals {
  red_shift_access_permission_set = [{
    name             = "RedshiftAccess",
    description      = "Allow access to Redshift",
    relay_state      = "",
    session_duration = "",
    tags             = {},
    inline_policy    = data.aws_iam_policy_document.redshift_access.json,
    policy_attachments = [
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftDataFullAccess",
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftQueryEditorV2FullAccess",
      "arn:${local.aws_partition}:iam::aws:policy/AmazonRedshiftFullAccess"
    ]
    customer_managed_policy_attachments = []
  }]
}

data "aws_iam_policy_document" "redshift_access" {
  statement {
    sid       = "SqlWorkbenchAccess"
    effect    = "Allow"
    actions   = ["sqlworkbench:*"]
    resources = ["*"]
  }
  statement {
    sid    = "s3Actions"
    effect = "Allow"
    actions = [
      "s3:PutObject",
      "s3:Get*",
      "s3:List*"
    ]
    resources = [
      "arn:aws:s3:::aaaaaa-redshift",
      "arn:aws:s3:::aaaaaa-redshift/*"
    ]
  }
  statement {
    sid = "RedshiftManagement"
    effect = "Allow",
    actions = [
      "s3:GetObject"
    ]
    resources = ["*"]
  }
}
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)


he is using identity as the account for role chaining
then the Permission Sets should be in your identity account as well

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

and the policies in the same account as the permission set

marcelo.eguino avatar
marcelo.eguino

Thanks, I’ll try with that

2024-02-28

Release notes from terraform avatar
Release notes from terraform
01:53:31 PM

v1.8.0-alpha20240228 1.8.0-alpha20240228 (February 28, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introduced for #34567…

Release v1.8.0-alpha20240228 · hashicorp/terraform

1.8.0-alpha20240228 (February 28, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introdu…

apply schema marks to returned instance values by jbardin · Pull Request #34567 · hashicorp/terraform

The original sensitivity handling implementation applied the marks from a resource schema only when decoding values for evaluation. This appeared to work in most cases, since the resource value cou…

Matt Gowie avatar
Matt Gowie

This is fascinating to me: Someone has written their own full-blown Terraform framework for themselves + a complete Stacks alternative. https://github.com/DavidGamba/dgtools/tree/master/bt#stacks-a-different-take https://github.com/DavidGamba/dgtools/tree/master/bt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Hans D @Andriy Knysh (Cloud Posse) it’s using Cuelang

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Cuelang is cool (as @Hans D knows :slightly_smiling_face:). In Cue you can describe things, generate things, and validate things, all using the same config/model (the data is self-validating, unlike YAML). But it’s not simple, and not widely used.

Matt Gowie avatar
Matt Gowie

Yeah, I’ve run into it a few times and I’ve heard people sing its praises. Of course, it’s one of those tools where I question whether the additional complexity it adds only further complicates things, but I tend to question that about a lot. You’re probably not practicing “Infrastructure as Data” anymore if you’re using Cue.

Matt Gowie avatar
Matt Gowie

We were looking into https://kcl-lang.io/ internally a tiny bit, which is similar.

KCL programming language. - Mutation Validation Abstraction Production-Ready | KCL programming language.

KCL is an open-source constraint-based record & functional language mainly used in configuration and policy scenarios.

Hans D avatar

Depends on your definition of “Infra as Data”. Personally, I prefer the dynamic parts of Cue over the HCL implementation. And it solves a lot of the repetitive bits that are now all over the place (I really like the attributes/includes, also as part of asciidoc, for documentation).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Don’t forget Pkl

Hans D avatar

cue, rego, asciidoc, gomplate and jq/yq … perhaps throw in some go-task (sorry)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I wonder why Google doesn’t promote Cue better. They should have created some frameworks on top of it to increase adoption (e.g., a Terraform-like framework in Cue). Like Rails for Ruby (before which Ruby was almost unknown).

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


cue, rego, asciidoc, gomplate and jq/yq

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’d take Cue over all of that

joshmyers avatar
joshmyers

lol, Apple is driving their Terraform with Pkl: generate JSON, pass it to Terraform. The validation stuff is super nice. Env var support, and arguably a nicer language to write than HCL.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a smart use of Pkl+JSON. I’d be curious whether they are generating imperative Terraform (JSON) code, e.g., without vars, and handling that entirely in Pkl.

2024-02-29

leonkatz avatar
leonkatz

Is there a way to merge aws_secretsmanager values? I’m trying to create a secret that I can add to. My main problem is the first run, when there is no secret yet.

Elad Levi avatar
Elad Levi

Please explain a bit more

leonkatz avatar
leonkatz

I think I have it down to something simple: I have a variable that might be empty or might be a key/value pair, and I want to combine it with another key/value pair as a map.

leonkatz avatar
leonkatz

I just don’t know if the variable will be a key/value pair or an empty string ""
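A minimal sketch of one way to handle that, assuming the variable is either "" or a map (the names are hypothetical): normalize it to a map first, then merge.

variable "maybe_pairs" {
  # May be "" or a map of key/value pairs
  type    = any
  default = ""
}

locals {
  # tomap("") fails, so try() falls back to an empty map
  normalized = try(tomap(var.maybe_pairs), {})
  combined   = merge(local.normalized, { added_key = "added_value" })
}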

susie-h avatar
susie-h

I work with the api-gateway module a lot. I’m curious how it’s recommended to manage the Lambda permissions that go along with resources configured inside the API GW. Say a resource used a Lambda function for its integration request; that Lambda would need a policy statement granting invoke permissions for that API GW. I see there isn’t Lambda permission code in the module. I’m working on a separate module to add permissions after the gateway is deployed. I wanted to know if the Cloud Posse team had any discussions on navigating this when creating the code for the module.
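For reference, a hedged sketch of the kind of statement being discussed: an aws_lambda_permission granting API Gateway the right to invoke the integration Lambda (the function and output names are hypothetical):

resource "aws_lambda_permission" "apigw_invoke" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.handler.function_name # hypothetical
  principal     = "apigateway.amazonaws.com"

  # Scope the grant to this API's execution ARN (any stage/method/path)
  source_arn = "${module.api_gateway.execution_arn}/*/*" # hypothetical output
}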
