#terraform-aws-modules (2023-09)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2023-09-11
I’m running into an issue with terraform-aws-datadog-lambda-forwarder, where the aws_lambda_permission.allow_s3_bucket resource is barfing on bucket names that contain periods. Bucket names can contain any characters that are valid in FQDNs, but using the raw bucket name in the aws_lambda_permission.allow_s3_bucket.statement_id attribute lets through characters that are invalid for that argument, specifically the period (.). For the bucket name com.somebody.dev-foo-us-east-2-s3-logs, it produces the following error:
│ Error: invalid value for statement_id (must contain alphanumeric, underscores or dashes only)
│
│ with module.datadog_lambda_forwarder.aws_lambda_permission.allow_s3_bucket["com.somebody.dev-foo-us-east-2-s3-logs"],
│ on ../../../../../../src/terraform-aws-datadog-lambda-forwarder/lambda-log.tf line 126, in resource "aws_lambda_permission" "allow_s3_bucket":
│ 126: statement_id = "AllowS3ToInvokeLambda-${each.value}"
This change fixes that for periods:
statement_id = "AllowS3ToInvokeLambda-${replace(title(replace(each.value, ".", " ")), " ", "")}"
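(A more compact sketch, not from the thread: Terraform’s replace() treats a slash-wrapped pattern as a regular expression, so the whole class of characters a Sid can’t contain can be stripped in one call.)

```hcl
# Sketch: strip every character that is invalid in a Sid, not just periods.
# replace() interprets a /slash-wrapped/ pattern as a regex.
statement_id = "AllowS3ToInvokeLambda-${replace(each.value, "/[^a-zA-Z0-9_-]/", "")}"
```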
Is there another way around this?
@Jeremy G (Cloud Posse)
There are a few ways to think about this. To be fair, this change adds behavior another user might not expect. Some folks would probably rather be alerted to the stray character than find out later their bucket didn’t match the name they specified.
As such, it might be better to add a validation block to the variable … The benefit is that the error message could even suggest the code change. Such as:
validation {
  condition     = !strcontains(var.bucket, ".")
  error_message = "Please filter out invalid characters using something like `replace(name, \".\", \"\")`"
}
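A fuller version of that idea could validate the whole list of buckets up front (a sketch, not from the module; the variable name is assumed, and strcontains requires Terraform >= 1.5):

```hcl
# Sketch (variable name assumed): fail fast when a bucket name would
# produce an invalid statement_id. The condition must be true to pass.
variable "s3_buckets" {
  type = list(string)

  validation {
    condition     = alltrue([for b in var.s3_buckets : !strcontains(b, ".")])
    error_message = "Bucket names containing periods break the derived statement_id; strip them with replace(name, \".\", \"\") first."
  }
}
```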
To be clear, that’s just a naive example. It’s probably better to do something like this Stack Overflow answer.
err, actually, starting to realize you’re speaking of the policy…
I guess statement Ids don’t form many contracts with other APIs, so the additional behavior proposed ought to be benign
Yes, it’s a formatting thing in the policy Sid. Bucket names take “.”, but policy Sids don’t.
and a little research suggests the period is the only character that falls into this category, though I wouldn’t be surprised to be wrong about that.
To be fair, the statement id ought not to be required in general. Maybe it would be even wiser to remove it altogether?
I agree, @Jeremy White (Cloud Posse). It’s an aws_lambda_permission with a single clause.
It would need to be truly profound to not be clear in intent. Let’s just do that. Less is more
and apparently, TF will generate them if not provided. From those statement_id docs you linked above:
(Optional) A unique statement identifier. By default generated by Terraform.
I’ll still want an internal review with a team member, and opinions change a good deal between cohorts. But I doubt this will take much time to fix up.
Do you consume the component too?
or just the module?
what
• remove the statement_id from the aws_lambda_permission resource for buckets

why
• statement_id breaks when valid bucket names are specified. The two have entirely different requirements for what is syntactically valid
• statement_id is intended for distinguishing many statements. Since the permission is simple and its intent direct… the parameter is unneeded
• The alternative, devising a way to correct valid bucket names, is more logic to maintain and could end up breaking again in the future.
references
• statement_id • SweetOps thread (Public slack)
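The resulting resource would look roughly like this (a sketch only; the for_each expression and function name are assumed, not taken from the module’s lambda-log.tf):

```hcl
resource "aws_lambda_permission" "allow_s3_bucket" {
  for_each = local.s3_buckets # assumed local

  # statement_id intentionally omitted; Terraform generates a unique one,
  # so periods in bucket names no longer matter.
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.forwarder.function_name # assumed name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${each.value}"
}
```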
We just use the module.
I’m not even sure what the component is. Can you tap me with the old Cluestick one time?
sure! the component style of datadog-lambda-forwarder lives here and is basically more opinionated than a vanilla terraform module. Components lean on using atmos for configuration… which means you define the terraform once, and then you specify all your configs for it in a catalog. Think of it as a way to keep your config separate from your implementation. The structure of an atmos-style repo looks like this:
.
├── components
│ └── terraform
│ ├── some-terraform
│ │ └── component.yaml
│ ├── other-terraform
│ │ └── component.yaml
│ └── datadog-lambda-forwarder
│ └── component.yaml
└── stacks
├── catalog
│ ├── my-app
│ │ ├── defaults.yaml
│ │ ├── non-prod.yaml
│ │ └── prod.yaml
│ └── other_defaults.yaml
├── orgs
│ └── acme
│ └── plat
│ ├── dev
│ │ └── eu-west-1
│ │ └── infra.yaml
│ ├── prod
│ │ └── eu-west-1
│ │ └── infra.yaml
│ └── staging
│ └── eu-west-1
│ └── infra.yaml
└── workflows
└── my_common_tasks.yaml
Where the catalog houses defaults, and the orgs hold only the specifics for a given environment… so you DRY up your config -and- your terraform. Atmos also has workflows, so you only need to run one command, and it’ll run terraform as many times as it takes for each component in a particular stack.
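As a sketch of that catalog/orgs split (file paths from the tree above; every key and value here is assumed for illustration, not taken from the real component):

```yaml
# stacks/catalog/my-app/defaults.yaml -- shared defaults (values assumed)
components:
  terraform:
    datadog-lambda-forwarder:
      vars:
        forwarder_log_enabled: true # hypothetical variable

# stacks/orgs/acme/plat/prod/eu-west-1/infra.yaml -- only the prod deltas
import:
  - catalog/my-app/defaults
components:
  terraform:
    datadog-lambda-forwarder:
      vars:
        s3_buckets: # hypothetical variable
          - acme-prod-eu-west-1-s3-logs
```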
Despite how opinionated it is, it stays true to being ‘just terraform’. So you can always drop atmos and just run your modules with var files. In fact, that’s all atmos fundamentally does… write var-files and run terraform with them.
Also, the PR merged, so you should be good to go with the latest release of the datadog lambda forwarder.
Thanks, @Jeremy White (Cloud Posse)! Much appreciated!!