#terraform-aws-modules (2023-09)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2023-09-11

Eric D. Berg

I’m running into an issue with terraform-aws-datadog-lambda-forwarder, where the aws_lambda_permission.allow_s3_bucket resource is barfing on bucket names that contain periods.

Bucket names can contain any characters that are valid in FQDNs, but the raw bucket name is interpolated into the aws_lambda_permission.allow_s3_bucket statement_id attribute, which lets through characters that are invalid for that argument, specifically the period (.). For the bucket name com.somebody.dev-foo-us-east-2-s3-logs, this produces the following error:

│ Error: invalid value for statement_id (must contain alphanumeric, underscores or dashes only)
│
│   with module.datadog_lambda_forwarder.aws_lambda_permission.allow_s3_bucket["com.somebody.dev-foo-us-east-2-s3-logs"],
│   on ../../../../../../src/terraform-aws-datadog-lambda-forwarder/lambda-log.tf line 126, in resource "aws_lambda_permission" "allow_s3_bucket":
│  126:   statement_id  = "AllowS3ToInvokeLambda-${each.value}"

This change fixes that for periods:

statement_id = "AllowS3ToInvokeLambda-${
    replace(
      title(
        replace(each.value, ".", " "),
      ),
      " ", ""
    )
  }"

Is there another way around this?
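For what it’s worth, a simpler sketch would be to map periods to dashes, since dashes are valid in a statement_id per the error message. This avoids the replace/title/replace dance, at the cost of a possible collision if two bucket names differ only by `.` versus `-`; it’s illustrative, not the module’s actual code:

```hcl
# Sketch: replace each period with a dash so the Sid stays valid.
# Caveat: "a.b" and "a-b" would collide under this scheme.
statement_id = "AllowS3ToInvokeLambda-${replace(each.value, ".", "-")}"
```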

Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse)

Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse)

There are a few ways to think about this. To be fair, this behavior adds some logic that might be unexpected to another user. Some folks would probably rather be alerted to the stray character than find out later that their bucket didn’t match the name they specified.

As such, it might be better to add a validation block to the variable. The benefit is that the error message could even suggest the code change. Such as:

validation {
    # Must be true for the value to pass: reject names containing a period.
    condition     = !strcontains(var.bucket, ".")
    error_message = "Please filter out invalid characters using something like `replace( name, \".\", \" \")`"
  }
Input Variables - Configuration Language | Terraform | HashiCorp Developer

Input variables allow you to customize modules without altering their source code. Learn how to declare, define, and reference variables in configurations.

Jeremy White (Cloud Posse)

To be clear, that’s just a naive example. It’s probably better to use something like this Stack Overflow answer:

Regex for s3 bucket name

I’m trying to create an s3 bucket through cloudformation. I tried using regex ^([0-9a-z.-]){3,63}$, but it also accepts the patterns "..." and "---", which are invalid according to new s3 naming
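Combining the two ideas, a validation that enforces Sid-safe names up front might look like this sketch (the variable name var.bucket follows the earlier example and is an assumption, not the module’s actual input):

```hcl
variable "bucket" {
  type = string

  validation {
    # Only allow characters that are valid in a Lambda permission statement_id:
    # alphanumerics, underscores, and dashes.
    condition     = can(regex("^[0-9a-zA-Z_-]+$", var.bucket))
    error_message = "Bucket name contains characters that are invalid in a statement_id; only alphanumerics, underscores, and dashes are allowed."
  }
}
```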

Jeremy White (Cloud Posse)

err, actually, starting to realize you’re speaking of the policy…

Jeremy White (Cloud Posse)

I guess statement Ids don’t form many contracts with other APIs, so the additional behavior proposed ought to be benign

Eric D. Berg

Yes, it’s a formatting thing in the policy Sid. Bucket names take “.”, but policy Sids don’t.

Eric D. Berg

and a little research suggests that the period is the only character valid in bucket names that falls in this category. I expect to be wrong about that.

Jeremy White (Cloud Posse)

To be fair, the statement id ought not to be required in general. Maybe it would be even wiser to remove it altogether?

Eric D. Berg

I agree, @Jeremy White (Cloud Posse). It’s an aws_lambda_permission with a single clause.

Jeremy White (Cloud Posse)

The permission would have to be truly convoluted for its intent not to be clear. Let’s just do that. Less is more

Eric D. Berg

and apparently, TF will generate them if not provided. From those statement_id docs you linked above:
(Optional) A unique statement identifier. By default generated by Terraform.
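Dropping the argument entirely, the resource would reduce to something like the following sketch (resource and reference names are illustrative, not the module’s actual code):

```hcl
resource "aws_lambda_permission" "allow_s3_bucket" {
  for_each = local.s3_buckets # illustrative; the actual collection name may differ

  # statement_id omitted: Terraform generates a unique identifier automatically.
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.forwarder.function_name # illustrative
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${each.value}"
}
```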

Jeremy White (Cloud Posse)

I’ll still want an internal review with a team member, and opinions change a good deal between cohorts. But I doubt this will take much time to fix up.

Jeremy White (Cloud Posse)

Do you consume the component too?

Jeremy White (Cloud Posse)

or just the module?

Jeremy White (Cloud Posse)
#64 fix: drop statement_id. Low value and has issues

what

• remove the statement_id from the aws_lambda_permission resource for buckets

why

• statement_id breaks when valid bucket names are specified. The two have entirely different requirements for what is syntactically valid
• statement_id is intended for organizing many documents. Since the permission is simple and its intent direct, the parameter is unneeded
• The alternative, devising a way to sanitize valid bucket names, is more logic to maintain and could end up breaking again in the future.

references

• statement_id
• SweetOps thread (Public slack)

Eric D. Berg

We just use the module.

Eric D. Berg

I’m not even sure what the component is. Can you tap me with the old Cluestick one time?

Jeremy White (Cloud Posse)

sure! the component style of datadog-lambda-forwarder lives here and is basically more opinionated than a vanilla terraform module. Components lean on using atmos for configuration, which means you define the terraform once and then specify all your configs for it in a catalog. Think of it as a way to keep your config separate from your implementation. The structure of an atmos-style repo looks like this:

.
├── components
│   └── terraform
│       ├── some-terraform
│       │   └── component.yaml
│       ├── other-terraform
│       │   └── component.yaml
│       └── datadog-lambda-forwarder
│           └── component.yaml
└── stacks
    ├── catalog
    │   ├── my-app
    │   │   ├── defaults.yaml
    │   │   ├── non-prod.yaml
    │   │   └── prod.yaml
    │   └── other_defaults.yaml
    ├── orgs
    │   └── acme
    │       └── plat
    │           ├── dev
    │           │   └── eu-west-1
    │           │       └── infra.yaml
    │           ├── prod
    │           │   └── eu-west-1
    │           │       └── infra.yaml
    │           └── staging
    │               └── eu-west-1
    │                   └── infra.yaml
    └── workflows
        └── my_common_tasks.yaml

Where the catalog houses defaults, and the orgs hold only the specifics for a given environment… so you DRY up your config -and- your terraform. Atmos also has workflows, so you only need to run one command, and it’ll run terraform as many times as it takes for each component in a particular stack.

Despite how opinionated it is, it stays true to being ‘just terraform’. So you can always drop atmos and just run your modules with var files. In fact, that’s all atmos fundamentally does… write var-files and run terraform with them.

datadog-lambda-forwarder | The Cloud Posse Developer Hub

This component is responsible for provisioning all the necessary infrastructure to

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Jeremy White (Cloud Posse)

Also, the PR merged, so you should be good to go with the latest release of the datadog lambda forwarder.

Eric D. Berg

Thanks, @Jeremy White (Cloud Posse)! Much appreciated!!
