#terraform-aws-modules (2024-05)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2024-05-07

2024-05-08

Dale

Hey! :wave: I have a question about this ECR module. It seems to enforce the idea that the lifecycle policy should be based on the number of images in the repository (default: 500) rather than the number of days an image has hung around for, even though ECR supports both types of policies. Is that a conscious decision by design, or an oversight? If by design, is that because it's a widely accepted best practice? Looking for either sources I can read or just a quick summary of why it's the way it is, please!

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR

Dale

@Erik Osterman (Cloud Posse) tagging you in case you can point me to the right person. Thanks!


Erik Osterman (Cloud Posse)

So the reason it's based on the count is that there's a hard limit on images per repository that cannot be increased. Lifecycle by age works for repos that aren't that busy, but on busy repos you eventually hit a forced failure.

Erik Osterman (Cloud Posse)

I could see a case being made to support both, and we wouldn't reject it.

Erik Osterman (Cloud Posse)

The other consideration is that you don't want the production image to get lifecycled.

Dale

OK, so it's both for cost implications on busy repos and for the stability of in-use images

Dale

Or did you mean in your first message that using lifecycle by age means you could still hit the hard quota for a repo at the AWS account level?

Erik Osterman (Cloud Posse)

Yes, exactly - using lifecycle by age means you could still hit the hard quota of the ECR repository.

Erik Osterman (Cloud Posse)

It used to be 1,000, I believe. They raised it to 10,000.

Erik Osterman (Cloud Posse)

Er.. that’s the wrong limit

Erik Osterman (Cloud Posse)

That’s not a lot of tags when you run automated builds in CI

Dale

gotcha! I’ve never got that limit before and assumed that if you hit it while using a lifecycle-by-age policy then AWS would handle it automatically somehow thanks very much for the explanation

Erik Osterman (Cloud Posse)


> then AWS would handle it automatically somehow

At least, not when we first encountered it.

Erik Osterman (Cloud Posse)

But things improve all the time, so if there's a better way to accomplish the same outcome… we're open to it

Dale

TBH, it’s probably better to set it as a hard number of images rather than number of days, if only to prevent people from accidentally storing too many images in that timeframe and blowing their budget

Dale

but I’m not sure if when you design the modules you do it with that kind of thing in mind or not

Erik Osterman (Cloud Posse)

Yes, you could use this for that too. More aggressively purge images in dev, for example.

2024-05-11

Marat Bakeev

Hey everyone, could anyone help me with the aws-team-roles module? :sweat_smile: How can I change the name format for the roles that are generated by that module? For example, I'm getting a team role like this:

  # aws_iam_role.default["admin"] will be created
  + resource "aws_iam_role" "default" {
...
      + name                  = "nsp-gbl-dns-admin"

But I’m trying to use the name format ‘namespace-tenant-environment-stage’ - and when I run terraform in the org account, it wants to assume role nsp-core-gbl-dns-terraform. And fails %)

I’ve found out, that if I set var.label_order to

      - namespace
      - tenant
      - environment
      - stage 

Then it works fine.

Is this the correct solution? Or am I trying to do something backwards?

Erik Osterman (Cloud Posse)

This is a very opinionated root module that's part of our refarch.

Erik Osterman (Cloud Posse)

Yes, the label order is the correct way.


2024-05-15

Quentin BERTRAND

Hello, https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/commit/aa3840ee7874a74c27e4226eaab585fab9501faf#diff-dc46acf24af[…]1f33d9bf2532fbbR1

With that data source, terraform plan no longer works if the subnets don't exist yet (which can happen when an entire infrastructure has to be created from scratch).

Would you have an idea for solving this problem?

Erik Osterman (Cloud Posse)

Cloud Posse always separates infrastructure into its various components to reduce blast radius, speed up plans, and in general avoid this entire category of problems.

Erik Osterman (Cloud Posse)

The life cycle of the VPC is entirely different from the life cycle of an EC2 auto scale group.

Erik Osterman (Cloud Posse)

Thus, they don’t belong in the same root module.

Erik Osterman (Cloud Posse)

Of course, this is an opinionated design, and it’s not shared by everyone - but it’s probably why we didn’t encounter this as a problem.


2024-05-21

Zing

hey there, I’m looking at the aws-config module and I’m running into a few issues:

• I’m using an organization aggregator

• I’m using a central SNS topic and S3 bucket

• I see resources in my child accounts showing up in the aggregators for my central account

• I do not see configuration change events for my child accounts (configuration change timeline) in the central aggregator

• I do see configuration change events in the configuration timeline on the child accounts

• I do not see anything actually touching the central SNS topic. Is this expected? Am I not supposed to see configuration timeline / change events in the central account? Should I see activity on the SNS topic?

Erik Osterman (Cloud Posse)

@Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse)

From what I've seen, the organization deployment of AWS Config really just calls the APIs in the child accounts. There's no real connection to the central account, and in fact, if you use aws-config in a child account, I believe organization-level and account-level AWS Config share the same API limits for total rules allowed and total conformance packs.

2024-05-26

Marat Bakeev

Hi guys, there seems to be an issue with the VPC component - would it be possible to update the version of dynamic-subnets within it, so we can use ap-southeast-4 (Melbourne)? Details are here - https://github.com/cloudposse/terraform-aws-components/issues/1047

#1047 vpc: subnets module does not support Melbourne region

Describe the Bug

The VPC component in this repo uses a subnets module version that doesn't support the Melbourne opt-in region (ap-southeast-4).
vpc/main.tf#L142

The issue is that cloudposse/dynamic-subnets/aws version 2.3.0 uses cloudposse/utils version 1.1.0, which was released before Melbourne was added to its code.

cloudposse/dynamic-subnets/aws 2.4.0 uses cloudposse/utils version 1.3.0, which includes support for Melbourne
(It was actually added in utils 1.2.0 - cloudposse/terraform-aws-utils#26)

It would be great if this component could be updated to use cloudposse/dynamic-subnets/aws 2.4.0 or higher.

Expected Behavior

Deployment of a vpc works in ap-southeast-4.

Steps to Reproduce

Use the VPC component with these variables:

# Variables for the component 'vpc' in the stack 'core-apse4-network':
availability_zones:
- a
- b
- c
enabled: true
environment: apse4
map_public_ip_on_launch: false
max_subnet_count: 3
region: ap-southeast-4
stage: network

Run the stack and observe the error: atmos terraform apply vpc -s core-apse4-network

Planning failed. Terraform encountered an error while generating this plan.

│ Error: Invalid index
│
│   on .terraform/modules/subnets/main.tf line 71, in locals:
│   71:   subnet_az_abbreviations = [for az in local.subnet_availability_zones : local.az_abbreviation_map[az]]
│     ├────────────────
│     │ local.az_abbreviation_map is object with 271 attributes
│
│ The given key does not identify an element in this collection value.
╵
(The same "Invalid index" error is repeated for each of the three availability zones.)

exit status 1

Screenshots

No response

Environment

No response

Additional Context

No response


2024-05-29
