#terraform-aws-modules (2024-05)

Terraform Modules

Discussions related to https://github.com/terraform-aws-modules

Archive: https://archive.sweetops.com/terraform-aws-modules/

2024-05-08

Dale avatar

Hey! :wave: I have a question about this ECR module. It seems to enforce the idea that the lifecycle policy should be based on the number of images in the repository (default: 500) rather than the number of days an image has hung around for, even though ECR supports both types of policies. Is that a conscious decision by design, or an oversight? If by design, is that because it's a widely accepted best practice? Looking for either sources I can read or just a quick summary of why it's the way it is, please!

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR

Dale avatar

@Erik Osterman (Cloud Posse) tagging you in case you can point me to the right person, thanks!


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the reason it's based on the number of images is that there's a hard limit that cannot be increased. Lifecycle by age works for repos that aren't that busy, but for busy repos you end up with a forced failure.
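
For reference, this is roughly what the two policy types look like with the raw AWS provider resource (repository names here are made up for illustration):

  # Count-based rule: keep at most the newest 500 images, which mirrors the module's default approach.
  resource "aws_ecr_lifecycle_policy" "count_based" {
    repository = "example-app" # hypothetical repository

    policy = jsonencode({
      rules = [{
        rulePriority = 1
        description  = "Expire images beyond the newest 500"
        selection = {
          tagStatus   = "any"
          countType   = "imageCountMoreThan"
          countNumber = 500
        }
        action = { type = "expire" }
      }]
    })
  }

  # Age-based rule: expire images older than 30 days. On a busy repo this can still
  # let the image count climb toward the per-repository quota, which is the failure
  # mode described above.
  resource "aws_ecr_lifecycle_policy" "age_based" {
    repository = "example-app-dev" # hypothetical repository

    policy = jsonencode({
      rules = [{
        rulePriority = 1
        description  = "Expire images older than 30 days"
        selection = {
          tagStatus   = "any"
          countType   = "sinceImagePushed"
          countUnit   = "days"
          countNumber = 30
        }
        action = { type = "expire" }
      }]
    })
  }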

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I could see a case being made to support both, and we wouldn't reject it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The other consideration is that you don't want the production image lifecycled.
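
If memory serves, the Cloud Posse module exposes inputs for both of these concerns; a hedged sketch, assuming max_image_count and protected_tags are the variable names (verify against the module's variables.tf):

  module "ecr" {
    source = "cloudposse/ecr/aws"
    # version = "x.x.x" # pin to a released version

    name            = "app" # hypothetical name
    max_image_count = 500
    # Assumed input: tag prefixes whose images the lifecycle rules should never expire,
    # so images tagged e.g. prod-* stay put even when the count limit is hit.
    protected_tags  = ["prod"]
  }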

Dale avatar

ok, so it’s both for cost implications on busy repos and stability of in-use images

Dale avatar

or did you mean in your first message that using lifecycle by age means you could still hit the hard quota in a repo on the AWS account level?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, exactly - that using lifecycle by age means you could still hit the hard quota of the ECR repository

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It used to be 1,000 I believe. They raised it to 10,000.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Er.. that’s the wrong limit

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s not a lot of tags when you run automated builds in CI

Dale avatar

Gotcha! I've never hit that limit before and assumed that if you hit it while using a lifecycle-by-age policy, AWS would handle it automatically somehow. Thanks very much for the explanation!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


> then AWS would handle it automatically somehow
At least not when we first encountered it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But things improve all the time, so if there's a better way to accomplish the same outcome… we're open to it.

Dale avatar

TBH, it's probably better to set it as a hard number of images rather than a number of days, if only to prevent people from accidentally storing too many images in that timeframe and blowing their budget.

Dale avatar

but I'm not sure whether you design the modules with that kind of thing in mind or not

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, you could use this for that too. More aggressively purge images in dev, for example.
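
A minimal sketch of that idea, again assuming a max_image_count input on the module:

  # Dev: purge aggressively, keep only a short history.
  module "ecr_dev" {
    source = "cloudposse/ecr/aws"

    name            = "app"
    stage           = "dev"
    max_image_count = 50
  }

  # Prod: keep the larger default-sized history.
  module "ecr_prod" {
    source = "cloudposse/ecr/aws"

    name            = "app"
    stage           = "prod"
    max_image_count = 500
  }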

2024-05-11

Marat Bakeev avatar
Marat Bakeev

Hey everyone, could anyone help me with the aws-team-roles module? :sweat_smile: How can I change the name format for the roles that are generated by that module? For example, I'm getting a team role like this:

  # aws_iam_role.default["admin"] will be created
  + resource "aws_iam_role" "default" {
...
      + name                  = "nsp-gbl-dns-admin"

But I'm trying to use the name format 'namespace-tenant-environment-stage', and when I run terraform in the org account, it wants to assume the role nsp-core-gbl-dns-terraform and fails %)

I've found that if I set var.label_order to

      - namespace
      - tenant
      - environment
      - stage 

Then it works fine.

Is this the correct solution? Or am I trying to do something backwards?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a very opinionated root module that is part of our refarch.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, the label order is the correct way.
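
In other words, override the component's label_order so the generated role names include the tenant. As a plain Terraform input value (exactly how it is passed depends on how the component is configured, e.g. tfvars or stack config) it would look like:

  # terraform.tfvars for the aws-team-roles component
  label_order = ["namespace", "tenant", "environment", "stage"]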

2024-05-15

Quentin BERTRAND avatar
Quentin BERTRAND

Hello, https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/commit/aa3840ee7874a74c27e4226eaab585fab9501faf#diff-dc46acf24af[…]1f33d9bf2532fbbR1

With the data source, terraform plan no longer works if the subnets don't exist yet (which can happen when an entire infrastructure has to be created from scratch).

Would you have an idea for solving this problem?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cloud Posse always separates infrastructure into its various components, to reduce blast radius, speed up plans, and in general avoid this entire category of problems.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The life cycle of the VPC is entirely different from the life cycle of an EC2 auto scale group.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thus, they don’t belong in the same root module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Of course, this is an opinionated design, and it’s not shared by everyone - but it’s probably why we didn’t encounter this as a problem.
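
A rough sketch of that separation, with hypothetical backend settings and module names: the VPC root module exports the subnet IDs, and the auto scale group root module reads them from remote state instead of looking up subnets that may not exist yet.

  # vpc/outputs.tf -- VPC root module, applied first
  output "private_subnet_ids" {
    # assumes a subnets module in this root module that exposes this value
    value = module.subnets.private_subnet_ids
  }

  # asg/main.tf -- separate root module for the auto scale group
  data "terraform_remote_state" "vpc" {
    backend = "s3"
    config = {
      bucket = "example-tfstate-bucket" # hypothetical bucket
      key    = "vpc/terraform.tfstate"
      region = "us-east-1"
    }
  }

  module "autoscale_group" {
    source = "cloudposse/ec2-autoscale-group/aws"
    # version = "x.x.x"

    name       = "example"
    # assumed input name; check the module's variables
    subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids
    # ...plus the module's other required inputs (image_id, instance_type, min/max size, etc.)
  }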
