#terraform-aws-modules (2024-05)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2024-05-08
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
Hey! I have a question about this ECR module. It seems to enforce the idea that the lifecycle policy should be based on the number of images in the repository (default: 500) rather than the number of days an image has hung around for, even though ECR supports both types of policies. Is that a conscious decision by design, or an oversight? If by design, is it because it’s a widely accepted best practice? Looking for either sources I can read or just a quick summary on why it’s the way it is please!
Terraform Module to manage Docker Container Registries on AWS ECR
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
@Erik Osterman (Cloud Posse) tagging you in case you can point me to the right person thanks!
Terraform Module to manage Docker Container Registries on AWS ECR
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
So the reason it’s based on the number, is that there’s a hard limit that cannot be increased. Lifecycle by age works for repos that aren’t that busy, but for busy repos then you have a forced failure.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
I could see a case being made to support both, and we wouldn’t reject it.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
The other consideration is you don’t want the production image lifecycled
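For context, a count-based rule of the sort discussed here can be sketched with plain AWS provider resources. This is an illustrative sketch, not the module's actual internals: the repository name and the 500-image limit are made up, and a stricter variant could use `tagStatus = "untagged"` or a `tagPrefixList` so tagged production images are never expired.

```hcl
# Hypothetical repository; the module normally creates this for you.
resource "aws_ecr_repository" "example" {
  name = "example-app"
}

# Count-based expiry: keep at most the newest 500 images, regardless
# of age. Age-based rules use countType = "sinceImagePushed" instead,
# but can't guarantee you stay under ECR's per-repo image quota.
resource "aws_ecr_lifecycle_policy" "example" {
  repository = aws_ecr_repository.example.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire images beyond the newest 500"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 500
      }
      action = { type = "expire" }
    }]
  })
}
```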
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
ok, so it’s both for cost implications on busy repos and stability of in-use images
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
or did you mean in your first message that using lifecycle by age means you could still hit the hard quota in a repo on the AWS account level?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Yes, exactly - that using lifecycle by age means you could still hit the hard quota of the ECR
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
It used to be 1,000 I believe. They raised it to 10,000.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Er.. that’s the wrong limit
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
That’s not a lot of tags when you run automated builds in CI
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
gotcha! I’ve never hit that limit before and assumed that if you hit it while using a lifecycle-by-age policy then AWS would handle it automatically somehow. Thanks very much for the explanation!
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
> then AWS would handle it automatically somehow

At least not when we first encountered it.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
But things improve all the time, so if there’s a better way to accomplish the same outcome…. we’re open to it
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
TBH, it’s probably better to set it as a hard number of images rather than number of days, if only to prevent people from accidentally storing too many images in that timeframe and blowing their budget
![Dale avatar](https://secure.gravatar.com/avatar/081a9d27c39deb338378ae0c454ccb87.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
but I’m not sure if when you design the modules you do it with that kind of thing in mind or not
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Yes, you could use this for that too. More aggressively purge images in dev, for example.
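Varying the retention per environment, as suggested, could look like the following hypothetical sketch — the variable name and the counts are made up for illustration:

```hcl
# Hypothetical per-environment retention: purge dev far more
# aggressively than prod.
variable "environment" {
  type = string
}

locals {
  max_image_count = {
    dev  = 50  # CI churns through images quickly; keep few
    prod = 500 # keep a deep history for rollbacks
  }[var.environment]
}
```

Each repository's lifecycle policy would then use `local.max_image_count` as the `countNumber` in its count-based expiry rule.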
2024-05-11
![Marat Bakeev avatar](https://avatars.slack-edge.com/2024-04-17/6971325615781_bd39243db327455c2c2d_72.png)
Hey everyone, could anyone help me with the aws-team-roles module? :sweat_smile:
How can I change the name format for the roles that are generated by that module? For example, I’m getting a team role like this:
```
# aws_iam_role.default["admin"] will be created
+ resource "aws_iam_role" "default" {
    ...
  + name = "nsp-gbl-dns-admin"
```
But I’m trying to use the name format `namespace-tenant-environment-stage` - and when I run terraform in the org account, it wants to assume the role nsp-core-gbl-dns-terraform. And fails %)
I’ve found out that if I set `var.label_order` to:
- namespace
- tenant
- environment
- stage

then it works fine.
Is this the correct solution? Or am I trying to do something backwards?
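In an Atmos stack, that override might look like the following sketch — the component name and stack layout here are assumptions, not verified against the refarch:

```yaml
components:
  terraform:
    aws-team-roles:
      vars:
        label_order:
          - namespace
          - tenant
          - environment
          - stage
```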
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
This is a very opinionated root module, part of our refarch
2024-05-15
![Quentin BERTRAND avatar](https://avatars.slack-edge.com/2022-08-29/4028696878592_774493d7ba3d4c45009e_72.jpg)
With the `data` source lookup, `terraform plan` no longer works if the subnets don’t exist (which can happen when an entire infrastructure has to be created from scratch)
Would you have an idea for solving this problem?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Cloud Posse always separates infrastructure into its various components, to reduce blast radius, speed up plans, and in general avoid this entire category of problems.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
The life cycle of a VPC is entirely different from the life cycle of an EC2 Auto Scaling group.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Thus, they don’t belong in the same root module.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Of course, this is an opinionated design, and it’s not shared by everyone - but it’s probably why we didn’t encounter this as a problem.
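One way the separation described above avoids the missing-subnet plan failure: the downstream root module consumes the VPC root module's outputs rather than re-querying AWS with data sources. A minimal sketch, assuming a hypothetical S3 backend configuration and output names:

```hcl
# Hypothetical: the app root module reads the VPC root module's
# outputs via remote state instead of data-source lookups, so its
# plan only depends on the VPC state file, not on live resources.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "example-tfstate" # hypothetical bucket name
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  private_subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids
}
```

The VPC root module is applied first; everything downstream plans cleanly because it never queries subnets that may not exist yet.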
2024-05-21
![Zing avatar](https://secure.gravatar.com/avatar/acc8a8448f5566294450c6527388e44e.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
hey there, I’m looking at the aws-config module and I’m running into a few issues:
• I’m using an organization aggregator
• I’m using a central SNS topic and S3 bucket
• I see resources in my child accounts showing up in the aggregators for my central account
• I do not see configuration change events for my child accounts (configuration change timeline) in the central aggregator
• I do see configuration change events in the configuration timeline on the child accounts
• I do not see anything actually touching the central SNS topic. Is this expected? Am I not supposed to see the configuration timeline / change events in the central account? Should I see activity on the SNS topic?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Jeremy White (Cloud Posse)
![Jeremy White (Cloud Posse) avatar](https://avatars.slack-edge.com/2022-10-14/4236950492513_ceab13cebd77d26f2ef6_72.jpg)
From what I’ve seen, the organization deployment of AWS Config really just calls the APIs in the child accounts. There’s no real connection to the central account, and in fact, if you use aws-config in a child account, I believe both the organization-level and account-level AWS Config share the same API limits for total rules allowed and total conformance packs.
2024-05-26
![Marat Bakeev avatar](https://avatars.slack-edge.com/2024-04-17/6971325615781_bd39243db327455c2c2d_72.png)
Hi guys, there seems to be an issue with the VPC component - would it be possible to update the version of `dynamic-subnets` within it, so we can use ap-southeast-4 (Melbourne)?
Details are here - https://github.com/cloudposse/terraform-aws-components/issues/1047
Describe the Bug
The VPC component in this repo uses a subnets module version that doesn’t support the Melbourne opt-in region (ap-southeast-4).
vpc/main.tf#L142
The issue is that cloudposse/dynamic-subnets/aws version 2.3.0 uses cloudposse/utils version 1.1.0, which was released before Melbourne support was added.
cloudposse/dynamic-subnets/aws 2.4.0 uses cloudposse/utils version 1.3.0, which includes support for Melbourne.
(It was actually added in utils 1.2.0 - cloudposse/terraform-aws-utils#26)
It would be great if this component could be updated to use cloudposse/dynamic-subnets/aws 2.4.0 or higher.
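The requested change amounts to bumping the pinned version in the component, roughly as follows (surrounding arguments elided; the module label is the one referenced in the error output below):

```hcl
module "subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # 2.4.0 pulls in cloudposse/utils >= 1.3.0, which knows about
  # ap-southeast-4 (Melbourne); 2.3.0 pinned utils 1.1.0, which did not.
  version = "2.4.0"

  # ... existing arguments unchanged ...
}
```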
Expected Behavior
Deployment of a vpc works in ap-southeast-4.
Steps to Reproduce
Use the VPC component with these variables:
```yaml
# Variables for the component 'vpc' in the stack 'core-apse4-network':
availability_zones:
  - a
  - b
  - c
enabled: true
environment: apse4
map_public_ip_on_launch: false
max_subnet_count: 3
region: ap-southeast-4
stage: network
```
Run the stack and observe the error: `atmos terraform apply vpc -s core-apse4-network`
```
Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Invalid index
│
│   on .terraform/modules/subnets/main.tf line 71, in locals:
│   71:   subnet_az_abbreviations = [for az in local.subnet_availability_zones : local.az_abbreviation_map[az]]
│     ├────────────────
│     │ local.az_abbreviation_map is object with 271 attributes
│
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│
│   on .terraform/modules/subnets/main.tf line 71, in locals:
│   71:   subnet_az_abbreviations = [for az in local.subnet_availability_zones : local.az_abbreviation_map[az]]
│     ├────────────────
│     │ local.az_abbreviation_map is object with 271 attributes
│
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│
│   on .terraform/modules/subnets/main.tf line 71, in locals:
│   71:   subnet_az_abbreviations = [for az in local.subnet_availability_zones : local.az_abbreviation_map[az]]
│     ├────────────────
│     │ local.az_abbreviation_map is object with 271 attributes
│
│ The given key does not identify an element in this collection value.
╵

exit status 1
```