#terraform-aws-modules (2024-04)
Terraform Modules
Discussions related to https://github.com/terraform-aws-modules
Archive: https://archive.sweetops.com/terraform-aws-modules/
2024-04-03
how are folks handling permission sets that define permissions for teams with varying levels of access across multiple accounts?
for example, say we have a business intelligence team.
we create a business intelligence permission set and create that in the various target accounts, but that permission set should have SLIGHTLY different permissions in each account. I don’t know if this is a solvable “problem”. I think the cloudposse module for permission sets is nice, but I don’t think this pattern is possible?
You can deploy roles with the same name but adjusted permissions in each account and let the individual teams assume those roles via permission sets. They will always assume the same role name but will not get the same permissions.
I know this makes things “slightly” more complicated, but that’s the first thing that came to mind.
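A minimal sketch of that idea in plain Terraform, assuming one root module applied per target account; the `bi` role name, the `var.bi_permission_set_role_arn` trust principal, and `var.bi_policy_json` are all invented for illustration:

```hcl
# Applied once per target account. The role name "bi" is identical
# everywhere; only the attached policy document varies per account.
resource "aws_iam_role" "bi" {
  name = "bi"

  # Trust whatever principal the permission set resolves to in this account
  # (the variable is a placeholder for that ARN)
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = var.bi_permission_set_role_arn }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "bi" {
  name   = "bi-account-specific"
  role   = aws_iam_role.bi.id
  policy = var.bi_policy_json # differs per account
}
```

Users always run `aws sts assume-role` against the same role name, so the per-account differences stay invisible to them.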
I was thinking about doing that, but I wonder how the user experience would be…
so say I have 10 accounts
Would I deploy:
one permission set and 10 roles? 10 permission sets and 10 roles?
Is the assume-role action going to be seamless? Say they click on the console, will they see those 10 roles? Or will they have to assume a role from the permission sets? Also, don’t permission sets automatically create a role with the same name?
For advanced use-cases we use a hybrid approach. Check out our components for aws-teams and aws-team-roles. We then grant a permission set access to a role provisioned with these other root modules.
This has worked well for us together with atmos
oh interesting
do the child accounts need to trust anything other than the teams in the identity account?
@Dan Miller (Cloud Posse) or @Jeremy G (Cloud Posse) will have more insights
Our current architecture supports a few different options.
The old-school option is that a permission set gives access to what we call a “team” (implemented as an IAM Role) in the identity account. All the other accounts, configured via aws-team-roles, decide which teams get access to which roles in that account. I strongly recommend that you give the roles the exact same permissions in all accounts, otherwise it gets very hard to keep track of who has access to what. So perhaps you’d have roles bi_admin, bi_poweruser, bi_reader, etc., and in some accounts your bi team would have access to one and in some accounts a different one. However, if the difference is just in the Resource ARNs, you do not need to make separate roles for that. Also, if ALL BI users get the exact same access, then you might find it easier to manage just having a bi_team and one bi role in each account with tweaked permissions. It’s a “name your poison” kind of situation.
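To illustrate the per-account mapping, an atmos stack for one account might look roughly like this. The exact variable schema of aws-team-roles can differ between versions, and the stack path, team name, and policy ARNs are invented for the example:

```yaml
# stacks/orgs/acme/plat/prod/us-east-1.yaml (path illustrative)
components:
  terraform:
    aws-team-roles:
      vars:
        roles:
          bi_admin:
            enabled: true
            trusted_teams: ["bi"]  # the "bi" team role in the identity account
            role_policy_arns: ["arn:aws:iam::aws:policy/PowerUserAccess"]
          bi_reader:
            enabled: true
            trusted_teams: ["bi"]
            role_policy_arns: ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
```

Each account's stack file decides which of the identically named roles the bi team may assume there, which is the "same permissions everywhere, different grants per account" pattern described above.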
A new option, especially if the users do not need to use Terraform or EKS, is to just handle everything with Permission Sets. I still recommend a Permission Set per use case, but now you can create an AWS Identity Center Group that has access to the various permission sets in the various accounts, and just let users use the web console or aws CLI.
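As a sketch of that pure-Permission-Set route using plain AWS provider resources; the group name, managed policy, and `var.bi_account_ids` list are placeholders:

```hcl
data "aws_ssoadmin_instances" "this" {}

resource "aws_ssoadmin_permission_set" "bi" {
  name         = "BusinessIntelligence"
  instance_arn = tolist(data.aws_ssoadmin_instances.this.arns)[0]
}

resource "aws_ssoadmin_managed_policy_attachment" "bi" {
  instance_arn       = aws_ssoadmin_permission_set.bi.instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.bi.arn
  managed_policy_arn = "arn:aws:iam::aws:policy/job-function/DataScientist"
}

# Look up the pre-existing Identity Center group by display name
data "aws_identitystore_group" "bi" {
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]
  alternate_identifier {
    unique_attribute {
      attribute_path  = "DisplayName"
      attribute_value = "bi"
    }
  }
}

# Assign the group + permission set to each target account
resource "aws_ssoadmin_account_assignment" "bi" {
  for_each           = toset(var.bi_account_ids)
  instance_arn       = aws_ssoadmin_permission_set.bi.instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.bi.arn
  principal_type     = "GROUP"
  principal_id       = data.aws_identitystore_group.bi.group_id
  target_id          = each.value
  target_type        = "AWS_ACCOUNT"
}
```

Users in the group then pick the account/permission-set pair from the Identity Center portal, with no manual role assumption.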
There are hybrid options in between these 2 extremes. Picking the right balance of trade-offs is what Cloud Posse professional services is made to help with.
Good morning kind folks! I have a question about the use of context modules in Cloudposse’s AWS DMS modules. When I try to plan anything using examples right off the readme:
module "dms_replication_instance" {
  source = "cloudposse/dms/aws//modules/dms-replication-instance"
  # Cloud Posse recommends pinning every module to a specific version
  version = "0.2.0"

  # If `auto_minor_version_upgrade` is enabled, then we should omit the patch part
  # of the version or Terraform will try to revert the version upon detected drift
  engine_version = "3.4"

  replication_instance_class   = "dms.t2.small"
  allocated_storage            = 50
  apply_immediately            = true
  auto_minor_version_upgrade   = true
  allow_major_version_upgrade  = false
  multi_az                     = false
  publicly_accessible          = false
  preferred_maintenance_window = "sun:10:30-sun:14:30"
  vpc_security_group_ids       = [local.convox_instances_security_group_id, local.eks_security_group_id]
  subnet_ids                   = data.terraform_remote_state.common.outputs.vpc.convox.private_subnets

  context = module.this.context

  # depends_on = [
  #   # The required DMS roles must be present before replication instances can be provisioned
  #   module.dms_iam
  # ]
}
I get:
Error: Reference to undeclared module
on dms-migration.tf line 22, in module "dms_replication_instance":
22: context = module.this.context
No module call named "this" is declared in the root module.
If I remove the reference to context it will next complain about the content of replication_id, because it is composed of module.this.id, which seems to evaluate to null or an empty string.
This example (and every other module) uses our null-label module. If you don’t want to use context, then you’d need to pass any of the inputs for ID to create a non-empty identifier, for example name = "foo". But I’d recommend giving null-label a try. It’s pretty useful!
Here’s a great article on why it’s useful and how you can use it from one of our community members https://masterpoint.io/updates/terraform-null-label/
Thanks @Dan Miller (Cloud Posse) I was just starting to try that, but the references you provided will be very helpful!
plus we have a pre-defined, opinionated mixin that you can drop into your root module if you use some of the same identifiers as us. We use this to consistently set things like namespace, stage, environment, etc
https://github.com/cloudposse/terraform-null-label/blob/main/exports/context.tf
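For reference, once that context.tf mixin is copied into the root module it declares `module "this"`, so the README example works as written. A minimal sketch (the namespace/stage/name values are illustrative):

```hcl
# context.tf (copied verbatim from terraform-null-label/exports) declares
# `module "this"` plus variables such as namespace, environment, stage, name.
# Those can be set in a tfvars file, e.g.:
#   namespace = "acme"
#   stage     = "prod"
#   name      = "dms"

module "dms_replication_instance" {
  source  = "cloudposse/dms/aws//modules/dms-replication-instance"
  version = "0.2.0"

  # ... instance settings as in the README example ...

  # Resolves the "No module call named this" error, and module.this.id
  # becomes a non-empty identifier like "acme-prod-dms"
  context = module.this.context
}
```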
2024-04-08
Hi Team, we use the multi-az-subnets module and we have been getting argument is deprecated warnings:
│ Warning: Argument is deprecated
│
│   with module.stg.module.vpc.module.isolated_subnet.aws_eip.public,
│   on .terraform/modules/stg.vpc.isolated_subnet/public.tf line 119, in resource "aws_eip" "public":
│  119:   vpc = true
│
│ use domain attribute instead
│
│ (and 14 more similar warnings elsewhere)
It looks like this module is not maintained any more. I just wondered if anyone had any recommendations for similar subnet modules, or if there was a way to work around it. Thanks!
DEPRECATED (use cloudposse/terraform-aws-dynamic-subnets instead): Terraform module for multi-AZ public and private subnets provisioning
Yes, it was too much for us to maintain both modules
You can accomplish more or less the same thing with https://github.com/cloudposse/terraform-aws-dynamic-subnets
Terraform module for public and private subnets provisioning in existing VPC
It’s just more configurable.
You can still achieve multi-az subnets.
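For example, a dynamic-subnets invocation that spreads public and private subnets across AZs might look like this. Input names follow the module's 2.x series and the version pin is illustrative, so verify both against the module's README:

```hcl
module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.2" # pin to whichever version you vet

  # One public and one private subnet is created per AZ listed here
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

  vpc_id          = module.vpc.vpc_id
  igw_id          = [module.vpc.igw_id]
  ipv4_cidr_block = [module.vpc.vpc_cidr_block]

  nat_gateway_enabled = true

  context = module.this.context
}
```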
Aahhh, perfect, thanks for that. I’ll have a look into it
2024-04-18
Hi, it looks like modules/components for AWS SQS queue exist at two locations: terraform-aws-components/modules/sqs-queue/modules/terraform-aws-sqs-queue and terraform-aws-components/modules/sqs-queue (which slightly wraps the former, adding compatibility with the account-roles component). I’m wondering why there hasn’t been a root module published at cloudposse/terraform-aws-sqs-queue to manage SQS queue resources.
Currently the module lacks examples, which makes usage of variables like policy unclear.
It looks like we pulled in the sqs-queue component but never finished the submodule. Yes, terraform-aws-sqs-queue should have been published as an independent module, but I’m assuming we never got around to it.
But that module is simple enough that it may not warrant a module of its own. Everything in that submodule could be moved to the root module / component.
One reason to create a child module repo for this is to support dead-letter queues. I would like to see that personally. We’re building that for a customer now, so we (Kevin specifically) may be able to share our implementation once that wraps up.
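For anyone who needs the underlying pattern in the meantime: a dead-letter queue in plain Terraform is just a second queue plus a redrive_policy on the source queue. Queue names and the receive count are illustrative, and this is not Cloud Posse's implementation:

```hcl
resource "aws_sqs_queue" "dead_letter" {
  name = "orders-dlq"
}

resource "aws_sqs_queue" "orders" {
  name = "orders"

  # Messages received more than maxReceiveCount times without being
  # deleted are moved to the dead-letter queue
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dead_letter.arn
    maxReceiveCount     = 5
  })
}
```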