#refarch (2024-11)
Cloud Posse Reference Architecture
2024-11-05
We want to inform our community that starting next Tuesday (2024-11-12), we will begin migrating each component in the cloudposse/terraform-aws-components repository to a new GitHub organization. This change is essential to enhance the stability and usability of our components, including:
• Leveraging terratest automation for improved testing.
• Implementing semantic versioning to clearly communicate updates and breaking changes.
• Enabling a more modular structure, with a focus on maintaining individual components more effectively.
• Accelerating PR reviews and component contributions.
• Enabling automated Dependabot updates.
• And tons more!!
What to Expect
• Migration Timeline: The migration process will start next Tuesday and is anticipated to be completed by the end of the following week.
• Updated Documentation: We are actively updating our documentation and the cloudposse-component updater tool to facilitate a smooth transition.
• Maintenance Mode: After this migration, the cloudposse/terraform-aws-components repository will enter maintenance mode. We will no longer accept new contributions there, and all updates should be directed to the new individual component repositories.
• Future Archiving: In approximately six months, we plan to archive this repository and transfer it to the cloudposse-archives organization.
FAQ
• Does this affect our Terraform modules? No. We are not moving our Terraform modules.
We are committed to making this migration as seamless as possible, but if you have any questions or concerns, please post your comments in this issue. Your feedback is important to us, and we appreciate your support in this transition!
Thank you,
The Cloud Posse Team
2024-11-14
When deploying a new cluster, I wasn’t able to get node groups to join the cluster initially. It seems this is because the cluster endpoint is private and the vpc-cni addon isn’t deployed yet. By default we have addons_depends_on: true in our EKS catalog; however, that prevents the vpc-cni addon from being added before the nodes exist. Changing it prevents the workflow from completing successfully because coredns, etc. require a node.
What am I missing here?
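For context, the flag in question lives in the EKS catalog of an Atmos stack manifest. A minimal sketch of the relevant setting (the component path and surrounding structure here are illustrative, not the exact catalog):

```yaml
# Hypothetical excerpt of an Atmos EKS catalog entry.
components:
  terraform:
    eks/cluster:
      vars:
        # true  -> add-ons wait for managed node groups to exist.
        # With a private cluster endpoint this can deadlock:
        # nodes can't join without vpc-cni, and vpc-cni waits for nodes.
        addons_depends_on: true
```

One common way out is to exempt networking add-ons like vpc-cni from the dependency so they install before the node groups, while compute-dependent add-ons such as coredns still wait for a node.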
2024-11-15
Hi everyone,
Is it possible to somehow use reference architecture in one account?
Essentially, we need to set up just one prod account, without anything else, with just the VPC, IAM, and an EKS cluster.
2024-11-20
Ref https://aws.amazon.com/ru/blogs/networking-and-content-delivery/vpc-block-public-access/
Today, the AWS Well-Architected Framework describes a single account with a single VPC as an anti-pattern.
I recall that the refarch only required a single VPC per region per account. I know it’s technically possible in Atmos to create more than one VPC, but is one VPC per region per account still the standard, or is it better to create multiple VPCs per region per account now? If so, how do you recommend segmenting the VPCs?
@Dan Miller (Cloud Posse)
That’s a core design of refarch, so it’s definitely worth discussing. I’ll bring it up to the team. Thanks for sharing!
Following up on this topic, and yes, we are still in agreement with their statement. AWS is saying here that you should put workloads in separate VPCs. We absolutely do that. However, in the reference architecture we take it a step further. Not only are workloads in separate VPCs, but they are in separate accounts as well! That way we can make use of IAM-level boundaries in addition to network-level boundaries.
Specifically the Well-Architected framework says this:
Today, the AWS Well-Architected Framework describes a single account with a single VPC as an anti-pattern
Our interpretation emphasizes “single account with a single VPC.” We do not have a single account by any means.
cc @Erik Osterman (Cloud Posse)
Oh, I see what you mean.
I suppose it also depends on how you define a workload
Thanks for commenting and clarifying this
Thanks, @Dan Miller (Cloud Posse) I forgot to follow up
Yes, I think arguably we have a better recommendation: segmenting workloads by account and VPC provides IAM-level boundaries, not just network-level boundaries.
In a recent implementation, for example, the customer elected to create an OU per SDLC, then a single network egress point per OU. Then multiple accounts, each with a dedicated workload with a private VPC.
Our Refarch is not rigid. It advocates patterns and symmetry. We’re not against multiple VPCs per account, but it should be symmetrical and not overly reliant on complex IAM policies. We find segmenting workloads by account simplifies IAM and provides network isolation.
Thanks, Erik, that is very helpful.
2024-11-21
Looks like I had deployed block-all behavior to the Route53 DNS resolver firewall ONLY in our dev stage. Whoops.
2024-11-22
Behind the scenes, account-map will map the root_account_aws_name to root_account_account_name for the object used in every other component’s providers.tf. So after correcting that value, make sure to reapply account-map and then try account-settings again.
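As a command sketch, the reapply sequence looks roughly like this (the stack name core-gbl-root is a placeholder; substitute the stack your account-map is deployed to):

```
# Reapply account-map so the corrected name propagates to the
# outputs consumed by every other component's providers.tf
atmos terraform apply account-map -s core-gbl-root

# Then retry the component that failed
atmos terraform apply account-settings -s core-gbl-root
```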