#refarch (2024-11)

Cloud Posse Reference Architecture

2024-11-05

Erik Osterman (Cloud Posse)
#1177 :loudspeaker: Upcoming Migration of Components to a New GitHub Organization (CODE FREEZE 11/12 - 11/17)

We want to inform our community that starting next Tuesday (2024-11-12), we will begin migrating each component in this repository (cloudposse/terraform-aws-components) to a new GitHub organization. This change is essential to enhance the stability and usability of our components, including:

• Leveraging terratest automation for improved testing.
• Implementing semantic versioning to clearly communicate updates and breaking changes.
• Enabling a more modular structure, with a focus on maintaining individual components more effectively.
• Accelerating PR reviews and component contributions.
• Automating dependabot updates.
• And tons more!

What to Expect

• Migration Timeline: The migration process will start next Tuesday and is anticipated to be completed by the end of the following week.
• Updated Documentation: We are actively updating our documentation and the cloudposse-component updater tool to facilitate a smooth transition.
• Maintenance Mode: After this migration, the cloudposse/terraform-aws-components repository will enter maintenance mode. We will no longer accept new contributions here, and all updates should be directed to the new individual component repositories.
• Future Archiving: In approximately six months, we plan to archive this repository and transfer it to the cloudposse-archives organization.

FAQ

• Does this affect our Terraform modules? No. We are not moving our Terraform modules.


We are committed to making this migration as seamless as possible, but if you have any questions or concerns, please post your comments in this issue. Your feedback is important to us, and we appreciate your support in this transition!

Thank you,

The Cloud Posse Team

2024-11-14

github3
09:09:49 PM

When deploying a new cluster, I wasn’t able to get node groups to join the cluster initially. It seems this is because the cluster endpoint is private and the vpc-cni addon isn’t deployed yet. By default we have addons_depends_on: true in our EKS catalog; however, that prevents the vpc-cni from being added. Changing it will prevent the workflow from completing successfully because coredns, etc., require a node.

What am I missing here?
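For reference, the setting in question looks roughly like this in an atmos catalog; a minimal sketch assuming the standard cloudposse eks/cluster component (everything else here is illustrative):

```yaml
# Hypothetical excerpt of an EKS catalog stanza (e.g. stacks/catalog/eks/cluster.yaml).
# The variable name follows the cloudposse eks/cluster component; other details
# are illustrative.
components:
  terraform:
    eks/cluster:
      vars:
        # true: addons (including vpc-cni) wait on node groups, but nodes can't
        # join a private-endpoint cluster until vpc-cni is installed.
        # false: addons install first, but the apply then blocks on coredns,
        # which needs a node to schedule onto.
        addons_depends_on: true
```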


2024-11-15

github3
08:22:15 AM

Hi everyone,

Is it possible to somehow use the reference architecture in just one account?
Essentially, we need to set up a single prod account, without anything else: just the VPC, IAM, and an EKS cluster.


2024-11-20

RB

Ref https://aws.amazon.com/ru/blogs/networking-and-content-delivery/vpc-block-public-access/

Today, the AWS Well-Architected Framework describes a single account with a single VPC as an anti-pattern.

I recall that the refarch only required a single VPC per region per account. I know it’s technically possible in atmos to create more than one VPC, but is one VPC per region per account still the standard, or is it better to create multiple VPCs per region per account now? If so, how do you recommend segmenting the VPCs?

Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse)

That’s a core design of refarch, so it’s definitely worth discussing. I’ll bring it up to the team. Thanks for sharing!

Dan Miller (Cloud Posse)

Following up on this topic: yes, we are still in agreement with their statement. AWS is saying here that you should put workloads in separate VPCs. We absolutely do that. However, in the reference architecture we take it a step further: not only are workloads in separate VPCs, they are in separate accounts as well! That way we can make use of IAM-level boundaries in addition to network-level boundaries.

Specifically, the Well-Architected Framework says this:
Today, the AWS Well-Architected Framework describes a single account with a single VPC as an anti-pattern
Our interpretation emphasizes “single account with a single VPC”. We do not have a single account by any means.

cc @Erik Osterman (Cloud Posse)

RB

Oh, I see what you mean.

I suppose it also depends on how you define a workload.

RB

Thanks for commenting and clarifying this

Erik Osterman (Cloud Posse)

Thanks, @Dan Miller (Cloud Posse), I forgot to follow up.

Erik Osterman (Cloud Posse)

Yes, I think arguably we have a better recommendation: segmenting workloads by account and VPC provides IAM-level boundaries, not just network-level boundaries.

Erik Osterman (Cloud Posse)

In a recent implementation, for example, the customer elected to create an OU per SDLC stage, with a single network egress point per OU, and then multiple accounts, each hosting a dedicated workload in a private VPC.
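A rough illustration of that layout, with purely hypothetical OU and account names:

```yaml
# Hypothetical OU/account layout illustrating the pattern described above
# (all names are invented for illustration).
ous:
  - name: dev                # one OU per SDLC stage
    accounts:
      - network-dev          # single network egress point for the OU
      - app1-dev             # dedicated workload account with a private VPC
      - app2-dev
  - name: prod
    accounts:
      - network-prod
      - app1-prod
      - app2-prod
```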

Erik Osterman (Cloud Posse)

Our refarch is not rigid. It advocates patterns and symmetry. We’re not against multiple VPCs per account, but the layout should be symmetrical and not overly reliant on complex IAM policies. We find that segmenting workloads by account simplifies IAM and provides network isolation.

RB

Thanks Erik, that is very helpful.

2024-11-21

github3
03:54:22 PM

Looks like I had deployed block-all behavior to the Route53 DNS resolver firewall ONLY in our dev stage. Whoops.


2024-11-22

github3
05:15:05 AM

@security-penguin did this resolve your question?

github3
05:15:11 AM

Behind the scenes, account-map will map root_account_aws_name to root_account_account_name for the object used in every other component’s providers.tf. So after correcting that value, make sure to reapply account-map and then try account-settings again.
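That re-apply sequence could be captured as an atmos workflow; a sketch assuming standard atmos workflow syntax, with an illustrative stack name:

```yaml
# Hypothetical workflow (e.g. stacks/workflows/account-map.yaml); the
# core-gbl-root stack name is illustrative and depends on your naming scheme.
workflows:
  reapply-account-map:
    description: Re-apply account-map, then retry account-settings
    steps:
      - command: terraform apply account-map -s core-gbl-root
      - command: terraform apply account-settings -s core-gbl-root
```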
