#refarch (2024-02)
Cloud Posse Reference Architecture
2024-02-12
The CloudPosse style seems to prefer “singleton” components (e.g. the lambda component deploys a single lambda, sns-topic deploys one topic). The exception to this is ssm-parameters, I guess because one terraform workspace per parameter would be painful to access via remote-state and slow to deploy.
I am considering creating “plural” component variants to maximise DRYness and reduce workspace sprawl for a few deployments. For example, an sns-topics component:
# format TBC, not convinced the `try` is required if the "default/global" vars have null or empty defaults
module "sns_topic" {
  source  = "cloudposse/sns-topic/aws"
  version = "0.20.1"

  for_each = var.sns_topics

  subscribers = try(coalesce(each.value.subscribers, var.subscribers), {})
  ...
  kms_master_key_id  = try(coalesce(each.value.kms_master_key_id, var.kms_master_key_id), null)
  encryption_enabled = try(coalesce(each.value.encryption_enabled, var.encryption_enabled), null)
  ...
  context = module.this.context
}
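For illustration, the var.sns_topics shape that block assumes might be declared something like this (the optional() attributes are my guess at how the per-topic overrides would look):

variable "sns_topics" {
  description = "Map of SNS topics to provision; per-topic values override component-level defaults"
  type = map(object({
    subscribers        = optional(map(any))
    kms_master_key_id  = optional(string)
    encryption_enabled = optional(bool)
  }))
  default = {}
}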
This *seems* like an OK idea to me, assuming that the resources have similar lifecycles; and in cases where you want to access outputs downstream in remote-state, it would make that YAML cleaner. I notice that CloudPosse don’t seem to ever do this in the core component library; perhaps there is a very good reason why not… Is there a downside to this approach?
@Matthew Reggler great to see you poke your head in! It’s been a while.
We have a MASSIVE docs update coming that introduces our best practices on this.
what
• Update readme to be more consistent with atmos.tools
• Fix links
• Add features/benefits
• Add use-cases
• Add glossary
why
• Better explain what atmos does and why
references
• grnet/docusaurus-terminology#28 (cannot use markdown links in terms)
• grnet/docusaurus-terminology#29 (tooltips incompatible with slugs)
Of course, there’s never a steadfast rule other than the laws of physics. Bend them as you need.
The “singleton” component is an interesting way of referring to them. I don’t think of them as singletons in that sense, since they are used any time (multiple times) you need that particular thing. Calling it a singleton to me sounds more like moving the factory inside the component, which we recognize (after having done this for years now) is bad.
We still have some public components that do this bad thing I discuss. Components like accounts should be redesigned to only provision an account, not a factory of accounts.
Thanks for the reply, and yes, still very much around. Agree with you on my misuse of singleton; not quite the right concept for what is encapsulated within a component, really. I see your point about avoiding factories within a component, makes a lot of sense.
I suppose I’m slightly too attached to the idea of hooking groups of connected resources around a loop of some kind, and a terraform for_each is slightly more digestible than something arcane and terrifying in a Go template.
I can see cases where I might bend the rules (e.g. a common set of sns topics + subscribers deployed per env for chatops/alerting, defined as a (meta-)component instead of a stack to batch the ARNs in one state file), but I think otherwise you’ve convinced me there. Thanks!
for_each is slightly more digestible than something arcane and terrifying in a Go template
Agree, if this is the result, we are not advocating using Go templating over doing it in HCL.
We don’t want to replace one bad thing with an even worse one.
Go templating in stacks is an escape hatch.
So in refarch v2, we’ll be doubling down on this pattern (not the go templating!!!)
but I want to make sure I understand why you need this loop over subscribers.
subscribers = try(coalesce(each.value.subscribers, var.subscribers), {})
...
kms_master_key_id = try(coalesce(each.value.kms_master_key_id, var.kms_master_key_id), null)
encryption_enabled = try(coalesce(each.value.encryption_enabled, var.encryption_enabled), null)
e.g. the pattern we want to adhere to is: each subscriber subscribes as part of getting provisioned.
Put another way. Imagine one component provisions a security group. Then the cluster component adds its rules to it. The database component adds its rules to it. The VPN component adds its rules to it.
Instead of: provision the cluster component, database component, vpn component. Then provision the security group component with a list of other components to permit.
Another example. Provision the OU component. Then provision an account component for each account, giving it a reference by name to the OU.
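To make the security group example concrete, a minimal sketch of the cluster component’s side (the "security-group" component name, the security_group_id output, and the port/CIDR values are all placeholders):

# Inside the (hypothetical) cluster component: look up the shared security
# group provisioned by a separate "security-group" component...
module "security_group" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component = "security-group"
  context   = module.this.context
}

# ...and attach this component's own rules to it, rather than having the
# security group component hold a list of everything to permit.
resource "aws_security_group_rule" "cluster_ingress" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = [var.allowed_cidr] # placeholder variable
  security_group_id = module.security_group.outputs.security_group_id
}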
To root this in actual context, I have two examples:
- we have a pre-SweetOps “notifications module” that deploys a series of sns-topics along with the env-specific Teams channels/PagerDuty endpoints they communicate with; grouping this up as a single component allows the email/service endpoints for each subscriber to be injectable as a Go template string, and serves to make the component appear consistent (for good or ill) with our previous approach. (A bad reason for using a factory, I’d guess)
- we have a collection of lambdas whose ARNs are all fed into a Step Function to define the source for various steps; having these as one component allows for a single remote state lookup in the Step Function’s component, as sketched below. (A good? reason for using a factory)
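For the second case, the single lookup I have in mind is something like this (the "lambdas" component name and the lambda_arns output are placeholders for whatever the plural component actually exposes):

# In the Step Function's component: one remote-state read returns every
# lambda ARN, because the "lambdas" component is plural.
module "lambdas" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component = "lambdas"
  context   = module.this.context
}

locals {
  # e.g. { "ingest" = "arn:aws:lambda:...", "transform" = "arn:aws:lambda:..." }
  lambda_arns = module.lambdas.outputs.lambda_arns
}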
As for the pattern of using try(coalesce(<local>, <global>), <default>), that is my attempt to be maximally transparent and produce a maximally configurable component, in alignment with the general ethos of SweetOps. I accept that it is probably unnecessary in this case.
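For what it’s worth, the try() does earn its keep whenever both the local and global values can be null, since coalesce() errors if every argument is null. The resolution order, sketched as comments:

# Resolution order for try(coalesce(<local>, <global>), <default>):
#   coalesce("topic-key", "global-key") -> "topic-key"  (local override wins)
#   coalesce(null, "global-key")        -> "global-key" (falls back to global)
#   coalesce(null, null)                -> error        (coalesce rejects all-null)
#   try(coalesce(null, null), null)     -> null         (try supplies the default)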
2024-02-14
DynamoDB: Add support for the import_table block
https://github.com/cloudposse/terraform-aws-dynamodb/pull/120
A preemptive PR for refarch: https://github.com/cloudposse/terraform-aws-components/pull/981
@Dan Miller (Cloud Posse) is the terratest using an old TF version?
Keyword “optional” is not a valid type constructor.
https://github.com/cloudposse/actions/actions/runs/7905463589/job/21578153852#step:9:413
this failed, but I can run it locally with no prob:
https://github.com/cloudposse/actions/actions/runs/7905463589/job/21578153852
oh, yes. https://github.com/cloudposse/terraform-aws-dynamodb/blob/main/versions.tf#L2
what’s the approach to handling TF version updates?
required_version = ">= 0.13.0"
ah yeah it’s the Terraform version. Set versions to required_version = ">= 1.0" and then add the terraform/1.0 label to the PR
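i.e. something like this in versions.tf (the aws provider block is illustrative; the actual fix is just raising the floor from ">= 0.13.0" so CI doesn’t run a Terraform old enough to choke on optional()):

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0" # illustrative constraint
    }
  }
}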
I already pushed the versions
Done
…the label, that is
approved and merged
sweet
https://github.com/cloudposse/terraform-aws-components/pull/981 is ready now, but it is pending a run (unsure how that gets triggered)
what
• Added DynamoDB import_table support
why
• This adds support for using S3 data to import into a new table
references
• Depends on cloudposse/terraform-aws-dynamodb#120
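For reference, hypothetical usage of the new input (assuming the component passes the upstream module’s import_table variable through unchanged; the bucket and prefix are placeholders):

# Import the initial table contents from an S3 export instead of an empty table
import_table = {
  input_compression_type = "GZIP"
  input_format           = "DYNAMODB_JSON"

  s3_bucket_source = {
    bucket     = "my-dynamodb-export-bucket" # placeholder
    key_prefix = "exports/"
  }
}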
I’m not sure what the deal is with run-pre-commit-checks-and-autocommit-changes. Looks like it’s a required check we’ve added to the Organization but may be broken. I pinged the team about it
resolved and merged