#atmos (2022-08)
2022-08-08
I have set up catalog config as shown in the s3-bucket component README.
In the actual stack file (stacks/org/prod/us-east-1.yaml), should something like this work?
(Naming replaced to match README)
components:
  terraform:
    # Temporary dev bucket in prod account
    template-bucket-dev:
      metadata:
        component: s3-bucket
        inherits:
          - template-bucket
      vars:
        stage: dev
        privileged_principal_arns:
          - arn:aws:iam::123456789123:role/eg-dev-backend-task:
              - ""
    template-bucket:
      vars:
        privileged_principal_arns:
          - arn:aws:iam::923456789123:role/eg-prod-backend-task:
              - ""
This works (as expected):
atmos terraform plan template-bucket --stack prod
This gets the error below:
atmos terraform plan template-bucket-dev --stack prod
Searched all stack files, but could not find config for the component 'template-bucket-dev' in the stack 'prod'.
Check that all variables in the stack name pattern '{stage}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
atmos.yaml:
base_path: ""
components:
terraform:
base_path: "components"
auto_generate_backend_file: true
stacks:
base_path: "stacks"
included_paths:
- "org/**/*"
name_pattern: "{stage}"
excluded_paths:
- "catalog/**/*"
- "**/_defaults.yaml"
logs:
verbose: true
colors: true
vars:
  stage: dev
this breaks it
the stack YAML file is for prod
you should not override the context vars per component since atmos uses the context vars to find the component in the stack
and here
terraform:
  # Temporary dev bucket in prod account
  template-bucket-dev:
    metadata:
      component: s3-bucket
      inherits:
        - template-bucket
    vars:
      stage: dev
ah yes
Still getting used to that, thanks!
the stage is already dev, so the command atmos terraform plan template-bucket-dev --stack prod can't find it anymore
everything for stage dev should go to stacks/org/dev/us-east-1.yaml in your case
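e.g. something like this (just a rough sketch using the names from your example, and assuming template-bucket is also defined or imported in the dev stack):
# stacks/org/dev/us-east-1.yaml (sketch)
vars:
  stage: dev
components:
  terraform:
    template-bucket-dev:
      metadata:
        component: s3-bucket
        inherits:
          - template-bucket
      vars:
        privileged_principal_arns:
          - arn:aws:iam::123456789123:role/eg-dev-backend-task:
              - ""
then atmos terraform plan template-bucket-dev --stack dev should find it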
ok I see
yeah, this is a weird case because we’re migrating from a single account to multi-account but we need to apply this change in the single account - so it’s basically a hack I’m trying to do, within our new stack structure
i.e. we’re mid-migration
the YAML file names don't matter, the folder structure does not matter, you can name the files anything and use any level of subfolders
what matters is the context vars (tenant, environment, stage) inside the files - atmos uses them to find the component in the stack
I have changed it to:
components:
  terraform:
    # Temporary dev bucket in prod account
    template-bucket-dev:
      metadata:
        component: s3-bucket
        inherits:
          - template-bucket
      vars:
        attributes:
          - dev
        privileged_principal_arns:
          - arn:aws:iam::123456789123:role/eg-dev-backend-task:
              - ""
and that should get the job done for now
ok I see - thanks for the clarification
I think it will stick in my understanding finally
this is the key https://sweetops.slack.com/archives/C031919U8A0/p1660010904513129?thread_ts=1660009575.038859&cid=C031919U8A0
the YAML file names don't matter, the folder structure does not matter, you can name the files anything and use any level of subfolders
the bonus is, you can rename the files (file names are for humans), move the files to any subfolder, and it should continue working
That does seem like it will result in overall more stability, which is great
btw, here
included_paths:
  - "org/**/*"
name_pattern: "{stage}"
excluded_paths:
  - "catalog/**/*"
  - "**/_defaults.yaml"
you don't need - "catalog/**/*" b/c the included_paths only include "org/**/*", and you exclude "**/_defaults.yaml", which matches any _defaults files in org/…/…
ok, I think I picked that up from an example somewhere.. i assumed it meant it wouldn’t process any of those files when scanning for config unless they are included by others..
Is that what excluded_paths does?
included_paths include folders to scan
excluded_paths exclude some folders/files from the included ones
"catalog/**/*" will never be scanned since it's not included in "org/**/*"
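so the stacks section of atmos.yaml could be trimmed to just this (sketch, based on your config above):
stacks:
  base_path: "stacks"
  included_paths:
    - "org/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"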
ah I see
the latest atmos example uses the latest stack structure here https://github.com/cloudposse/atmos/tree/master/examples/complete/stacks
Thank you - i’ll review that again
Is the actual purpose of excluded_paths to reduce scanning effort or to make sure the catalog entries aren’t actually ‘executed’ ?
1) _defaults.yaml files are not the actual stacks, they are just imported (like files from catalog), so we don't need to scan all of them to find the component in the stack
2) we need to give atmos only the top-level stack files to scan, otherwise it will scan some YAML files that are just meant for import
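e.g. roughly like this (a sketch; import paths assumed to be relative to the stacks folder, without the .yaml extension):
# org/prod/_defaults.yaml (not a stack by itself, only imported)
vars:
  stage: prod

# org/prod/us-east-1.yaml (a top-level stack file that atmos scans)
import:
  - org/prod/_defaults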
makes sense, thanks
btw, these commands will help you with the stack configs and finding misconfigurations
atmos validate stacks
atmos describe stacks
atmos describe component xxx -s yyy
oh nice - i’ve never seen this site!
it’s not done yet, but the list of the CLI commands is complete
also see these https://github.com/cloudposse/atmos/pull/146
what
• Improve error handling and error messages
• Add atmos validate stacks command
why
• Check and validate all YAML files in the stacks folder
• Detect invalid YAML and print the file names and the line numbers
test
atmos validate stacks
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-1.yaml'
yaml: line 15: found unknown directive name
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-2.yaml'
yaml: line 16: could not find expected ':'
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-3.yaml'
yaml: line 13: did not find expected key
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-4.yaml'
yaml: block sequence entries are not allowed in this context
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-5.yaml'
yaml: mapping values are not allowed in this context
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-6.yaml'
yaml: line 2: block sequence entries are not allowed in this context
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-7.yaml'
yaml: line 4: could not find expected ':'
what
• Add atmos describe stacks command
• Allow writing the result to a file by using the --file command-line flag
• Allow formatting the result as YAML or JSON by using the --format command-line flag
• Allow filtering of the result by using the command-line flags: stack, component-types, components, sections
• Available component sections: backend, backend_type, deps, env, inheritance, metadata, remote_state_backend, remote_state_backend_type, settings, vars
why
• Command to show stack configs and all the components in the stacks
• Slice and dice the stack config to show different information about stacks and components
usage
atmos describe stacks
atmos describe stacks --component-types=helmfile
atmos describe stacks --component-types=terraform,helmfile
atmos describe stacks --components=infra/vpc
atmos describe stacks --components=echo-server
atmos describe stacks --components=echo-server,infra/vpc
atmos describe stacks --components=echo-server,infra/vpc --sections=vars
atmos describe stacks --components=echo-server,infra/vpc --sections=vars,settings
atmos describe stacks --components=test/test-component-override-3 --sections=inheritance
atmos describe stacks --components=test/test-component-override-3 --sections=component
atmos describe stacks --components=test/test-component-override-3 --sections=deps
atmos describe stacks --components=test/test-component-override-3 --sections=vars,settings --file=stacks.yaml
atmos describe stacks --components=test/test-component-override-3 --sections=vars,settings --format=json --file=stacks.json
atmos describe stacks --components=test/test-component-override-3 --sections=deps,vars -s=tenant2/ue2/staging
tests
Show all stacks with all the components with all the component sections (Warning: this will dump ALL YAML config for all components for all stacks to the console)
atmos describe stacks
......
tenant2/ue2/staging:
  components:
    terraform:
      test/test-component-override-3:
        backend:
          acl: bucket-owner-full-control
          bucket: eg-ue2-root-tfstate
          dynamodb_table: eg-ue2-root-tfstate-lock
          encrypt: true
          key: terraform.tfstate
          region: us-east-2
          role_arn: null
          workspace_key_prefix: test-test-component
        backend_type: s3
        command: terraform
        component: test/test-component
        deps:
          - catalog/terraform/services/service-1
          - catalog/terraform/services/service-2
          - catalog/terraform/test-component
          - catalog/terraform/test-component-override-3
          - globals/globals
          - globals/tenant2-globals
          - globals/ue2-globals
          - tenant2/ue2/staging
        env:
          TEST_ENV_VAR1: val1-override-3
          TEST_ENV_VAR2: val2-override-3
          TEST_ENV_VAR3: val3-override-3
          TEST_ENV_VAR4: val4-override-3
        inheritance:
          - mixin/test-2
          - mixin/test-1
          - test/test-component-override-2
          - test/test-component-override
          - test/test-component
        metadata:
          component: test/test-component
          inherits:
            - test/test-component-override
            - test/test-component-override-2
            - mixin/test-1
            - mixin/test-2
          terraform_workspace: test-component-override-3-workspace
          type: real
        remote_state_backend:
          acl: bucket-owner-full-control
          bucket: eg-ue2-root-tfstate
          dynamodb_table: eg-ue2-root-tfstate-lock
          encrypt: true
          key: terraform.tfstate
          region: us-east-2
          role_arn: arn:aws:iam::123456789012:role/eg-gbl-root-terraform
          workspace_key_prefix: test-test-component
        remote_state_backend_type: s3
        settings:
          spacelift:
            stack_name_pattern: '{tenant}-{environment}-{stage}-new-component'
            workspace_enabled: false
        vars:
          enabled: true
          environment: ue2
          namespace: eg
          region: us-east-2
          service_1_list:
            - 5
            - 6
            - 7
          service_1_map:
            a: 1
            b: 6
            c: 7
            d: 8
          service_1_name: mixin-2
          service_2_list:
            - 4
            - 5
            - 6
          service_2_map:
            a: 4
            b: 5
            c: 6
          service_2_name: service-2-override-2
          stage: staging
          tenant: tenant2
....
Show only stacks with terraform components
atmos describe stacks --component-types=terraform
Show only stacks with helmfile components
atmos describe stacks --component-types=helmfile
Show only a specific stack with all the components with all the component sections
atmos describe stacks -s=tenant2/ue2/staging
Show only the stacks where a specific component is configured (with all component sections)
atmos describe stacks --components=infra/vpc
Show only the stacks where the specific components are configured (with all component sections)
atmos describe stacks --components=echo-server,infra/vpc
Show only the specific sections for the components in all stacks
atmos describe stacks --components=echo-server,infra/vpc --sections=vars,settings
atmos describe stacks --components=test/test-component-override-3 --sections=inheritance
atmos describe stacks --components=test/test-component-override-3 --sections=component
atmos describe stacks --components=test/test-component-override-3 --sections=deps
Write the result to a file (in YAML format)
atmos describe stacks --sections=vars,settings --file=stacks.yaml
Write the result to a file (in JSON format)
atmos describe stacks --sections=vars,settings --format=json --file=stacks.json
Show all configured stacks (by specifying a non-existent component in the filter)
atmos describe stacks --components=none
tenant1/ue2/dev: {}
tenant1/ue2/prod: {}
tenant1/ue2/staging: {}
tenant2/ue2/dev: {}
tenant2/ue2/prod: {}
tenant2/ue2/staging: {}
Show all components in all stacks with just the component names (by specifying a non-existent section in the filter)
atmos describe stacks --sections=none
tenant1/ue2/dev:
  components:
    helmfile:
      echo-server: {}
      infra/infra-server: {}
      infra/infra-server-override: {}
    terraform:
      infra/vpc: {}
      mixin/test-1: {}
      mixin/test-2: {}
      test/test-component: {}
      test/test-component-override: {}
      test/test-component-override-2: {}
      test/test-component-override-3: {}
      top-level-component1: {}
tenant1/ue2/prod:
  components:
    helmfile:
      echo-server: {}
      infra/infra-server: {}
      infra/infra-server-override: {}
    terraform:
      infra/vpc: {}
      mixin/test-1: {}
      mixin/test-2: {}
      test/test-component: {}
      test/test-component-override: {}
      test/test-component-override-2: {}
      test/test-component-override-3: {}
      top-level-component1: {}
tenant1/ue2/staging:
  components:
    helmfile:
      echo-server: {}
      infra/infra-server: {}
      infra/infra-server-override: {}
    terraform:
      infra/vpc: {}
      mixin/test-1: {}
      mixin/test-2: {}
      test/test-component: {}
      test/test-component-override: {}
      test/test-component-override-2: {}
      test/test-component-override-3: {}
      top-level-component1: {}
tenant2/ue2/dev:
  components:
    helmfile:
      echo-server: {}
      infra/infra-server: {}
      infra/infra-server-override: {}
    terraform:
      infra/vpc: {}
      mixin/test-1: {}
      mixin/test-2: {}
      test/test-component: {}
      test/test-component-override: {}
      test/test-component-override-2: {}
      test/test-component-override-3: {}
      top-level-component1: {}
tenant2/ue2/prod:
  components:
    helmfile:
      echo-server: {}
      infra/infra-server: {}
      infra/infra-server-override: {}
    terraform:
      infra/vpc: {}
…
thanks for all that. We’re definitely at a productive level with atmos. Just some minor things like above, due to my knowledge gaps.
also, you can define custom atmos commands (they will be shown in atmos help) https://github.com/cloudposse/atmos/blob/master/atmos.yaml#L66
# Custom CLI commands
what
• Add ATMOS_CLI_CONFIG_PATH ENV var
• Detect more YAML stack misconfigurations
• Add functionality to define atmos custom CLI commands
why
• ATMOS_CLI_CONFIG_PATH ENV var allows specifying the location of atmos.yaml CLI config file. This is useful for CI/CD environments (e.g. Spacelift) where an infrastructure repository gets loaded into a custom path and atmos.yaml is not in the locations where atmos expects to find it (no need to copy atmos.yaml into /usr/local/etc/atmos/atmos.yaml)
• Detect more YAML stack misconfigurations, e.g. when the same tenant/environment/stage is defined in more than one top-level YAML stack config file (directly or via imports).
For example, if the same `var.tenant = tenant1` is specified for `tenant1-ue2-dev` and `tenant2-ue2-dev` stacks, the command `atmos describe component test/test-component-override -s tenant1-ue2-dev` will throw this error
Searching for stack config where the component 'test/test-component-override' is defined
Found config for the component 'test/test-component-override' for the stack 'tenant1-ue2-dev' in the file 'tenant1/ue2/dev'
Found config for the component 'test/test-component-override' for the stack 'tenant1-ue2-dev' in the file 'tenant2/ue2/dev'
Found duplicate config for the component 'test/test-component-override' for the stack 'tenant1-ue2-dev' in the files: tenant1/ue2/dev, tenant2/ue2/dev.
Check that all context variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the files and not duplicated.
Check that imports are valid.
• Allow extending atmos with custom commands. Custom commands can be defined in the atmos.yaml CLI config file. Custom commands support subcommands at any level (e.g. atmos my-command subcommand1 subcommand2 argument1 argument2 flag1 flag2)
# Custom CLI commands
commands:
  - name: tf
    description: Execute terraform commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # `valueCommand` is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute terraform commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: play
    description: This command plays games
    steps:
      - echo Playing...
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Saying Hello world...
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        steps:
          - echo Playing ping-pong...
          - echo pong
Custom commands support Go templates and ENV vars in command steps, and Go templates in ENV var values, and also allow specifying an external executable to be called to get the value for an ENV var.
They are automatically added to atmos help:
Available Commands:
aws Execute 'aws' commands
completion Generate the autocompletion script for the specified shell
describe Execute 'describe' commands
helmfile Execute 'helmfile' commands
help Help about any command
play This command plays games
terraform Execute 'terraform' commands
tf Execute terraform commands
validate Execute 'validate' commands
vendor Execute 'vendor' commands
version Print the CLI version
workflow Execute a workflow
Custom commands test
atmos play ping
Executing command:
/bin/echo Playing ping-pong...
Playing ping-pong...
Executing command:
/bin/echo pong
pong
atmos play hello
Executing command:
/bin/echo Saying Hello world...
Saying Hello world...
Executing command:
/bin/echo Hello world
Hello world
atmos terraform provision test/test-component-override -s tenant1-ue2-dev
Using ENV vars:
ATMOS_COMPONENT=test/test-component-override
ATMOS_STACK=tenant1-ue2-dev
Executing command:
/usr/local/bin/atmos terraform plan test/test-component-override -s tenant1-ue2-dev
....
Executing command:
/usr/local/bin/atmos terraform apply test/test-component-override -s tenant1-ue2-dev
atmos tf plan test/test-component-override -s tenant1-ue2-dev
# This command gets the value for the ENV var by calling an external executable
Executing command:
/bin/echo ENV_VAR_2_value
Executing command:
/usr/local/bin/atmos terraform plan test/test-component-override -s tenant1-ue2-dev
references
• YAML interface inspired by ahoy-cli and choria-io’s appbuilder. See discussion thread.
2022-08-20
v1.4.26
what
• Update versions
• Fix handling of partial stacks definitions
• Improve error handling
• Add an example of partial stacks definition
why
• Update Go, Docker, Terraform to the latest versions to keep up to date
• When searching for the specified component in the specified stack (e.g. atmos describe component <component> -s <stack>), if any of the stack config files throws an error (which also means that we can't find the component in that stack), print the error to the console and continue searching for the component in…
2022-08-24
Hey CP folks — Not necessarily an atmos question, but in that realm around components + process: How do you go about deleting the AWS Default VPCs?
I know about the awsutils_default_vpc_deletion resource (https://registry.terraform.io/providers/cloudposse/awsutils/latest/docs/resources/default_vpc_deletion), but I noticed there is no component for that resource, which is what I would've expected to see if that was used.
cc @Matt Calhoun as I know I’ve heard you talk about this in office hours or Slack somewhere.
The resource is deleted as part of our compliance component
I would be open to adding a feature flag to our vpc component
Ah interesting. I have a team member in progress on building a small component around default_vpc_deletion right now that we were going to upstream.
I could see adding it to an existing component, but for right now I think it's not a bad stand-alone action so it can be dealt with on a per-region basis.
That works too
Feel free to upstream
Actually, what about adding it to the existing account settings component?
Do you use that one?
We use that and that was the other component I thought of adding it to, BUT account-settings is a global component and deleting the default VPC would be a per-region action.
It’s likely possible to create a submodule that we could iterate over regions to delete default VPCs, but dunno if it’d be the best fit.
Ohhhhh snap!
Nah, proceed then with your current approach
The aws provider has native support for it so i don’t think the awsutils provider is needed for default vpc deletion anymore
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
@RB I just had that same conversation with a teammate — The issue I don't like with deleting the default VPC via the AWS provider + default_vpc resource is that it requires two applies: one to gain ownership over the VPC resource and then one to destroy the VPC resource.
My teammate was saying he wanted to go the official provider route because it was official, but really I think it’s just more work for no gain. So IMO the awsutils way is superior.
Ah that is very interesting. I have not used the native resource and was unaware of this limitation.
It’s probably worth a ticket with the aws provider. You’re right, the awsutils provider seems to be superior here
Ya, official route is “consistent” with terraform patterns, but a PIA and not gitops friendly.
Hence, our resource still wins.
Hey everyone, @Matt Gowie's teammate was me. I was less concerned about it being official than idiomatic. To my knowledge, tf will destroy resources on apply but only if they are child resources of a resource that tf is managing. My understanding is that when AWS implemented a complete lifecycle for default VPC they did it as an independent resource, so you have to run an apply to pick it up, and then you can do a terraform destroy to get rid of it (but only if a flag is set in the resource). Hashi's implementation is consistent with CRUD principles given what they are provided from amazon, imo.
I actually like the account-settings idea, because it feels like this should be a flag at account creation that AWS doesn’t provide. This would work like the official default GKE provider where there’s a “remove_default_node_pool” flag and terraform reconciles that after it creates the cluster.
It’s definitely more work to delete it in every region though.
the account-settings component is deployed in gbl tho, with a primary region, so you'd only be able to affect your primary region and not your secondary regions
it would be easier to create a ~net net~ new regional component vpc-delete-default and deploy that imho
not sure I follow on “net net regional component”, but I am new to atmos so not fully up-to-speed on its idiom. I just mean that the desired behavior is probably account-level, definitely in terms of security. If you don’t want unsecured VPCs in one region, you won’t want them in any region.
corrected. net net => new
terraform providers have a region input and providers cannot be dynamically added, so a regional component would be a vpc-delete-default component imported into say ue2-automation and uw2-automation to delete the default vpc in both ue2 and uw2
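something like this, roughly (just a sketch; vpc-delete-default is the hypothetical component name from above):
# catalog/vpc-delete-default.yaml (sketch)
components:
  terraform:
    vpc-delete-default:
      vars:
        enabled: true

# stacks/ue2-automation.yaml
import:
  - catalog/vpc-delete-default

# stacks/uw2-automation.yaml
import:
  - catalog/vpc-delete-default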
Got it I think this is the direction we are headed in.
I wish this were a flag at account creation on the AWS side though, that feels like the missing piece to me. ty for the discussion everyone!
Ya, in an ideal world, it should be a feature flag on account creation.
It would be nice to write up an article using atmos, in defense of terraform workspaces using tfvars.
For example, trussworks is vehemently against terraform workspaces and this blog post is commonly cited.
Terraform is a powerful tool for building out infrastructure, but it can also create traps for you to fall into. Here’s how we build our infrastructure at Truss to avoid some common pitfalls.
cc: @Matt Gowie @Erik Osterman (Cloud Posse)
cc: @Andriy Knysh (Cloud Posse)
workspaces work well with TFC, Atlantis and many other tools. I think these people are stuck on an early version of TF, but I tried to stay away from workspaces as much as possible because I do not want to add more stuff to my TF workflow, not because they do not work
atmos uses workspaces too, no?
Yes, atmos uses workspaces. The workspaces atmos uses are named <root-stack>-<optional derived component name>.
# ue2-automation.yaml
components:
  terraform:
    # results in ue2-automation workspace and ue2-automation-vpc spacelift stack
    vpc:
      # optional metadata key since the type=real is implicit
      metadata:
        type: real
        component: vpc
    # results in no workspace
    eks/defaults:
      metadata:
        type: abstract
        component: eks/cluster
    # results in ue2-automation-eks-example workspace and spacelift stack
    eks/example:
      metadata:
        type: real
        component: eks/cluster
        inherits:
          - eks/defaults
It would be nice to write up an article using atmos, in defense of terraform workspaces using tfvars.
100% — Would love to see it. The copy pasta of directories method seems archaic. Even with root modules only being child modules it still feels deeply wrong.
There will be a Masterpoint blog started in the coming month or two and I'll add this to my list of potential post ideas if none of ya'll have jumped on it by then.
Strongly agree
the main issue they describe in the article is actually not related to TF workspaces. They are talking about diff versions of the same TF component (e.g. some TF components using diff database)
If you force your development and production environments to use the same exact code, how do you test new versions of your infrastructure? Let's say you want to change your application to be backed by a different type of datastore, or add in some new AWS resource that was just released. How do you do that in your development environment without doing it in production at the same time?
it’s the issue of TF components versioning
which is a big issue if you have a lot of diff variations, but easily solved if you have just a few diff versions
components
  terraform
    component1
    component1_v2
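e.g. a rough sketch (stack and component names are hypothetical) of pointing different stacks at different versions via metadata.component:
# stacks/dev.yaml (sketch)
components:
  terraform:
    component1:
      metadata:
        # dev tests the new version in components/terraform/component1_v2
        component: component1_v2

# stacks/prod.yaml (sketch)
components:
  terraform:
    component1:
      metadata:
        # prod stays on components/terraform/component1
        component: component1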
yes, but this article is used to throw out the idea of workspaces and so this article can easily be used to toss out atmos
i speak from debate experience of trying to push atmos btw.
in any case, TF workspaces have nothing to do with TF component versioning. You’ll encounter the same issue whether you are using TF workspaces or not
Ya this is a worthy topic. There is so much FUD about terraform workspaces that is consistently spread by influential persons and companies using terraform.
is there a cloudposse blog ?
- In atmos, we are using workspaces, but can "easily" switch it off
- "easily" from the code point of view. Not "easily" if you consider the S3 backend, where we have workspace_key_prefix as the bucket folder and workspace as the subfolder. The backend needs to be considered here as well; otherwise everything in S3 will be in just one big folder, which is not easy to maintain
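i.e. with the S3 backend the state for each component ends up under roughly this path (a sketch; bucket/prefix/workspace names taken from the describe-stacks output above):
# s3://<bucket>/<workspace_key_prefix>/<workspace>/<key>
s3://eg-ue2-root-tfstate/test-test-component/test-component-override-3-workspace/terraform.tfstate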
but we don’t post regularly
(other than our office hours updates, which somewhat dilute the blog)
the second issue they mentioned is forgetting to switch to the correct workspace. With plain TF, this is a big issue and we can agree with them on that
atmos does the switch automatically on every command execution, and prints the message to the console about what workspace is being used
exactly, this is what i want from the blog post because atmos makes working with workspaces very very very easy
in fact, when first working with atmos, i had no idea it was using workspaces in the background
and as far as i've worked with atmos, it's never failed to choose the appropriate workspace for me
Good design is in all the things you notice. Great design is in all the things you don't.
Wim Hovens
:)
wait, what about feedback on the design?
v1.4.27
what
• Update atmos vendor pull
why
• Allow using absolute and relative file paths in component.yaml when vendoring mixins. This will allow having mixins in a local folder (and not in a private GitHub repo for which you'll have to use a GitHub token or other means to authenticate)
# mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
# mixins are processed in the order they are declared in the list
mixins: #…
2022-08-25
2022-08-26
Hello, thought I’d join this slack space as I stumbled on Atmos - looks great. Simple question though, I’m finding the complete example a bit overwhelming, does anyone have a public example that’s considerably simpler that doesn’t try to show all the features off?
Hrmmm good feedback
Have you checked out the tutorials?
Those are simpler albeit a little out of date
Yeah, though googling led me to: https://atmos.tools/core-concepts/fundamentals
If you get a chance to run through it and have any problems please post here
There was a tiny bit of irony in finding a tool that abstracts out the complication of layering and composing complex infrastructure - only to have the example in the git repo being the full complete complex infrastructure and none of the intermediate layering of complexity
Then let’s open a PR to fix it
Haha it’s true
Is it a case that the project is new/moving so quickly the docs site is a bit bare right now?
So it’s one of the challenges we have. First as the creators it’s “easy” for us. But we know it’s a lot to take in. How to build up to that? We want to find a good way to do that. Even in OO programming it’s hard, and we took the concepts from there
http://atmos.tools needs more content!
We have a lot more internal documentation for customers but it’s taking a while for us to upstream it
Think if you’re doing a tutorial series for AWS for example, maybe showing intermediate stages starting off with a single component/layer of a VPC, then having two layer components of a public/private subnet setup (think IGW/NAT GW) or an isolated subnet setup (think just raw subnets and VPC endpoints for aws services - lambda maybe?) and then sitting something on top of those two options, to show the power of a stack where you have multiple middle options and can drop EKS on those depending on some layered config might have some “real world” value
I’ve come to find atmos because I am finding it increasingly difficult to provide our developers a composable stack where we can have multiple VPC layouts, because the compute platform of EKS is not really bothered by those complexities down in the subnets, but it’s critical the vpc/subnets exist first - and resolving that dependency web is proving painful
Yes that’s the problem we had as consultants going into new situations. We always had to solve the problem of different architecture and couldn’t develop the terraform for every architecture. We didn’t want to do code generation either because that’s hard to test.
Yeah the code generation route is pain too, ideally the atmos style of providing a recipe of components with some overrides in the vars seems exactly the right solution…
So we built atmos so we could write our modules once and compose them in any way expressing it as yaml. Smaller states. Decoupled lifecycles of various kinds of resources. The mistake we see all the time is someone has a module with the vpc and the eks cluster in it. But the lifecycles are totally different.
They should have 2 different states.
yup, exactly the pain we’re feeling
plus the risk of accidentally nuking the VPC…
So that’s comforting that atmos is attempting to solve this mess
Feel free to DM me and I can show you behind the scenes how we organize it
Thanks for the offer. Before I embark on the tutorials - noticed as you suggested they're 17 months old - do they still work with the current version of atmos or has the API changed?
(basically asking do I need to go fish out a much older binary for atmos to play along)
nevermind, looks like you’ve bundled a docker image (TIL about geodesic)
let's see if it all works
geodesic tutorial worked bar one thing: https://docs.cloudposse.com/tutorials/geodesic-getting-started/#geodesic-usage-patterns the second install option just plain didn’t do anything
the atmos example blows up because the fetch-weather url 404s now
This might be helpful too. This is the normal layout for an atmos mono repo. The only difference is that we usually don't use the "infra" directory.
https://github.com/cloudposse/atmos/tree/master/examples/complete
Generally there are 2 main directories, components/terraform/<root terraform modules> and stacks/
# stacks/uw2-dev.yaml
# or stacks/orgs/acme/dev/us-west-2.yaml
vars:
  # company prefix
  namespace: acme
  # our word for short region code
  environment: uw2
  # our word for account
  stage: dev
components:
  terraform:
    vpc/defaults:
      metadata:
        type: abstract
        # corresponds to components/terraform/vpc
        component: vpc
      vars:
        enabled: true
    vpc/example:
      metadata:
        # real is implicit and optional
        type: real
        component: vpc
        inherits:
          - vpc/defaults
      vars:
        name: example
Then
atmos terraform plan vpc/example --stack uw2-dev
This creates a vpc component called example in the dev account in us-west-2
does atmos have a simple solution to creating a different terraform state file per component?
Yes, atmos will generate the backend based on the yaml inputs and use a workspace key prefix
Atmos uses workspaces behind the scenes, and the workspace is equivalent to <root-stack>-<component name> if using a derived component (via metadata) like in the above fashion, or simply <root-stack> without using a derived component.
This ensures a unique workspace per component instantiation per root stack
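e.g. the resolved backend per component looks roughly like this (a sketch; bucket name and the eks component are assumed here, see atmos describe component for the real values):
# vpc/example (sketch)
backend_type: s3
backend:
  bucket: acme-uw2-dev-tfstate      # assumed bucket name
  key: terraform.tfstate
  workspace_key_prefix: vpc         # derived from the base terraform component

# eks/example (sketch)
backend_type: s3
backend:
  bucket: acme-uw2-dev-tfstate      # assumed bucket name
  key: terraform.tfstate
  workspace_key_prefix: eks-cluster # slashes in the component path become dashes
so each component gets its own workspace_key_prefix (and workspace), i.e. its own state path in the bucket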
is there the concept of stacks with shared components? say there’s two stacks, dev-a and dev-b, both of them are just eks setups (two control planes needed) but they can both share subnets and the vpc - possible?
I appreciate this would cause pain when doing terraform destroy, not knowing the other stack exists - so maybe it’s just a bad idea
yeah just ignore that, it’s a stupid idea to cross the streams, it’s delicate enough in terraform as it is
Yes, the components are shared. See this.
# catalog/vpc/example.yaml
components:
  terraform:
    vpc/defaults:
      metadata:
        type: abstract
        # corresponds to components/terraform/vpc
        component: vpc
      vars:
        enabled: true
    vpc/example:
      metadata:
        # real is implicit and optional
        type: real
        component: vpc
        inherits:
          - vpc/defaults
      vars:
        name: example

# stacks/uw2-dev.yaml
import:
  - catalog/vpc/example

# stacks/uw2-prod.yaml
import:
  - catalog/vpc/example
atmos terraform plan vpc/example --stack uw2-dev
atmos terraform plan vpc/example --stack uw2-prod
that's a shared component reused across 2 accounts
oh damn.
that is powerful
ya… it is and we really need to blog on it more lol
now, if the documentation was exhaustive…
blogs are cool, lots of examples of where terraform falls short and how to solve that problem on the docs site will be great
and we all know terraform falls short in some catastrophic ways…
2022-08-30
Can atmos support a model where a stack is used as a template and you can provision multiple copies of that template where you simply change the environment prefix without having to fork/copy the yaml and make multiple stacks?
say all the terraform resources are prefixed in their name with a string, and all I want to do is provision an entire stack with a dynamic prefix - and the state be named accordingly:
say in terraform land it'd be TF_VAR_PREFIX=abc terraform plan and TF_VAR_PREFIX=abc terraform apply
to bring up stacks where the states live in s3:\\examplebucket\states\${AWS_REGION}\${PREFIX}\${COMPONENT}.state
so the backend key is dynamic based on component, and prefix (and optionally region)
@Daniel Loader check this out — https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/test-component-override.yaml#L11-L18
In that example, the component has a specific name that will end up in the workspace name (test/test-component-override) and it references that its code comes from test/test-component. I believe that is what you're looking for.
AFAIK, you can't accomplish that from the command line, but in the IaC / infrastructure as data world, you don't really want to as that would go against best practice.
"test/test-component-override":
  # Specify terraform binary to run
  command: "/usr/local/bin/terraform"
  # The `component` attribute specifies that `test/test-component-override` inherits from the `test/test-component` base component,
  # and points to the `test/test-component` Terraform component in the `components/terraform` folder
  # `test/test-component-override` can override all the variables and other settings of the base component (except the `metadata` section).
  # In this example, variables for each service are overridden in `catalog/services/service-?-override.*`
  component: "test/test-component"
that’s fair feedback, was looking for a way to provision dynamic sandboxes for devs, that last 1-8hrs at the most
Yeah you can do that via a pipeline that adds some YAML to your stacks and then runs a fresh apply of that new stack. Then have a job come back around and clean it up later.
And we do that with atmos. The difference is we commit that file. What's rad is then you have a record of it, versus if you just do it on the command line without state, it's inconsistent
Atmos supports wildcard includes, so we create a folder called previews/ and programmatically commit a file to that folder which then gets automatically deployed
Delete the file and it automatically gets destroyed
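roughly like this (a sketch; the previews/ folder, file name, and stage value are just hypothetical examples):
# atmos.yaml (sketch)
stacks:
  included_paths:
    - "orgs/**/*"        # existing top-level stacks
    - "previews/**/*"    # wildcard include for ephemeral preview stacks

# stacks/previews/pr-123.yaml (committed by the pipeline; deleting it destroys the sandbox)
vars:
  stage: pr-123          # hypothetical prefix for the sandbox
components:
  terraform:
    vpc/example:
      vars:
        name: pr-123-example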
v1.4.28
what
• Update atmos vendor pull
why
• When pulling in mixins, override the destination file if it already exists
• Prevent the error: symlink components/terraform/mixins/context.tf components/terraform/infra/vpc-flow-logs-bucket/context.tf: file exists