#atmos (2022-09)
2022-09-07
v1.5.0
what: Add support for custom integrations in atmos.yaml. Add Atlantis support (Atlantis is an integration). Add atmos terraform generate varfiles and atmos atlantis generate repo-config CLI commands.
why: Support Atlantis. Generate the varfiles for all components in all stacks (this is used in the Atlantis repo config, and will be used to detect drift in variables to simplify triggering Spacelift stacks). Automatically generate the Atlantis repo config file atlantis.yaml. Using the config, project and…
2022-09-09
Hey folks — I posed this question to my team the other day and would like to see if there was more specific or correct nomenclature that I’m not aware of.
We should discuss what to call an instance of a component in a stack file. My thoughts on the nomenclature right now are:
1. stack file / stack (lowercase) — The files and atmos stacks e.g. gbl-root, ue1-automation, etc.
2. Spacelift Stack / Stack (uppercase) — Spacelift’s term for an instance of a component — This has its own state and is planned / applied. e.g. ue1-audit-cloudtrail-bucket
3. Component instance (This is what I would like to have a better term for, but don’t) — The configuration in our stack files that allows us to create an instance of a component with atmos and is the local version of a “Stack”.
Any thoughts?
@Matt Gowie thanks, those are good questions
TF components in the components/terraform folder are called terraform components (or helmfile components). In stacks, we have a few things:
- Stack config files (e.g. ue1-dev.yaml) where the top-level stacks are defined
- Atmos stacks - the logical stacks (not related to stack config files). The logical stack names are always derived from the context (e.g. tenant-ue2-prod) and they are not the config files. That’s why you can name your stack config files anything and put them in any folder or subfolder at any level; atmos just cares about the logical names. In short, the stack config files are for humans to organize the config and use the file and folder names that people understand. The logical stack names (derived from the context) are for atmos
- Atmos components - that’s what you define in the YAML stack config files, pointing them to the “physical” terraform/helmfile components using the metadata.component: xxx attribute
- Atmos components can be of a few types: real and abstract. The abstract components (defined by metadata.type: abstract) are just containers with some default values; they can’t be provisioned. The real components get provisioned. Real components can inherit from abstract components (and other real atmos components). Another important thing about atmos components is that they allow you to provision more than one instance of the same TF component, under different names, in the same stack
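A minimal sketch of that last point (the component names and CIDRs here are made up): two atmos components in one stack can point at the same terraform component via metadata.component:

```yaml
components:
  terraform:
    vpc/main:
      metadata:
        component: vpc   # the "physical" component in components/terraform/vpc
      vars:
        ipv4_primary_cidr_block: "10.0.0.0/16"
    vpc/backup:
      metadata:
        component: vpc   # same TF code, provisioned a second time in this stack
      vars:
        ipv4_primary_cidr_block: "10.1.0.0/16"
```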
- Which brings us to the concept of atmos component inheritance. An atmos component (defined in the YAML files) can be a base component for other atmos components, which we call derived components. Derived components can inherit from other atmos components (abstract and real) using metadata.inherits: [xxx, yyy]
- Spacelift stacks are a different concept. They are a Cartesian product of all (real) atmos components and all atmos stacks (logical names derived from the context), meaning we generate as many Spacelift stacks as we have atmos components multiplied by atmos stacks. E.g. for a component vpc in the stacks ue2-dev and ue2-prod, we generate the Spacelift stacks ue2-dev-vpc and ue2-prod-vpc, etc.
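The Cartesian-product naming can be sketched in a few lines of Python (the component and stack names are just the examples above):

```python
from itertools import product

components = ["vpc"]
stacks = ["ue2-dev", "ue2-prod"]

# One Spacelift stack per (atmos stack, real atmos component) pair
spacelift_stacks = [f"{s}-{c}" for s, c in product(stacks, components)]
print(spacelift_stacks)  # ['ue2-dev-vpc', 'ue2-prod-vpc']
```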
so yes, many definitions are reused or overridden. But to summarize, we have:
• atmos stack config files (top-level stacks and catalog)
• atmos stacks (derived from the context). Each atmos stack is a collection of atmos components provisioned into a particular environment (region/account/stage). Each atmos stack can be defined in one or many atmos stack config files (meaning you can define some components in one stack config file, and other components in other stack config files, for the same logical stack (region/env/stage))
• terraform and helmfile components (in the components folder) - this is the TF/helmfile code
• atmos components in YAML config files (where you define vars and other config for TF/helmfile components) - and they can be real/abstract, base/derived
• Spacelift stacks (in short, each Spacelift stack is an atmos component in an atmos stack)
• Atlantis projects - similar to Spacelift stacks, each atlantis project is an atmos component in an atmos stack
Here’s my take on it (ignore me if I’m confusing things, please let me know if I’m correct)
# catalog/s3-bucket/defaults.yaml
components:
  terraform:
    # abstract component
    s3-bucket/defaults:
      metadata:
        type: abstract

# catalog/s3-bucket/flavor.yaml
import:
  - catalog/s3-bucket/defaults
components:
  terraform:
    # real component
    s3-bucket/flavor:
      metadata:
        inherits:
          - s3-bucket/defaults
then there is a difference between a real component based on an abstract and a real component based on nothing. The former real is derived whereas the latter real is non-derived.
I’ve been using
yes, but let’s keep in mind that abstract/real and base/derived are diff concepts and not related to each other, except that you usually define an abstract component to be a base and inherit from it later in other components (so it’s just a collection of some default values)
you cannot have a real abstract, but you can have a real base (or non-derived) and a real derived
(let’s not complicate things even more it’s already a lot of diff thing here to name )
@Andriy Knysh (Cloud Posse) awesome stuff — so what I have been referring to as a “component instance”, you refer to as an “atmos component”, which I can understand.
I really like the term “logical stack” as a differentiator against stack files because I think that is one of the more confusing things to understand when coming into atmos: logical stacks are derived from context and not from file names. And really, the logical stacks are more important because that is actually what you need to reference and what gets used in your workspace name and similar.
Anyway, thanks for braindumping all of the above. We should sticky this thread and as you folks are writing out atmos documentation, you should copy + pasta a good bit of the above.
any atmos component can be a base for any other atmos component (it’s an inheritance chain). Making a component abstract just prevents it from being deployed (atmos will complain if you try to deploy an abstract component). Both base and derived components can be made abstract, which just means you don’t want them to be provisioned and will derive from them in other atmos components
that’s some OOP here
and we have not even started on Component-Oriented-Programming (COP) yet
but thank you @Matt Gowie for bringing up the naming issue. Yes, we have called all of that diff names (mostly using components and stacks), but components and stacks are completely diff overloaded things and we have many of them - TF components, atmos components, stack config files, stacks, Spacelift stacks - all diff things, and having a common dictionary for all of that stuff is important, otherwise people will not understand each other
if you have ideas on how to better name these things, let us know
(e.g. component instance is a good name for atmos component since in OOP they call it a class instance or object. E.g. in C# they don’t call an object created from a C# class a C# object)
I’ll keep it in mind. But yeah, component instance is the only one I felt was my own term and I didn’t know what you folks called that internally. I think instance follows the OOP terminology, but I also get why you would call them atmos components, as that makes a bit of sense too.
v1.5.1
what: Adds a --skip-init flag which allows skipping terraform init.
why: This can help speed up workflows in the case that the user knows their last command successfully ran terraform init and they do not need to run init again.
2022-09-10
v1.6.0
what: Update the atmos atlantis generate repo-config command. Support native HCL output format in the atmos terraform generate varfiles command.
why: Do not iterate over Go maps in the atmos atlantis generate repo-config command. Go iterates over maps in a non-deterministic order, resulting in constant drift in the final atlantis.yaml file. Instead, get the map keys, sort them, and iterate over the sorted keys. Support native HCL output format in the atmos terraform generate varfiles command - when ejecting from…
@Andriy Knysh (Cloud Posse) do you know why it keeps posting 2 updates for every release?
it’s prob RSS config, we need to check it
2022-09-11
v1.7.0
what: Add the atmos terraform generate backends command.
why: Generate terraform backend configs for all terraform components. Supported formats: HCL and JSON. A GitHub Action that generates all .tfvar files and backend.tf files so that projects can be used with conventional terraform GitOps tools like Atlantis, Terraform Cloud, et al.
test:
# hcl is default, no need to specify it
atmos terraform generate backends --format=hcl
Writing backend config for the terraform component ‘test/test-component’ to…
if you keep up the pace you will be at version 100.0.0 in no time
it’s still a long way to go to 100
All because of you @jose.amengual
lol, I asked for one thing…..
(so the back story here, is to support atlantis, everything should be committed, which means all the varfiles and all the backends. that’s what we were generating on the fly.)
So in this model, either with pre-commit hooks or something like a GitHub Actions workflow, generate the backends and varfiles with atmos, then atlantis will work very well.
In fact, so will TFC, Spacelift, Env0, etc because it’s all vanilla HCL.
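That model can be sketched as a CI step (a hedged sketch: the workflow layout, action version, and commit step are assumptions; only the two generate commands come from the release notes above, and exact flags may differ by atmos version):

```yaml
name: atmos-generate
on: [pull_request]

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Assumes atmos is already installed on the runner
      - name: Regenerate varfiles and backend configs
        run: |
          atmos terraform generate varfiles
          atmos terraform generate backends --format=hcl
      # Commit the generated files so Atlantis (or TFC, Spacelift, etc.)
      # can detect affected projects from the diff
      - name: Commit generated files
        run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add -A
          git diff --cached --quiet || git commit -m "chore: regenerate varfiles and backends"
          git push
```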
@RB was asking about this
Atlantis currently can read Atlantis yaml files on the fly
the reason for committing the varfiles and backends is to detect “affected files” in the PR
Hence, they cannot be generated on the fly
(is my understanding)
but I’m not sure if it runs the plans correctly; that is what I need to test next
it was never my intention to commit the files back
2022-09-13
@jose.amengual any opinion on using App Runner for a simpler hosting of Atlantis?
Also, we’re thinking maybe it should just be a super simple cloudformation template
that way you can bootstrap everything even without a terraform backend,.
mmmmm
Imagine then in a control tower organization, this could be a part of the baseline deployed.
only apprunner apps can be part of that?
No, but it basically reduces the entire problem set down to 1-2 resources
No need for ALB, target groups, ecs cluster, ecs tasks, and two dozen more resources.
Fewer resources, fewer things that can go wrong.
Less things that cost money
mmm I will take a look at apprunner
I never used it
2 vCPU 4GB max for app runner
that is not much
for a small atlantis that could work, but for a bigger deployment/infra repo it could be too small
Oh interesting
Surprised it is that low
I have never used it
v1.7.1 what Fixes an issue where ATMOS_CLI_CONFIG_PATH points to a non directory and results in a panic. why This provides proper messaging and gracefully fails. The error in question here that was getting skipped over by os.IsNotExist(err) was the following: stat /usr/local/etc/atmos/atmos.yaml/atmos.yaml: not a directory
2022-09-14
v1.8.0
what: Remove the default hardcoded CLI config. Update TF workspace calculation for Spacelift stacks.
why: Make the atmos.yaml CLI config always required. Remove the default hardcoded CLI config b/c it had some default values which are not applicable for all use cases. Instead, throw an error if atmos.yaml is not found.
export ATMOS_LOGS_VERBOSE=true
atmos describe component test/test-component-override -s tenant1-ue2-dev
Found ENV var ATMOS_LOGS_VERBOSE=true
Searching, processing and merging atmos CLI…
2022-09-16
Hello. Loving atmos. Hopefully a quick question on a Friday. Is there any way to have a YAML anchor in the config of an imported catalog file and then use it in your stack config?
I’m thinking it might not be possible as it would produce invalid YAML…
I think anchors work only inside one yaml file. When we import, we use a Go lib to read the yaml files one by one, so cross-file anchors would not work
Presently, anchors are scoped to the file boundaries. We process each file as YAML before processing imports.
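A quick way to see the file-scoped behavior (a sketch using PyYAML; atmos itself is a Go tool and does not use this library, but YAML alias scoping works the same way):

```python
import yaml

# Within one file, an anchor and its alias resolve fine
one_file = """
defaults: &defaults
  enabled: true
component:
  <<: *defaults
"""
print(yaml.safe_load(one_file)["component"]["enabled"])  # True

# An alias whose anchor lives in a *different* file is undefined at parse time
other_file = """
component:
  <<: *defaults
"""
try:
    yaml.safe_load(other_file)
except yaml.YAMLError as exc:
    print("parse error:", type(exc).__name__)
```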
Thanks
2022-09-20
Hey, in the latest atmos example directory structure it’s been setup for multiple orgs and tenants, how is this actually used in practice? Multiple orgs in case of organizational changes like acquisitions? And for tenants, is this used for multi-tenant environments or even for OUs like platform and management? I’m having trouble relating the actual example atmos setup to this SweetOps foundational infrastructure diagram.
multiple orgs if you have many completely separated orgs/business units and you want to separate the provisioned resources for them. In many cases it’s just one org
tenants usually directly correspond to OUs inside the org
note that all these names are for people to organize the stack configs into folders and files; atmos does not care about these names and folders, it just needs the context variables for each stack: namespace (corresponds to org), tenant (OU), stage (account), environment (region)
so the provisioned AWS resource IDs look like {namespace}-{tenant}-{environment}-{stage}-{resource_name}, e.g.
cp-core-ue2-prod-vpc
cp-core-uw2-dev-vpc
cp-platform-ue2-staging-rds
so all the resource IDs and names are unique and consistent
you can skip any of the context vars, e.g. if you don’t have tenants/OUs, you can use {namespace}-{environment}-{stage} for atmos stack names, and the stacks folder structure will not have tenants/OUs
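The delimited naming convention above can be sketched like this (a hypothetical helper for illustration; the real IDs are produced by the terraform-null-label module, not by this function):

```python
def resource_id(namespace, tenant, environment, stage, name):
    """Join the non-empty context parts with '-'; any part (e.g. tenant) may be omitted."""
    parts = [namespace, tenant, environment, stage, name]
    return "-".join(p for p in parts if p)

print(resource_id("cp", "core", "ue2", "prod", "vpc"))  # cp-core-ue2-prod-vpc
print(resource_id("cp", None, "ue2", "prod", "vpc"))    # cp-ue2-prod-vpc
```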
companies have multiple AWS Organizations for multiple reasons. (not talking about OUs, but actually separate root payer accounts)
here are 3 examples
- Through acquisition, they have multiple root accounts that have not been consolidated
- companies create identical organizational structures for {dev, staging, prod}; this is a security conscious company
- (similar to the above) companies create a model organization similar to their live organization; this way they have a sandbox to test organizational changes without affecting the live organization.
99% of companies play cowboy in the root organization, because they only have one.
I’ve even seen a 4th configuration, where the security organization runs in their own root org so no one in the primary organization can tamper with security audits.
as Erik mentioned, having multiple Orgs will allow you to completely separate the root/management accounts (for many reasons, security, acquisition, etc.), which having just OUs does not achieve
I’m adding some CloudPosse terraform components to my atmos stacks, are the example components like this in the atmos repo following you guys’ recommended pattern? Specifically around vendoring terraform components and the context and introspection mixins, are those mixins always necessary when importing CP modules and doing things the SweetOps way? Or maybe it’s a Q for office hours?
# 'vpc-flow-logs-bucket' component vendoring config
# 'component.yaml' in the component folder is processed by the 'atmos' commands
# 'atmos vendor pull -c infra/vpc-flow-logs-bucket' or 'atmos vendor pull --component infra/vpc-flow-logs-bucket'
#
# > atmos vendor pull -c infra/vpc-flow-logs-bucket
# Pulling sources for the component 'infra/vpc-flow-logs-bucket' from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref=0.194.0'
# and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'
#
# Including the file 'README.md' since it matches the '**/*.md' pattern from 'included_paths'
# Excluding the file 'context.tf' since it matches the '**/context.tf' pattern from 'excluded_paths'
# Including the file 'default.auto.tfvars' since it matches the '**/*.tfvars' pattern from 'included_paths'
# Including the file 'main.tf' since it matches the '**/*.tf' pattern from 'included_paths'
# Including the file 'outputs.tf' since it matches the '**/*.tf' pattern from 'included_paths'
# Including the file 'providers.tf' since it matches the '**/*.tf' pattern from 'included_paths'
# Including the file 'variables.tf' since it matches the '**/*.tf' pattern from 'included_paths'
# Including the file 'versions.tf' since it matches the '**/*.tf' pattern from 'included_paths'
#
# Pulling the mixin 'https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf'
# for the component 'infra/vpc-flow-logs-bucket' and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'
# Pulling the mixin 'https://raw.githubusercontent.com/cloudposse/terraform-aws-components/0.194.0/modules/datadog-agent/introspection.mixin.tf'
# for the component 'infra/vpc-flow-logs-bucket' and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # Source 'uri' supports the following protocols: Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter
    # In 'uri', Golang templates are supported https://pkg.go.dev/text/template
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 0.196.1
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    # https://en.wikipedia.org/wiki/Glob_(programming)
    # https://github.com/bmatcuk/doublestar#patterns
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    excluded_paths:
      - "**/context.tf"
  # mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # mixins are processed in the order they are declared in the list
  mixins:
    # https://github.com/hashicorp/go-getter/issues/98
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
    # This mixin `uri` is relative to the current `vpc-flow-logs-bucket` folder
    - uri: ../../mixins/context.tf
      filename: context.tf
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
      version: 0.196.1
      filename: introspection.mixin.tf
Mixins are not required. Those are just examples of how to use them if you need to add a single file or override a file from the component repo
I see, thanks
Back again :smile:
I’m stuck on an odd issue around the steps in tutorial 3, “Your first environment on AWS”. I’m following the initial state setup process, but on my own stacks and with the latest version of the tfstate-backend component. When running the terraform plan, atmos is looking for ../account-map/modules/ which it can’t find. I tried adding the account-map component and doing an atmos vendor pull (including the modules), but still getting this error. Thread below…
√ . [default] (HOST) infra ⨠ atmos terraform plan tfstate-backend --stack core-gbl-root
Variables for the component 'tfstate-backend' in the stack 'core-gbl-root':
enable_server_side_encryption: true
environment: gbl
force_destroy: false
name: tfstate
namespace: hl
prevent_unencrypted_uploads: true
region: us-west-2
stage: root
tenant: core
Writing the variables to file:
/localhost/workspace/infra/components/terraform/tfstate-backend/core-gbl-root-tfstate-backend.terraform.tfvars.json
Executing command:
/usr/bin/terraform init -reconfigure
Initializing modules...
- assume_role in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map/modules/iam-assume-role-policy: no such file or
│ directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
I also then tried adding the account-map to my core-gbl-root stack, and running an atmos terraform plan for it, but that gives me this error:
│ Error:
│ Searched all stack files, but could not find config for the component 'account' in the stack 'core-gbl-root'.
│ Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
│ Are the component and stack names correct? Did you forget an import?
│
│ with module.accounts.data.utils_component_config.config,
│ on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
the account-map component is our component to store all info about all the accounts in the system + all the IAM roles (primary and delegated), so we can use the remote state to read the role ARNs and account IDs
to provision the backend, on cold-start, you need to comment out all the remote state lookups and provision the backend using a root user or admin user (not roles, since they don’t exist yet)
once the backend is provisioned (w/o any s3 backend yet to store the backend’s backend), you uncomment the s3 section for the backend component and plan again. TF will ask you to move the backend to the new s3 location
once you have the backend provisioned and moved to s3, you provision all the IAM roles using root/admin user credentials
then we provision account-map to store all the account and role info
then in all other components we use the account-map remote state to read the required terraform IAM role to use in the assume_role section
to summarize:
- Provision the tfstate-backend component locally (it will store its state on your file system)
- Move the tfstate-backend component backend to s3
- Provision all required IAM roles using admin user credentials
- Provision the account-map component
- Use the remote state of the account-map component in all other components
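The cold-start toggle in the first two steps looks roughly like this (a sketch; the bucket, table, and region names are placeholders, and in practice atmos can generate this backend file for the component):

```hcl
terraform {
  # Step 1: with no backend block, state is stored locally in terraform.tfstate

  # Step 2: once the bucket and lock table exist, add the s3 backend and run
  # `terraform init` again; terraform will offer to migrate the local state
  backend "s3" {
    bucket         = "your-tfstate-bucket"  # placeholder
    key            = "terraform.tfstate"
    region         = "us-east-1"            # placeholder
    dynamodb_table = "your-tfstate-lock"    # placeholder
    encrypt        = true
  }
}
```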
you are not forced to use account-map if you provide the terraform role in the assume_role section differently; that’s just how we do it
but for this you’ll have to modify the components a bit which you vendor from terraform-aws-components
Oh I see, thanks, of course, that worked for setting up state! Do you know why terraform is not finding the ../account-map/modules/iam-assume-role-policy module once I uncomment that section in tfstate-backend? Something with the relative path, since atmos finds the tfstate-backend fine.
My atmos config is as follows, in rootfs/usr/local/etc/atmos/atmos.yaml:
base_path: "."
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"
workflows:
  base_path: "stacks/workflows"
logs:
  verbose: false
  colors: true
Oh, that module doesn’t exist, I think it’s actually called team-assume-role-policy
Sorry, I have the same error, shown below. I read the instructions here but don’t know how to provision the tfstate-backend component locally. Can anyone help me?
Unable to evaluate directory symlink: lstat ../account-map/modules/iam-assume-role-policy: no such file or directory
Here is my configuration:
import:
  - orgs/eb/_defaults
vars:
  stage: "common"
  tenant: "iac"
  environment: "apne1"
components:
  terraform:
    tfstate-backend:
      backend:
        s3:
          bucket: "eb-iac-tfstate"
          key: "default.tfstate"
          dynamodb_table: "eb-iac-tfstate-lock"
          region: "ap-northeast-1"
I tried to comment out the s3 code to provision the backend locally, but it shows another error:
invalid 'components.terraform.tfstate-backend.backend' section in the file 'orgs/eb/iac/_defaults'
- Provision the tfstate-backend component locally (it will store its state on your file system)
@Quyen did you review the doc https://atmos.tools/quick-start/configure-terraform-backend ?
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
invalid ‘components.terraform.tfstate-backend.backend’ section in the file ‘orgs/eb/iac/_defaults’
the section should look like this
terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"
2022-09-21
@Linda Pham (Cloud Posse) let’s add an internal task for @Ben Smith (Cloud Posse) or @Dan Meyers to revamp the tutorial: https://github.com/cloudposse/tutorials
2022-09-22
I’m getting a newbie error message when trying to plan a stack - I’m not really sure what I’m missing here?
pmcdonald@mt-lvt3652ckq atmos % atmos terraform plan vpc -s dev
Searched all stack files, but could not find config for the component 'vpc' in the stack 'dev'.
Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
.
├── ./atmos.yaml
├── ./components
│ └── ./components/terraform
│ └── ./components/terraform/vpc
│ ├── ./components/terraform/vpc/README.md
│ ├── ./components/terraform/vpc/context.tf
│ ├── ./components/terraform/vpc/main.tf
│ ├── ./components/terraform/vpc/outputs.tf
│ └── ./components/terraform/vpc/variables.tf
└── ./stacks
├── ./stacks/catalog
│ └── ./stacks/catalog/terraform
│ └── ./stacks/catalog/terraform/vpc.yaml
└── ./stacks/dev.yaml
dev.yaml:
import:
  - catalog/terraform/vpc
vars:
  stage: dev
terraform:
  vars: {}
components:
  terraform:
    vpc:
      vars:
        enabled: true
        subnet_type_tag_key: "example.net/subnet/type"
        vpc_flow_logs_enabled: true
        vpc_flow_logs_bucket_environment_name: <environment>
        vpc_flow_logs_bucket_stage_name: "audit"
        vpc_flow_logs_traffic_type: "ALL"
        ipv4_primary_cidr_block: "10.111.0.0/18"
See the error message
Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined
in your atmos.yaml you have
stacks:
  name_pattern: {tenant}-{environment}-{stage}
but in your stack you are using just stage (e.g. dev)
set it to
stacks:
  name_pattern: {stage}
that worked. I tried stacks/tenant-uw2-dev.yaml and got the same error. Thanks though, I got past the error!
one more question. How does atmos resolve the name_pattern of {tenant}-{environment}-{stage}?
./stacks/test-usw2-dev.yaml
import:
  - catalog/terraform/vpc
vars:
  stage: dev
  tenant: test
  environmemt: usw2
terraform:
  vars: {}
components:
  terraform:
    vpc:
      vars:
        enabled: true
        subnet_type_tag_key: "example.net/subnet/type"
        ipv4_primary_cidr_block: "10.111.0.0/18"
atmos can’t find the stack
it searches all YAML top-level stack config files and compares the context variables against the pattern {tenant}-{environment}-{stage}
from what you have
vars:
  stage: dev
  tenant: test
  environmemt: usw2
the command should be like this
atmos terraform plan vpc -s test-usw2-dev
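The matching logic described here can be sketched in a few lines. This is an illustrative approximation, not atmos's actual implementation; the function name and error handling are invented for the example:

```python
# Illustrative sketch (not atmos's real code): resolve a stack name by
# substituting each {token} in the name pattern with the corresponding
# context variable from the stack file's top-level vars.

def resolve_stack_name(name_pattern: str, context_vars: dict) -> str:
    name = name_pattern
    for token in ("namespace", "tenant", "environment", "stage"):
        placeholder = "{" + token + "}"
        if placeholder in name:
            if token not in context_vars:
                # A stack whose vars lack a token required by the pattern
                # can never match any requested stack name.
                raise KeyError(f"context variable '{token}' is not defined")
            name = name.replace(placeholder, context_vars[token])
    return name

# Correctly spelled vars resolve to the expected stack name:
print(resolve_stack_name(
    "{tenant}-{environment}-{stage}",
    {"tenant": "test", "environment": "usw2", "stage": "dev"},
))
# -> test-usw2-dev
```

Note that the test-usw2-dev.yaml pasted above spells the key `environmemt`, which leaves `{environment}` undefined; that alone would prevent the stack from ever matching the pattern.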
Running that command yields:
Searched all stack files, but could not find config for the component 'vpc' in the stack 'test-usw2-dev'.
Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
in what location do you have atmos.yaml?
this?
stacks:
  base_path: stacks
  included_paths:
    - '**/*'
  excluded_paths:
    - '**/_defaults.yaml'
  name_pattern: '{tenant}-{environment}-{stage}'
is this file in the root of your repo?
it is
tree -f .
.
├── ./atmos.yaml
├── ./components
│ └── ./components/terraform
│ ├── ./components/terraform/tfstate
│ │ ├── ./components/terraform/tfstate/main.tf
│ │ └── ./components/terraform/tfstate/variables.tf
│ └── ./components/terraform/vpc
│ ├── ./components/terraform/vpc/README.md
│ ├── ./components/terraform/vpc/context.tf
│ ├── ./components/terraform/vpc/dev-vpc.terraform.tfvars.json
│ ├── ./components/terraform/vpc/main.tf
│ ├── ./components/terraform/vpc/outputs.tf
│ ├── ./components/terraform/vpc/terraform.tfstate.d
│ │ └── ./components/terraform/vpc/terraform.tfstate.d/dev
│ └── ./components/terraform/vpc/variables.tf
└── ./stacks
├── ./stacks/catalog
│ └── ./stacks/catalog/terraform
│ └── ./stacks/catalog/terraform/vpc.yaml
└── ./stacks/test-usw2-dev.yaml
you mean it was working before with just
vars:
  stage: dev
and it stopped working when you added
vars:
  stage: dev
  tenant: test
  environmemt: usw2
I got it working by changing name_pattern: '{tenant}-{environment}-{stage}' to name_pattern: '{stage}', and the file name was dev.yaml
the file name can be anything
I changed it back to name_pattern: '{tenant}-{environment}-{stage}', renamed the file to test-usw2-dev.yaml, and added the variables stage: dev, tenant: test, environment: usw2. That doesn't work
hmmm
does this help?
atmos describe config
Found ENV var ATMOS_LOGS_VERBOSE=true
Searching, processing and merging atmos CLI configurations (atmos.yaml) in the following order:
system dir, home dir, current dir, ENV vars, command-line arguments
No config file atmos.yaml found in path '/usr/local/etc/atmos/atmos.yaml'.
No config file atmos.yaml found in path '/Users/pmcdonald/.atmos/atmos.yaml'.
Found CLI config in '/Users/pmcdonald/workspace/metropolis-iac/atmos/atmos.yaml'
Processed CLI config '/Users/pmcdonald/workspace/metropolis-iac/atmos/atmos.yaml'
{
  "base_path": ".",
  "components": {
    "terraform": {
      "base_path": "components/terraform",
      "apply_auto_approve": false,
      "deploy_run_init": true,
      "init_run_reconfigure": true,
      "auto_generate_backend_file": false
    },
    "helmfile": {
      "base_path": "components/helmfile",
      "kubeconfig_path": "/dev/shm",
      "helm_aws_profile_pattern": "{namespace}-{tenant}-gbl-{stage}-helm",
      "cluster_name_pattern": "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
    }
  },
  "stacks": {
    "base_path": "stacks",
    "included_paths": [
      "**/*"
    ],
    "excluded_paths": [
      "**/_defaults.yaml"
    ],
    "name_pattern": "{tenant}-{environment}-{stage}"
  },
  "workflows": {
    "base_path": "stacks/workflows"
  },
  "logs": {
    "verbose": false,
    "colors": true
  },
  "commands": null,
  "integrations": {
    "atlantis": {
      "path": "",
      "config_templates": null,
      "project_templates": null,
      "workflow_templates": null
    }
  },
  "Initialized": true
}
I suggest you go back to the working state. Don't change the file name; it can be anything
atmos searches the stack by the context vars (tenant, environment, stage), not by file names
wouldn’t test-usw2-dev.yaml match?
it should (but it does not for you so something is wrong)
if you don’t resolve the issue, you can send me your code I’ll take a look (by just looking at it, it should work)
I got it working. I had to add tenant and environment to stacks/catalog/terraform/vpc.yaml
components:
  terraform:
    "vpc":
      backend:
        s3:
          workspace_key_prefix: infra-vpc
      vars:
        enabled: true
        name: "common"
        subnet_type_tag_key: eg.io/subnet/type
        nat_gateway_enabled: true
        nat_instance_enabled: false
        max_subnet_count: 3
        tentant: test
        environment: usw2
Thanks for your help!
hmm, you should not do that; those vars should be defined in the top-level stack
I still suggest you send me your code for review
(you don’t set those context vars in each component, they are set in top-level stacks)
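A hedged sketch of the layout being suggested (file names reused from the thread; the component vars are illustrative): context variables live once in the top-level stack file, and the catalog entry carries only component configuration:

```yaml
# stacks/test-usw2-dev.yaml -- context vars defined once, at the top level
vars:
  tenant: test
  environment: usw2
  stage: dev
import:
  - catalog/terraform/vpc

# stacks/catalog/terraform/vpc.yaml -- no tenant/environment/stage here
components:
  terraform:
    vpc:
      vars:
        enabled: true
```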
v1.8.1 what Use namespace context variable in the code that is used to return remote-state for a component in a stack why For stacks config using multiple Orgs, we use namespace in stack names, and need to be able to find the remote state of the components provisioned in these stack
2022-09-23
(Solved) Hey, I’m using the CloudPosse aws components for a new environment, and just tried to deploy my first component to a non-root stack, identity:
module.iam_roles.module.account_map.data.terraform_remote_state.s3[0]: Read complete after 1s
╷
│ Error: Invalid index
│
│ on ../account-map/modules/iam-roles/outputs.tf line 2, in output "terraform_role_arn":
│ 2: value = module.account_map.outputs.terraform_roles[local.account_name]
│ ├────────────────
│ │ local.account_name is "identity"
│ │ module.account_map.outputs.terraform_roles is object with 10 attributes
│
│ The given key does not identify an element in this collection value.
It’s looking for the identity account in account-map instead of core-identity. Stack:
stage: identity
tenant: core
environment: gbl
In the root stack, my account component looks as follows:
.....
- name: core
  accounts:
    - name: core-identity
      tenant: core
      stage: identity
      tags:
        eks: false
.....
and account-map:
...
root_account_account_name: core-root
identity_account_account_name: core-identity
...
Could this be an issue related to the null-label descriptor_formats? If so, how would I apply this special configuration of descriptor_formats so my account name is formatted as core-identity and not identity?
This is a confusing aspect of our current conventions. For customers NOT using tenant (which is a relatively recent addition to null-label), “account name” should not have a dash and is exactly the same as stage. For customers using tenant, the account name is tenant-stage, and we have a special configuration in null-label:
descriptor_formats:
  account_name:
    format: "%v-%v"
    labels:
      - tenant
      - stage
that creates the account name from the tenant and stage labels. This leads to code like this:
account_name = lookup(module.this.descriptors, "account_name", var.stage)
Oh! descriptor_formats is just an atmos stack var that gets passed to terraform-yaml-stack-config and then to null-label
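The descriptor and lookup above can be emulated in a few lines (an illustration of the logic only; the real implementation lives in Terraform and the null-label module):

```python
# Illustration (not the real null-label/Terraform code) of how the
# "account_name" descriptor and the lookup() fallback interact.

def build_descriptor(fmt: str, labels: list) -> str:
    # Emulate Terraform's format(): each %v consumes one label value.
    return fmt.replace("%v", "{}").format(*labels)

def account_name(descriptors: dict, stage: str) -> str:
    # Emulate: lookup(module.this.descriptors, "account_name", var.stage)
    return descriptors.get("account_name", stage)

# Without tenant, no descriptor is configured: account name == stage.
print(account_name({}, "identity"))  # -> identity

# With tenant and the descriptor_formats config: account name == tenant-stage.
descriptors = {"account_name": build_descriptor("%v-%v", ["core", "identity"])}
print(account_name(descriptors, "identity"))  # -> core-identity
```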
v1.8.2 what Fix atlantis projects generation why apply_requirements should be under project, not under autoplan references https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html