#atmos (2025-01)
2025-01-02
Is there any documentation for writing Atmos integrations or is this not a recommended practice? https://atmos.tools/terms/integration/
An Integration is a mechanism of working with other tools and APIs.
I think we would like to have more, haven’t standardized the interface
Could you elaborate on what you might want to integrate?
I’m considering integrating different tooling that would benefit from Atmos’ YAML deep merging. Right now, I’d like to add a new component type to atmos.yaml. For example:
components:
  ansible:
    base_path: "components/ansible"
I’d like to use the atmos describe stacks and components functionality to create my own version of tfvars.json using a custom wrapper script that is harnessed by a custom Atmos command. I ran into a schema failure because it couldn’t validate the structure, so I added new Ansible component definitions to stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json, created a catalog default for the component, and everything validates, but it can’t find the Ansible components. So I’m just looking into whether there are any better ways to handle this edge case
Ah yes, I have been thinking about this some, and how we could add support for custom component types. That’s interesting if atmos is not loading those other custom components. I don’t think we have tested that, nor the implications for schemas. I think to support multiple custom types, we would want to extend support for multiple JSON schemas that get deep merged
I think we would want to register custom types in atmos.yaml, and provide some common conventions like base_path, command, and so forth. This is also where we could define the schema validation to use for the component type.
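For illustration only, a hypothetical sketch of what registering a custom component type in atmos.yaml could look like (the ansible type, its command, and the schema key are made up, not a shipped feature):
components:
  terraform:
    base_path: "components/terraform"
    command: "terraform"
  # hypothetical custom component type
  ansible:
    base_path: "components/ansible"
    command: "ansible-playbook"
    # hypothetical: JSON Schema used to validate this component type
    schema: "stacks/schemas/ansible/ansible-manifest/1.0/ansible-manifest.json"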
Are there some best practices users have figured out, or recommended by the authors, regarding using the atmos.Component template function across different (AWS) accounts? It seems a bit tricky with role assumption. For example, if a user or an automation runner assumes role A in account A but has an atmos.Component template function that references state in stack B, then it seems that stack B’s state bucket would need to allow cross-account access from role A? Is there another way? It seems like you could end up with a lot of access config following this strategy.
It doesn’t work well, unless you have a role that can access all backends
So we are implementing a better long term solution
what
• Add the concept of hooks that can run before/after other atmos commands
• Add a hook type that can write output values to stores (AWS SSM Param Store, Artifactory)
why
In order to share data between components and to decouple the Terraform permissions context from the shared values permissions context.
Using this, you can store outputs outside of the terraform state, and read from the store
This simplifies the permissions model
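A hedged sketch of what such a hook could look like in a stack manifest (the feature had not shipped at the time of this message, so every field name here is illustrative):
components:
  terraform:
    vpc:
      hooks:
        store-outputs:
          # run after a successful apply (illustrative event name)
          events:
            - after-terraform-apply
          command: store
          # a store configured elsewhere, e.g. SSM Parameter Store (hypothetical name)
          name: prod/ssm
          outputs:
            # write the component's vpc_id output to the store
            vpc_id: .id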
ETA for this is end of next week
@Erik Osterman (Cloud Posse) neat, thanks for the info!
2025-01-03
:rocket: Enhancements
Support default values for arguments in Atmos custom commands @Listener430 (#905)
what
• Support default values for arguments in Atmos custom commands in atmos.yaml
why
• Allow specifying default values so users are not required to provide them when invoking the custom commands
before
a custom cli-command is defined
after
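A minimal sketch of the feature, assuming the syntax introduced in #905 (the greet command and its argument are illustrative):
commands:
  - name: greet
    description: "Custom command with a default argument value"
    arguments:
      - name: name
        description: "Name to greet"
        # used when the caller omits the argument, e.g. `atmos greet`
        default: John
    steps:
      - echo "Hello {{ .Arguments.name }}!"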
Implement Custom Markdown Styling for Workflow Commands @Cerebrovinny (#853)
What
• Added custom markdown styling support for workflow command outputs
• Implemented a configurable markdown styling system through atmos.yaml
• Added fallback to built-in default styles when no custom styles are defined
• Added new workflow error templates in markdown format
• Improved code readability and maintainability in markdown rendering logic
Why
• Enhances user experience
• Allows users to define their own color schemes and styling preferences
• Improves error message readability with consistent formatting and styling
• Makes the CLI more accessible by supporting both default and custom color schemes
• Follows modern CLI design patterns with rich text formatting
Technical Details
• Added markdown settings section in atmos.yaml for custom styling configuration
• Implemented style inheritance system (custom styles override built-in defaults)
• Added support for styling:
• Document text and background colors
• Headings (H1-H6) with custom colors and formatting
• Code blocks with syntax highlighting
• Links with custom colors and underlining
• Blockquotes and emphasis styles
• Enhanced error handling with structured markdown templates
• Added proper fallback mechanisms for style configuration
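A hedged sketch of what the markdown settings in atmos.yaml could look like (key names and colors are illustrative, not the exact shipped schema):
settings:
  markdown:
    document:
      color: "#FFFFFF"
    heading:
      color: "#00A3E0"
      bold: true
    code_block:
      color: "#FFA500"
    link:
      color: "#00A3E0"
      underline: true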
References
• Implements styling using glamour for terminal markdown rendering • Follows ANSI terminal styling standards • https://github.com/charmbracelet/bubbletea • https://github.com/charmbracelet • https://github.com/charmbracelet/glow • https://github.com/charmbracelet/glamour/blob/master/styles/gallery/README.md
Testing
The implementation has been tested with:
• Custom styling configurations in atmos.yaml
Screenshot 2024-12-29 at 23 34 33
• Default styling fallback when no custom styles are defined
Screenshot 2024-12-30 at 23 37 04
2025-01-05
Hello! I am new to atmos and I am trying to implement it for our new infrastructure (I hope I am asking in the right place, sorry if not). I am currently blocked on trying to get atmos working with the http backend (interfacing with GitLab). Unfortunately the http backend doesn’t support workspaces (terraform doc) and therefore atmos crashes when trying to select the workspace. In your documentation (link) the http backend is listed as supported. Is there a way to get it working? Maybe I can use environment variables to force a workspace like here? Am I on the right path or am I missing something? Thanks in advance for your help!
Workspaces allow the use of multiple states with a single configuration directory.
Configure Terraform Backends.
Describe the Bug
In our deployment pipeline, we create the TF workspace via the HTTP API, since some configuration is not possible from backend.tf, e.g. working-directory, global-remote-state, etc.
Once the workspace is created/checked, it sets the TF_WORKSPACE variable.
In setting up Atmos for the first time, I found what I think is incorrect behaviour.
Terraform has been successfully initialized!
Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: ecr/redacted-app
Terraform component: ecr
Stack: redacted-dev
Working dir: components/terraform/ecr
Executing command:
/usr/bin/terraform workspace select long_complex_workspace_name_redacted
The selected workspace is currently overridden using the TF_WORKSPACE environment variable.
To select a new workspace, either update this environment variable or unset it and then run this command again.
Executing command:
/usr/bin/terraform workspace new long_complex_workspace_name_redacted
Workspace "new long_complex_workspace_name_redacted" already exists
exit status 1
Expected Behavior
If the workspace is overridden by TF_WORKSPACE, then atmos should accept that it is managed elsewhere and not try to create it again.
Steps to Reproduce
Run atmos with the TF_WORKSPACE environment variable set.
Screenshots
No response
Environment
• OS: Windows 11 • Atmos: 1.83.1 • Terraform: 1.8.2
Additional Context
I think we could change the error handling in ~L394
atmos/internal/exec/terraform.go
Lines 382 to 409 in 8060adb
err = ExecuteShellCommand(
    cliConfig,
    info.Command,
    []string{"workspace", "select", info.TerraformWorkspace},
    componentPath,
    info.ComponentEnvList,
    info.DryRun,
    workspaceSelectRedirectStdErr,
)
if err != nil {
    var osErr *osexec.ExitError
    ok := errors.As(err, &osErr)
    if !ok || osErr.ExitCode() != 1 {
        // err is not an exit error, or the exit code is not 1, which we are expecting
        return err
    }
    err = ExecuteShellCommand(
        cliConfig,
        info.Command,
        []string{"workspace", "new", info.TerraformWorkspace},
        componentPath,
        info.ComponentEnvList,
        info.DryRun,
        info.RedirectStdErr,
    )
    if err != nil {
        return err
    }
}
So this came up recently in a separate thread for a different purpose
Opentofu is considering deprecating workspaces
https://github.com/opentofu/opentofu/issues/2160
Is it possible to use atmos without workspaces, and instead use unique state keys per stack rather than unique workspaces per stack?
We need to add a feature flag to toggle usage of workspaces. Relatively easy fix for us to make.
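A sketch of what such a toggle could look like in atmos.yaml (the key name is illustrative; the flag had not shipped at the time of this thread):
components:
  terraform:
    # hypothetical flag: when false, Atmos would skip `terraform workspace select/new`
    workspaces_enabled: false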
It would be great!
Should I open an issue on github to track the issue?
Sure, let’s do that, so we can notify you when it’s done
I think we can get to it in the next 2 weeks or so.
Sounds good to me, I will create the issue tomorrow then
Describe the Bug
atmos should support the http backend as described here.
Unfortunately, the http backend is not in the list of backends that support workspaces, and internally atmos uses workspaces.
This is limiting for us, as we would like to use atmos with GitLab, which only supports the http backend. This backend, coupled with GitLab (I don’t know other ones that well), expects different addresses to provide isolation instead of using workspaces. Maybe we could use templated variables from atmos to adjust this backend address?
There is an open issue for workspaces support on GitLab here.
Moreover OpenTofu is considering deprecating workspaces opentofu/opentofu#2160
Expected Behavior
Provide a way to set up the http backend with atmos.
Steps to Reproduce
Set up a classic http backend, then run an atmos terraform command. It should fail with workspaces not supported.
Screenshots
No response
Environment
No response
Additional Context
On Slack a feature flag to toggle usage of workspaces has been suggested to solve this issue.
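For illustration, a hedged sketch of a per-stack http backend address in a stack manifest, assuming Atmos generated the http backend the way it does the documented ones and that templating is available in this section (the GitLab URL and project ID are made up):
terraform:
  backend_type: http
  backend:
    http:
      # one state address per stack instead of one workspace per stack (hypothetical)
      address: "https://gitlab.example.com/api/v4/projects/12345/terraform/state/{{ .stack }}"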
Enhancements
Support for a circuit breaker to avoid recursive calls to Atmos custom commands (infinite loop) @Listener430 (#906)
what
• Support for a circuit breaker to avoid recursive calls to Atmos custom commands
why
• Avoid infinite loops
test
Add-on to atmos.yaml
The circuit breaker in use
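For context, a minimal example of the kind of recursion the circuit breaker guards against (the loop command is illustrative):
commands:
  - name: loop
    description: "A custom command that accidentally invokes itself"
    steps:
      # without a circuit breaker, this recursion would never terminate
      - atmos loop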
2025-01-06
Hey guys, is there a way to control the bucket ACL through the cloudfront-s3-cdn module? I saw that the module in question depends on s3-log-storage, which depends on the module s3-buckets, all of which are managed by CloudPosse. I saw that there is an input grants which I believe controls this. I’m talking about the settings here.
@Jeremy White (Cloud Posse)
So, I’d just like to clarify something. Since the cloudfront cdn module can have multiple buckets involved, you are seeking to update the grants on the origin bucket? Or on the logs bucket? or on both?
On the logs bucket.
To be honest, I realized that for the logs bucket I only need the external account associated with the canonical ID for log delivery in AWS. But still wanted to know if it’s possible to have the current AWS account have permissions added as well
there doesn’t appear to be a way to update the grants for the logs bucket in the module. It’s set to only have one: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/main/main.tf#L435
grants = [
to be clear here, you can always try adding more grants outside the module and see if terraform removes them. That is, you might find that additional ACL rules will not be scrubbed when terraform plans for drift/change. I only say that because there seems to be a dynamic block on the ACL rules, which might indicate additional rules will be ignored: https://github.com/cloudposse/terraform-aws-s3-bucket/blob/main/main.tf#L180
for_each = try(length(local.acl_grants), 0) == 0 || try(length(var.acl), 0) > 0 ? [] : [1]
(note, the acl_grants local is formed from the grants var)
if it plays nice, you could just add more grants after running the module to establish the base ACL
2025-01-07
Merge Atmos specific and terraform/helmfile help documentation @samtholiya (#857)
what
• Merge Atmos specific and terraform/helmfile help documentation
why
• Show the native terraform/helmfile commands and Atmos-specific terraform/helmfile commands
Separate Test Fixtures from Examples @milldr (#918)
what
• Move examples/tests to tests/fixtures
why
• Examples are intended to be just that, examples. We should not be including test usages with the examples
Implement atmos about and atmos support commands @Cerebrovinny (#909)
what
• Implement atmos about and atmos support commands
why
• Help users to find information about Atmos and Cloud Posse support
Support
Screenshot 2025-01-04 at 17 42 38
About
Hey!
I would like to be able to pass all extra args of a custom CLI command to the underlying command without defining them in the flags section. Is that possible?
My use case would be to give ansible-playbook some extra flags without defining all of them in the custom CLI of atmos. Do you think it makes sense?
Oh right, I recall we had discussed adding support for -- in custom commands.
This would be a good escape hatch.
Are you familiar with the -- convention in CLI commands?
I am not sure; for me, -- is for long arguments?
Aha, so when it’s not followed by any argument, it signifies the end of arguments. That means that everything after it is to be interpreted as a literal string and not as flags.
I’ve created a task so that we can implement this. I don’t think it’s a lot of work and we had meant to do that before. I think we might get to it in the next few weeks.
Hummmm
My use case is to have a custom command atmos ansible run that calls ansible-playbook.
I would like to be able to add, for example, the argument -vvvvvvv to be passed to ansible-playbook without defining it in the custom CLI.
Or, for example, -l flags to be sent too without specifying arguments.
Because at the moment, every extra argument given to the custom CLI that is not specified ends up in an error and cannot be further processed.
Does it make more sense?
Yes, so if we implement this, you could do:
atmos ansible run -- -vvvvvvv
And then there would be an ENV that contains -vvvvvvv
So you could pass that literal string to ansible or parse it yourself
Note, it would contain all characters after the double hyphen
It would be fine!
Thanks!
Great! for now, you can define an env like ANSIBLE_ARGS and use that until we support it.
ANSIBLE_ARGS="-vvvvvvv" atmos ansible run
then use ANSIBLE_ARGS in your custom command
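A sketch of how that interim workaround could look as a custom command (the playbook path and nesting are illustrative):
commands:
  - name: ansible
    commands:
      - name: run
        description: "Run an ansible playbook"
        steps:
          # $ANSIBLE_ARGS is passed through from the caller's environment
          - ansible-playbook $ANSIBLE_ARGS playbooks/site.yml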
Indeed, that should work
TF_CLI_ARGS_* Handling @milldr (#898)
what
• Added logic and warning messages when the user specifies any TF_CLI_* environment variable, since this may conflict with Atmos behavior
• Append any TF_CLI env vars to the determined TF_CLI env var for terraform shell
why
• When executing terraform shell, we should append the generated var file to the specified env var (if set) rather than overwriting it
• When executing Terraform, var files are already appended (merged). We should add a warning nonetheless
atmos terraform shell
atmos terraform plan
Y’all’s development pace is wild! Keep up the great work and great features and fixes
This was one of @Dan Miller (Cloud Posse)’s first contributions to the atmos core! Nice work
I’ve got 3 more queued
2025-01-08
Add --query (shorthand -q) flag to all atmos describe commands @aknysh (#920)
what
• Added --query (shorthand -q) flag to all atmos describe <subcommand> commands
• Updated CLI command documentation to reflect new querying functionality
• Added examples for new query-based commands
why
• Query (and filter) the results of atmos describe <subcommand> commands using yq expressions
• Before, an external yq or jq binary was required to query/filter the results of atmos describe commands. With the yq functionality now embedded in Atmos, installing the yq binary is not required
examples
atmos describe component vpc -s plat-ue2-prod --query .vars
atmos describe component vpc -s plat-ue2-prod -q .vars.tags
atmos describe component vpc -s plat-ue2-prod -q .settings
atmos describe component vpc -s plat-ue2-prod --query .metadata.inherits
atmos describe stacks -q keys # this produces the same result as the native atmos list stacks command
atmos describe config -q .stacks.base_path
atmos describe workflows -q '.0 | keys'
atmos describe affected -q <yq-expression>
atmos describe dependents -q <yq-expression>
references
• https://mikefarah.gitbook.io/yq • https://github.com/mikefarah/yq • https://mikefarah.gitbook.io/yq/recipes • https://mikefarah.gitbook.io/yq/operators/pipe
Should we move the release notifications to a new channel? We will probably have 20 or more releases this month.
Were your thoughts to move it to atmos-dev?
No, it was too noisy in there. Maybe atmos-releases or atmos-notifications
Update the descriptions of Atmos commands @samtholiya (#845)
what
• Update the descriptions of Atmos commands
why
• Improve clarity and specificity regarding Atmos commands functionality
Introduce a centralized theming system for consistent UI styling @Cerebrovinny (#913)
what
• Introduced a centralized theming system for consistent UI styling • Moved terminal colors to global constants • Added theme-based color and style configurations
why
• Centralize color and style management across the application • Make it DRY and consistent
2025-01-09
Having some state problems with atmos that I’m struggling to resolve…
$ atmos terraform plan msk -s staging-data-msk
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
$ atmos terraform plan msk -s staging-data-msk -- -reconfigure
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Any tips on how to actually fix this problem?
in your atmos.yaml
, do you have this
components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
also, you can always execute atmos terraform shell msk -s staging-data-msk and then inside the shell use the native terraform commands
Ooo…okay.
This command starts a new SHELL configured with the environment for an Atmos component in a stack, to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.
Shell doesn’t work either:
atmos terraform shell msk -s staging-data-msk
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
do you have init_run_reconfigure: true?
Yep:
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
try
atmos terraform init msk -s staging-data-msk -- -reconfigure
That is the first thing I tried. You can see it in the initial code I pasted.
Ah. Crumb. That says plan. One sec.
$ atmos terraform init msk -s staging-data-msk -- -reconfigure
exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
are you using atmos.Component or !terraform.output functions in your stacks?
Yep.
In this one, specifically !terraform.output.
ok, I prob know what the issue is - the !terraform.output function does not take into account init_run_reconfigure: true (it calls a Hashi lib to execute terraform init)
we’ll test it and release a fix in a day or two
for now, you can do the following:
the issue with the backend is not with this component (msk -s staging-data-msk), but with the component in the stack referenced in the !terraform.output func
so you can do
cd <path to that component>
terraform init -reconfigure
Makes sense. Thanks for the help!
let me know if it fixes the issue
@Gabriela Campana (Cloud Posse) please create a task for Atmos:
In atmos.Component and !terraform.output functions, detect the init_run_reconfigure: true setting in atmos.yaml and execute terraform init -reconfigure
Turns out this was a bug with a separate module not providing expected outputs. Thanks for helping me debug, though!
(I found this by switching to atmos.Component values, which at least gave a slightly better error:
$ atmos terraform init msk -s staging-data-msk
template: all-atmos-sections:205:35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
But I’d say that surfacing the error more clearly would be nice. (I know that surfacing errors through the stack is a pain)
Turns out this was a bug with a separate module not providing expected outputs
the component in the !terraform.output func did not have the expected outputs?
(we def need to improve error handling here)
Yes. The !terraform.output / atmos.Component was missing outputs. Adding those missing outputs (fortunately we control that module) fixed the error.
@Andriy Knysh (Cloud Posse) DEV-2921: In atmos.Component and !terraform.output functions, detect the init_run_reconfigure: true setting in atmos.yaml and execute terraform init -reconfigure
Is it common practice to use Terraform child modules inside components? The docs encourage component scope not to be too small, so I was wondering where sensible defaults for more resource-level configurations across stacks should live. We’ve typically used child modules for this purpose, but I’m unsure if the intention behind some of the atmos abstractions is to flatten the root graph?
@Andriy Knysh (Cloud Posse)
You should not use YAML as a replacement for HCL
Learn the opinionated “Best Practices” for using Components with Atmos
See the warning there
Do you mean writing a component that uses multiple child modules?
or using “child modules” as components?
The former, writing a component that uses multiple child modules! I saw those notes in the docs, but was a little confused! It seems like it’s a balancing act.
I was planning to organize each of our different product surfaces into their own components so they can be managed/deployed independently.
I’m not sure there’d be a case in which we’d have a component that was general enough to re-use, but certainly I could foresee HCL modules that we’d reuse (e.g. database, internal api servers, storage buckets).
I noticed that a lot of the CloudPosse components are for specific pieces of infrastructure, for example aws-vpc or aws-aurora-postgres. Something like a Postgres instance that is only used by an API server feels like it’d be too small to warrant splitting into its own component, so we should just let Terraform handle the dependencies rather than splitting it into two lifecycles.
Thanks for your response, it clears a lot up!
But the lifecycle of the API and the database are almost entirely disjoint
That’s why we argue they should be separate
True! Perhaps the stacks should be organized by product. Do end users typically just utilize the output of atmos describe affected, as opposed to using workflows to apply/plan each component in a stack?
Describe affected was implemented mostly for our GitHub actions
We also plan to improve the command line ability to do the same, but it’s not available today
We have a task assigned to implement:
atmos terraform apply --affected
This would under the hood run describe affected and iterate over each component in dependency order and apply it.
Support Relative Path Imports @milldr (#891)
what
• Add support for relative paths in imports, when the path starts with ./ or ../
• Support all path types:
import:
- ./relative_path
- ../relative_path
or
import:
- path: ./relative_path
- path: ../relative_path
why
• Allow less path duplication and support simpler code
Test Cases as a Directory @milldr (#922)
what
• Added support for test cases as a directory rather than a single file
why
• We are adding many more tests and as such the single file is becoming unruly
If we want to create new subnets on an existing vpc, is there an upstream component that can be used?
I looked at this component https://github.com/cloudposse-terraform-components/aws-vpc and couldn’t find a way to add more subnets such as a subnet specifically for databases, for example.
This component is responsible for provisioning a VPC and corresponding Subnets
I also searched for a component with subnet in its name, and no luck
This component is responsible for provisioning a VPC and corresponding Subnets
i was imagining a component wrapper for this module https://github.com/cloudposse/terraform-aws-dynamic-subnets
Terraform module for public and private subnets provisioning in existing VPC
or would you folks be open to setting up a for_each for the cloudposse/dynamic-subnets/aws module directly in the vpc component?
module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
version = "2.4.2"
What do you folks think?
You should/could be able to use this directly: https://github.com/cloudposse/terraform-aws-dynamic-subnets
Terraform module for public and private subnets provisioning in existing VPC
since atmos supports provider & backend generation
Use Component manifests to make copies of 3rd-party components in your own repo.
oh very cool
and you can use vendor manifests the same way. doesn’t have to be a component manifest.
If I vendored the module as a component, I’d still need to hardcode the vpc unless I either
• modified the vendored module as a component to read from the remote state
• Or modified the dynamic-subnets module to allow for a data source to use tags to collect the vpc id, then vendor it in as a component, and then there is no hard coding
Here’s a draft PR that would make it easier to vendor dynamic-subnets as a component without hard coding:
https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/219
Or you can use !terraform.output to retrieve the vpc id
Read the remote state of any Atmos component
Holy moly, that’s great
So something like this?
vpc/example/dynamic-subnets/db:
  metadata:
    component: dynamic-subnets
  vars:
    vpc_id: !terraform.output vpc/example vpc_id
2025-01-10
Embed JSON Schema for validation of Atmos manifests inside Atmos binary @aknysh (#925)
what
• Embed the JSON Schema for validation of Atmos manifests inside Atmos binary • Update docs
why
• Embedding the JSON Schema inside the Atmos binary allows keeping the Atmos code and the schema in sync, and does not force users to specify the JSON Schema in atmos.yaml and monitor it when it needs to be updated
description
Atmos uses the Atmos Manifest JSON Schema to validate Atmos manifests, and has a default (embedded) JSON Schema.
If you don’t configure the path to a JSON Schema in atmos.yaml and don’t provide it on the command line using the --schemas-atmos-manifest flag, the default (embedded) JSON Schema will be used when executing the command atmos validate stacks.
To override the default behavior, configure JSON Schema in atmos.yaml:
• Add the Atmos Manifest JSON Schema to your repository, for example in stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
• Configure the following section in atmos.yaml:
# Validation schemas (for validating atmos stacks and components)
schemas:
  # JSON Schema to validate Atmos manifests
  atmos:
    # Can also be set using 'ATMOS_SCHEMAS_ATMOS_MANIFEST' ENV var, or '--schemas-atmos-manifest' command-line argument
    # Supports both absolute and relative paths (relative to the base_path setting in atmos.yaml)
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
    # Also supports URLs
    # manifest: "https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
• Instead of configuring the schemas.atmos.manifest section in atmos.yaml, you can provide the path to the Atmos Manifest JSON Schema file by using the ENV variable ATMOS_SCHEMAS_ATMOS_MANIFEST or the --schemas-atmos-manifest command line flag:
ATMOS_SCHEMAS_ATMOS_MANIFEST=stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json atmos validate stacks
atmos validate stacks --schemas-atmos-manifest stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
atmos validate stacks --schemas-atmos-manifest https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
I saw a peer hard code subnets directly in the vpc component: one block per unique subnet, like mqtt and rds subnets, directly in the component instead of making it based on inputs. I skimmed through the component best practices and didn’t see anything regarding this. I’d assume that we want to move stuff into yaml and keep less in terraform if we can help it, and if it’s in terraform it should be multi-region and multi-account. I suppose if we’re used to atmos this may be obvious, but it may not be obvious to newcomers.
Any specific line items in the docs we can share to prevent this?
Were they using the overrides pattern?
Learn the opinionated “Best Practices” for using Terraform with Atmos
@RB does this count? https://atmos.tools/best-practices/components#use-parameterization-but-avoid-over-parameterization
Learn the opinionated “Best Practices” for using Components with Atmos
No, they were updating the component directly after downstreaming
Long and short, hard coding subnets entirely defeats the purpose of reusable components. I don’t think we explicitly point out not to do it.
Ah ok
If I were to contribute it, which page would it be best to call that out on ?
Component best practices?
Yes, please do!
Support Component Lock with metadata.locked @milldr (#908)
what
• Added support for metadata.locked with components
• Separate atmos CLI tests into many files: tests/test_cases.yaml -> tests/test-cases/*.yaml
why
• The metadata.locked parameter prevents changes to a component while still allowing read operations. When a component is locked, operations that would modify infrastructure (like terraform apply) are blocked, while read-only operations (like terraform plan) remain available. By default, components are unlocked. Setting metadata.locked to true prevents any change operations.
Lock a production database component to prevent accidental changes
components:
  terraform:
    rds:
      metadata:
        locked: true
      vars:
        name: production-database
:wave: Hello! I’m just getting started w/ Atmos and I’m running into an issue when attempting to use atmos.Component in combination w/ a key that contains a -. When doing (atmos.Component "vpc" .stack).outputs.foo-bar I’m getting bad character U+0022 '-'. I then attempted to use index (atmos.Component "vpc" .stack) "outputs" "foo-bar", however that gives error calling index: index of nil pointer. Any suggestions?
Can you share the yaml instead?
Also, I recommend using !terraform.output instead of {{ atmos.Component }}
Read the remote state or configuration of any Atmos component
atmos.Component literally injects text into the document, not an object.
So if your text is not well formatted for YAML, you’ll probably get these kinds of errors.
Using !terraform.output injects a well-structured object automatically, avoiding the risks
Sure, let me get the YAML. Just a moment. The value I’m trying to get is a string, not an object.
ok
Attempt #1 was:
components:
  terraform:
    application:
      metadata:
        component: organization/application
      vars:
        subnet: '{{ (atmos.Component "vpc" .stack).outputs.public.subnet-id }}'
Then I tried:
components:
  terraform:
    application:
      metadata:
        component: organization/application
      vars:
        subnet: '{{ index (atmos.Component "vpc" .stack) "outputs" "public" "subnet-id" }}'
I attempted to use !terraform.output as you suggested, however the function isn’t variadic, so attempting to do !terraform.output vpc public subnet-id gives invalid number of arguments in the Atmos YAML function: !terraform.output vpc public subnet-id
@Kyle Decot the correct expression would be (if you have the variable public, which is a map with the key subnet-id):
{{ (atmos.Component "vpc" .stack).outputs.public.subnet-id }}
however …
In Go templates, using dashes in keys will cause issues because the template syntax does not natively support accessing keys with dashes directly.
@Andriy Knysh (Cloud Posse) what about the !template.output syntax
use the index function
{{index .person "last-name"}}
for !terraform.output, this is not correct syntax:
!terraform.output vpc public subnet-id
the function accepts a component and output name (2 params), or a component, stack, and output name (3 params)
try this
'{{ index ((atmos.Component "vpc" .stack).outputs.public) "subnet-id" }}'
@Andriy Knysh (Cloud Posse) is the limitation with !terraform.output that you cannot retrieve an attribute from the output?
@Andriy Knysh (Cloud Posse) bumping this up
we are finishing up some changes to improve the !terraform.output function; it will be in the next release
Hi everyone, enjoying using Atmos :heart: I just had a quick clarification question regarding setting the remote_state_backend configuration. Reading the backend configuration docs, it says:
When working with Terraform backends and writing/updating the state, the terraform-backend-read-write role will be used. But when reading the remote state of components, the terraform-backend-read-only role will be used.
Could someone clarify: does this refer to using the remote_state terraform module only, and not, say, if I ran
atmos terraform output my_component -s my_stack
or if I referenced an output in a stack via a yaml function such as
!terraform.output my_component my_stack my_output_value
This is the behavior I’m seeing; I just want to know if I’m not doing something wrong
Atmos supports configuring Terraform Backends to define where Terraform stores its state, and Remote State to get the outputs of a Terraform component, provisioned in the same or a different Atmos stack, and use the outputs as inputs to another Atmos component.
When working with Terraform backends and writing/updating the state, the terraform-backend-read-write role will be used. But when reading the remote state of components, the terraform-backend-read-only role will be used
yes, it applies only when using the remote-state module
and you specify the backend role in backend and the remote state backend role in remote_state_backend
right, gotcha - It’s what I thought but just wanted to check
when executing !terraform.output, it just executes terraform output using the already-assumed role or the role configured in the TF module - so this function does not know anything about roles and depends on the role config in the corresponding TF component (or, if not configured in the TF component, the caller role will be used by TF)
@David Elston a correction: the !terraform.output YAML function (and the atmos.Component template function) does generate the backend file before executing terraform output, so the configured backend will be used
terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      role_arn: "arn:aws:iam::xxxxxxxx:role/terraform-backend-read-write"
2025-01-11
Configure and use GitHub auth token when executing atmos vendor pull commands @Listener430 @aknysh (#912)
what
• Use the GitHub token when executing atmos vendor pull commands
• Introduce environment variables GITHUB_TOKEN and ATMOS_GITHUB_TOKEN to specify a GitHub Bearer token for authentication in GitHub API requests
• Add custom GitHub URL detection and transformation for improved package downloading
why
• When pulling from private GitHub repos, GitHub has a rate limit for anonymous requests (which made atmos vendor pull fail when vendoring from private GitHub repositories)
description
In the case when either the ATMOS_GITHUB_TOKEN or GITHUB_TOKEN variable is configured with a GitHub Bearer Token, vendor files are downloaded with go-getter using the token. The token is put into the URL, so requests to the GitHub API are not subject to the anonymous-user rate limits
test
github.com/analitikasi/Coonector.git is a private repo
- component: "weather"
  source: "github.com/analitikasi/Coonector.git//quick-start-simple/components/terraform/{{ .Component }}?ref={{.Version}}"
  version: "main"
  targets:
    - "components/terraform/{{ .Component }}/{{.Version}}"
  tags:
    - demo
We’re investigating why a binary was not created with this release. Stay tuned.
cc @Igor Rodionov
This release will be skipped as there was a problem in main at this commit. Please use 1.146.1 instead.
2025-01-13
:robot_face: Automatic Updates
Bump github.com/aws/aws-sdk-go-v2/config from 1.28.9 to 1.28.10 @dependabot (#932)
Bumps github.com/aws/aws-sdk-go-v2/config from 1.28.9 to 1.28.10.
Commits
• 7a7d202
Release 2025-01-10
• fdaa7e2
Regenerated Clients
• 52aba54
Update endpoints model
• 7263a7a
Update API model
• See full diff in compare view
I’m having some trouble figuring out why vendoring isn’t working. I can’t get any related logs other than Failed to vendor 1 components. I’ve tried setting the various log level parameters (via atmos.yaml, environment variable, and --logs-level).
I’m attempting to vendor from a private OCI registry, so my guess would be that go-containerregistry is having trouble getting credentials
Dry run shows:
✓ cloud-sql-instance (latest)
Done! Dry run completed. No components vendored.
I am a bit confused. With a dry run nothing should be vendored.
What happens without dry run?
This is all the information I get with atmos vendor pull
$ atmos vendor pull
x cloud-sql-instance (latest)
Vendored 0 components. Failed to vendor 1 components.
Aha, ok - that’s helpful
Is this a public oci image?
It’s private - my brief look at go-containerregistry made it seem like my Docker-configured credHelpers should be used to authenticate
apiVersion: atmos/v1
kind: AtmosVendorConfig
spec:
  sources:
    - component: "cloud-sql-instance"
      source: "oci://us-central1-docker.pkg.dev/org/repo/default:{{.Version}}"
      version: latest
      targets:
        - components/terraform/vendor/cloud-sql-instance
      included_paths:
        - '**'
- '**'
My guess is it’s Auth related and I don’t know if we’ve tested on private repo
That’d be my guess, too. Should these installPkgMsgs end up in the logs? https://github.com/cloudposse/atmos/blob/main/internal/exec/vendor_model.go#L289
if err := processOciImage(atmosConfig, p.uri, tempDir); err != nil {
If you have a chance to take a look and fix it, we’ll gladly merge the PR :-)
Otherwise I will add a task to fix it, but not sure when we can get to it
The error message is only printed if we’re moving on to install another component. In my case I only have one vendored component, so the completion message is printed instead
I’m also not sure I understand why all the logging is guarded by !isTTY checks. I would think we’d want logs even if we’re displaying the terminal UI?
here’s a pr- https://github.com/cloudposse/atmos/pull/936
what
Adds an error message to the output of atmos vendor pull for the last component in a vendor manifest
references
closes #935
Hrmmm, yes, that is probably because it messes up the UI, but that just means we need to find a better way to handle it.
Maybe we should force a log file, if UI, and direct the user to the log file in the event of errors
Or in the UI we should print it below the component before proceeding to the next component
Is it possible to generate a required_providers block via atmos?
{
  "terraform": {
    "required_providers": {
      "test": {
        "source": "test"
      }
    }
  }
}
As it is done now for the backend block
Not at this time; only providers and backends. Can you describe your use case?
Is it possible to generate code from atmos itself? We need an analog of terragrunt’s generate function to override the configuration.
@Petr Dondukov this is definitely a common request, especially from users migrating from other toolchains. I wouldn’t rule it out, and we’ve definitely loosened our stance on this (we used to be starkly against it), but we have added some conventional ways for providers and backends. I think if we were to implement generation, we would not want to take the same approach as for backends and providers. We do not want to replace HCL with YAML.
Could you help share some of your use-cases?
Note, we have over 165 components that we’ve written and the only code generation we rely on is provider generation and backend generation. Everything else we’ve accomplished with atmos, without code generation.
Collection of Cloud Posse Terraform Components used with the Cloud Posse Reference Architecture
Also, if we showed you how you could use terragrunt with atmos, would that be a good stop-gap?
Yes. We’re using terragrunt right now, our infrastructure is complex, and we’d like to simplify it. I am currently looking for a way to redefine provider versions (or define them from scratch), as we have historically had a parent and child module approach and have had to completely remove the provider block from hundreds of our terraform modules. We generate the provider block via terragrunt, and that’s how we have the infrastructure built - not the best solution, of course. Also, even without terragrunt we over-utilize child modules in other parent modules; that’s why it’s necessary for us.
Also, I found that we need to define in terraform modules the configuration for name_template for atmos to work, or I’m passing this through the cloudposse/context provider for the whole stack, like this:
name_template: "{{.providers.context.values.tenant}}-{{.providers.context.values.environment}}-{{.providers.context.values.resource.name}}-{{.providers.context.values.region}}-{{.providers.context.values.resource.number}}"
But without specifying the source = cloudposse/context parameter in the required_providers block, terraform gives an error that it can’t find the context provider in the registry. An option would be to describe this block in each terraform module, but we want to retain versatility.
In general, the first one is not very critical. Now I’m more interested in how to pass a source for the cloudposse/context provider without fixing the terraform modules themselves.
@Erik Osterman (Cloud Posse)
Provider is definitely in the registry. Have you seen this demo? https://github.com/cloudposse/atmos/tree/main/examples/demo-context
Yes, I understand. However, in order to use the context provider we need to prescribe it in the terraform module, to pass the source = "cloudposse/context" parameter. We don’t want to add this to the terraform module itself, so as not to lose versatility, but to make do with the atmos tool.
We don’t want to add this to the terraform module itself, so as not to lose versatility, but to make do with the atmos tool.
Just curious, is this because the context provider is perceived as an atmos feature?
If the resources in the module do not use the resources provided by the context provider, then there’s no reason to use the context provider.
The context provider is a modern alternative to the terraform-null-label module (or any naming module, as others exist now)
Re: name_template: this is how the “slug” or “id” is generated for atmos components, so you can refer to it with a simple, programmatically consistent name.
So this setting:
name_template: "{{.providers.context.values.tenant}}-{{.providers.context.values.environment}}-{{.providers.context.values.resource.name}}-{{.providers.context.values.region}}-{{.providers.context.values.resource.number}}"
you can configure that based on any vars instead.
So let’s say all of your root modules defined “name”, “location”, “account”, then you can define a name template liek this:
name_template: "{{.vars.account}}-{{.vars.location}}-{{.vars.name}}-{{.component_name}}"
This assumes every root module has a HCL variable defined for account, location, name.
Not sure if I’m reading between the lines correctly :smiley: it sounds like maybe the concern was needing to use .providers.context, and that’s not needed.
Unfortunately, we don’t use these variables. And if we write them at the stack level, terraform writes an error of unused variables
Ok, so I think we’re getting somewhere
So basically, you need to be able to pull these from somewhere else, that’s neither providers nor vars.
I think there’s a solution for this. The settings field of a component is a free-form map.
You should be able to use .settings instead of .providers
name_template: "{{.settings.context.account}}-{{.settings.context.location}}-{{.settings.context.name}}-{{.component_name}}"
@Andriy Knysh (Cloud Posse) can you confirm
and by the way, .account and .location and .name were totally arbitrary. You can establish whatever convention you like.
yes, the settings section is a free-form map, so it can be used to “store” data/metadata and then use it in other configs. Since settings is a free-form map, you can use any subsection under it (e.g. settings.context)
and in the name_template you can use any Atmos section, not only vars or providers
For the Go template tokens, you can use any Atmos section (e.g. vars, providers, settings)
so this is perfectly fine
name_template: "{{.settings.context.account}}-{{.settings.context.location}}-{{.settings.context.name}}"
settings:
  context:
    account: xxxx
    location: yyyy
    name: zzzz
(it will not define any new vars and will not send them to the TF components)
@Petr Dondukov ^
let us know if we can help with this or you need more details
Thanx a lot! That is exactly what I need!
2025-01-14
Hey! I am pretty new to atmos, so sorry for the beginner questions. Anyhow, I am a little confused about how templating works for the yaml files. First of all, I do not seem to be able to have them evaluated. I have a pretty simple stack config like this:
import:
  - deploy/dev/_defaults.yaml
  - catalog/tfstate-backend.yaml
vars:
  tags:
    Stack: tfstate-backend
    test: !template '{{ .stack }}'
    test2: !exec echo value2
I would expect the test key in the tags variable to be templated (somehow; I am not exactly sure what I would see there), but it does not get evaluated. On the other hand, the !exec function is evaluated nicely. This is the output of atmos describe stacks --components tfstate-backend:
dev:
  components:
    terraform:
      tfstate-backend:
        ...
        stack: dev
        vars:
          enable_server_side_encryption: true
          enabled: true
          force_destroy: false
          name: tfstate
          prevent_unencrypted_uploads: true
          region: us-east-1
          stage: dev
          tags:
            Managed-By: Terraform
            Stack: tfstate-backend
            test: '{{ .stack }}'
            test2: |
              value2
        workspace: dev
So can you help me figure out how I can get these templates evaluated? A complete example would be nice, where I could see something like this work; unfortunately I did not find anything similar in the provided examples.
Also, what I would actually like to figure out is how I could reference a variable defined at any level (global, or any other level) and use it to construct values, e.g. to pass to the component (like defining some prefix at an upper level and a list of items at some lower level, and passing the combined values to the component) in the stack configuration. I was previously using terragrunt a lot, and there it is very straightforward to do so, because you can just reference any object and use the same functions as I would use in terraform (and some more). Can you help me with some example of how atmos is designed to handle such a use case? I found this in the docs, which seems to be similar to what I am looking for: https://atmos.tools/core-concepts/stacks/yaml-functions/template#advanced-examples But this example is not very complete, and not being able to evaluate the templates leaves me stuck on how to move forward. It would also be nice to have a reference of what exactly the context of this template evaluation is (e.g. this was the only place I found in the documentation where .settings is referenced in a template, so I wonder what else is available)
Thanks for the attention and for reading all of this ^^ :)
Handle outputs containing maps or lists returned from the atmos.Component template function
Looks like templating is disabled in your atmos.yaml. It’s disabled by default
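For anyone hitting the same thing, a minimal sketch of enabling it in atmos.yaml (the sprig and gomplate toggles are optional):
templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true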
I will get back to this later. In general, I recommend refactoring questions into individual messages as they are easier to answer individually with threads for each topic.
thanks Erik, that is right, I had a typo. I had this in my config:
tempates:
settings:
enabled: true
Also one more thing I find a little strange. Previously, when I was working with terraform and terragrunt, one of the things we thought important was being able to explicitly define the list of variables that are passed to a specific terraform root module (which are called components in Atmos’s case). The reason for this is that I want to see in the configuration which components depend on which value, so I can be sure about the blast radius of a change: when I change a variable (or delete it, for that matter) I know which components are referencing it (and finding them is really just a cmd+shift+f in my editor). As I see it, with Atmos doing deep merging of variables it was not designed this way; any common vars coming from a _defaults file or a mixin are passed to every single component, so it is hard to see the exact dependencies (not to mention issues with having two components which expect some input in a different way under the same variable name). I could probably devise a setup using templates to handle this, but I would feel like I am not using Atmos the way it was intended to be used. So I would love some clarification on this, or to hear some opinions on how you think this should be handled. thanks again :)
Look into multiple inheritance and abstract components.
For detecting changes, we have atmos describe affected, which compiles the fully deep-merged configurations and then compares them between two git refs.
Right now, this returns a large amount of JSON data and is predominantly concerned with supporting our GitHub actions. We intend to present a much simpler command with human-friendly output :-)
I expect that we’ll get to it in Q1
I can see why explicitly referencing variables the way you do in Terragrunt is convenient for the use-case you described. I think what would be helpful is if we wrote up some docs specifically for users coming from Terragrunt, because atmos is quite a different way of approaching the problem. Terramate, for example, is very similar to Terragrunt and is a simpler migration for Terragrunt users. Anyways, I will have to pause here and follow up throughout the day.
Thanks Erik again for the answer. However, as far as I understand (but I might be wrong here), the diff of the deep-merged config does not help. My concern is that we just pass all the vars that are merged in any way (through inheritance or import, etc.) to the component, but some of them might not actually be used by the component; they are there for others only. Maybe in such a situation the imports and inheritance were not used properly, but you cannot guarantee that everyone will do such a good job, so in my opinion explicit passing of inputs for the root modules is always better (not instead of the inheritance and imports, but together with them).
So as an example, I have vars a, b and c defined somehow. component1 might be using a and b, component2 might be using a and c, and there might be a 3rd one not using any of these at all, but as I understand it, all of these vars will be passed to all 3 components in the stack. So how will I know (other than running a plan) which component will be affected if I change var c? I understand that in many cases this is not going to be a problem, but once you have a complex system with lots of components (20+ is a lot already, I think), variables, and environments, this can make the difference between having a big ball of mud and a system which you can actually understand.
Anyway it might be just some additional convention I might use, I would just like to see the options and how exactly this was intended to be used :)
running 80+ components, and a large number of atmos stacks:
• most variables are nicely scoped per component, so unless you put stuff in _defaults, it should not be a real issue
• we are keeping the atmos describe stacks output (postprocessed by yq) in our repo, as well as the tfvars (see the sketch below). That way we have full visibility on change impact. (we use the files also for various other reasons, incl more advanced opa policies)
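A rough sketch of that workflow (file paths are illustrative, not Hans’s actual setup):
# snapshot the fully merged stack configs
atmos describe stacks --format yaml | yq . > stacks-snapshot.yaml
# write out the tfvars files that would be sent to each component
atmos terraform generate varfiles --file-template "varfiles/{stage}-{component}.tfvars.json"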
Thanks Hans for the answer. I think that might be the key: having the variables scoped to the component, thanks for pointing it out. I will still need to figure out exactly how to organize things to my liking, but at least I think I am starting to understand the idea.
Yes, what @Hans D points out is the right convention. I guess we haven’t really documented that anywhere. In a related way, I think it would be neat if atmos had an optional mode to validate that variables passed to terraform are acceptable by the component. This would reduce the error-prone nature.
@Andriy Knysh (Cloud Posse) would that be complicated? I know you already load the variables of the HCL component somewhere in atmos.
@Dave in Atmos, all variables are scoped to a particular component (inheritance or imports, should not matter) - at least this is how it should be configured.
If you are using global vars (e.g. scoped to the entire stack), then yes, all of them will be sent to all TF components used in the stack - but it’s not how it should be configured - no globals (except the context variables like stage, environment etc., which are used in all of Cloud Posse’s components)
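To illustrate the convention (the component name and values are made up), keep vars under the component rather than in the stack-global vars section:
# stacks/catalog/vpc.yaml (illustrative)
components:
  terraform:
    vpc:
      vars:
        cidr_block: 10.0.0.0/16  # scoped to the vpc component only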
you can share your config, we can take a look
@Andriy Knysh (Cloud Posse) not scoped to a particular component if the global vars are used; those are then inherited by all components
So as an example, I have var a, b and c defined somehow. component1 might be using a and b, component 2 might be using a, c and there might be a 3rd one not using any of these at all, but as I understand all of these vars will be passed to all 3 components in the stack
i might be not understanding the problem completely, but it sounds like those vars are “global”
the question prob is: how do we prevent declaring vars in Atmos manifests for Atmos components that are not used by the corresponding TF components? we can add this to Atmos (it already has all the information about the configured vars in the Atmos component and all the vars used by the TF component/module)
we are going to add the validation to the atmos validate stacks and atmos validate component commands: if a variable was configured in Atmos stack manifests but it’s not used by the TF component(s), show an error
will be in one of the new releases
@Dave ^
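Until that validation lands, a rough manual check is possible. This is just a sketch, assuming yq is installed and the component lives under components/terraform/vpc:
# vars Atmos will send to the component in the dev stack
atmos describe component vpc -s dev | yq '.vars | keys'
# variables the TF component actually declares
grep -h 'variable "' components/terraform/vpc/*.tf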
that sounds pretty nice, and thanks a lot for explaining all this
Hi all. I saw some questions related to remote state. Are there docs on the decision to use remote state vs terraform data sources? I skimmed this document and the data sources that it references are related to stack yaml instead of terraform.
I think @Matt Calhoun is working on some updates to the docs
Everything related to sharing data, including remote state, will be consolidated in the share data section
We should probably have a high-level overview, since there are different methods with different trade-offs
next thing I am struggling with: I am trying to use the vpc and the vpc-flow-log-bucket components, but somehow I cannot make it work. I was following the example (in a simplified way, I have a pretty simple structure), but I get the following error:
╷
│ Error: stack name pattern '{stage}' includes '{stage}', but stage is not provided
│
│ with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│ on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
Unfortunately I can’t seem to figure out what is wrong, or what I should be checking… So if anyone could give a hint that would be great :)
so this is happening when the vpc-flow-log-bucket is already deployed, and I am trying to create a plan for the vpc component.
Just to confirm, are you trying the advanced quick start?
@Dave please review this doc https://atmos.tools/core-concepts/components/terraform/remote-state/#atmos-configuration
Terraform natively supports the concept of remote state and there’s a very easy way to access the outputs of one Terraform component in another component. We simplify this using the remote-state module, which is stack-aware and can be used to access the remote state of a component in the same or a different Atmos stack.
if the vpc-flow-log-bucket is already deployed, vpc uses the remote-state module to get outputs from the vpc-flow-log-bucket component, which uses the utils provider.
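For reference, the wiring inside the vpc component typically looks roughly like this (a sketch; the version is illustrative, pin to a released version):
# components/terraform/vpc/remote-state.tf (sketch)
module "vpc_flow_logs_bucket" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "x.y.z" # illustrative, pin to a released version

  component = "vpc-flow-logs-bucket"
  context   = module.this.context
}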
and since terraform executes the provider code from the component/module directory, the provider can’t find atmos.yaml there - it should be in one of the known locations (or provided via the ENV vars)
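For example (the paths are illustrative), the ENV vars in question are ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH:
export ATMOS_CLI_CONFIG_PATH=/path/to/repo # directory containing atmos.yaml
export ATMOS_BASE_PATH=/path/to/repo       # base path for components/ and stacks/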
Yes, I am trying to follow the advanced quick start, I just have it a little simplified, because I only have the stage in the name pattern (since I have a single level hierarchy).
unfortunately I did not find anything new in that guide, I think I followed everything described over there, but I still get this error
I already had the env vars set (without them the error was completely different)
Are you using absolute or relative paths?
(Also, this particular rough edge will be addressed in a forthcoming update to atmos; PR is in progress)
I’m not 100% sure if this is conceptually a good question, but is there a notion of reserved/official/already-used-by-Atmos keys in a component’s settings? Is there a systematic way to find out what these keys are? For example, it looks to me like github and integrations might be examples of such keys?
Fair question. It would be in the atmos schema.json
We should have this list though. Let me see what we can do.
Thanks for the info @Erik Osterman (Cloud Posse)!
The use case is that I want to add some info to stacks but I want to be sure doing so won’t impact Atmos. Is there a designated path for such custom info?
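One convention that might help in the meantime (illustrative only, not an official mechanism; the my_org key is hypothetical): since settings is free-form, nest custom info under a namespace of your own so it can’t collide with keys Atmos uses:
components:
  terraform:
    myapp:       # hypothetical component
      settings:
        my_org:  # hypothetical namespace for custom info
          owner: platform-team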