#atmos (2024-12)
2024-12-01
2024-12-02
Hi All! Getting started with Atmos and have some questions about the bootstrapping. I'd appreciate some guidance.
I am a little bit confused about the deployment of the tfstate-backend component with the temporary SuperAdmin IAM user, before setting up the organization. I am using the {tenant}-{environment}-{stage} naming convention.
• If I want to have a separate state bucket/DynamoDB table for each account, where should I deploy the tfstate-backend component? Currently it is deployed at core-gbl-root, but that does not quite make sense to me, since the DynamoDB table's region is eu-west-1.
• Where should I deploy the account and account-map components? The management account? If so, what are the tenant, environment, and stage for the management account?
Thanks!
Current Directory Structure:
.
├── README.md
├── atmos.yaml
├── components
│ └── terraform
│ ├── account
│ │ ├── context.tf
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ ├── providers.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ ├── account-map
│ │ ├── account-info.tftmpl
│ │ ├── context.tf
│ │ ├── dynamic-roles.tf
│ │ ├── main.tf
│ │ ├── modules
│ │ │ ├── iam-roles
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ ├── providers.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── versions.tf
│ │ │ ├── roles-to-principals
│ │ │ │ ├── README.md
│ │ │ │ ├── context.tf
│ │ │ │ ├── main.tf
│ │ │ │ ├── outputs.tf
│ │ │ │ └── variables.tf
│ │ │ └── team-assume-role-policy
│ │ │ ├── README.md
│ │ │ ├── context.tf
│ │ │ ├── github-assume-role-policy.mixin.tf
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ ├── outputs.tf
│ │ ├── providers.tf
│ │ ├── remote-state.tf
│ │ ├── variables.tf
│ │ └── versions.tf
│ └── tfstate-backend
│ ├── context.tf
│ ├── iam.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── providers.tf
│ ├── variables.tf
│ └── versions.tf
├── stacks
│ ├── catalog
│ │ ├── account
│ │ │ └── defaults.yaml
│ │ ├── account-map
│ │ │ └── defaults.yaml
│ │ └── tfstate-backend
│ │ └── defaults.yaml
│ ├── mixins
│ │ ├── region
│ │ │ ├── eu-west-1.yaml
│ │ │ └── global.yaml
│ │ ├── stage
│ │ │ └── root.yaml
│ │ ├── tenant
│ │ │ └── core.yaml
│ │ └── tfstate-backend.yaml
│ └── orgs
│ └── dunder-mifflin
│ ├── _defaults.yaml
│ └── core
│ ├── _defaults.yaml
│ └── root
│ ├── _defaults.yaml
│ └── eu-west-1.yaml
└── vendor.yaml
22 directories, 55 files
Desired Organization Structure
[ACC] Root/Management Account (account name: dunder-mifflin-root)
│
├── [OU] Security
│ ├── [ACC] security-log-archive
│
├── [OU] Core
│ ├── [ACC] core-monitoring
│ └── [ACC] core-shared-services
│
└── [OU] Workloads
├── [OU] Production
│ └── [ACC] workloads-prod
│
└── [OU] Non-Production
└── [ACC] workloads-non-prod
@Dan Miller (Cloud Posse)
• If I want to have a separate state bucket/DynamoDB table for each account, where should I deploy the tfstate-backend component? Currently it is deployed at core-gbl-root, but that does not quite make sense to me, since the DynamoDB table's region is eu-west-1.
The default reference architecture has a single bucket/table, but it's very common to add additional deployments to separate state. However, I'd recommend keeping all of the state component deployments in the same stack, core-gbl-root. For example, something like this:
# stacks/orgs/acme/core/root/us-east-1/baseline.yaml
...
components:
terraform:
#
# Each tenant has a dedicated Terraform State backend
#
tfstate-backend/core:
metadata:
component: tfstate-backend
inherits:
- tfstate-backend
vars:
attributes:
- "core"
tfstate-backend/plat:
metadata:
component: tfstate-backend
inherits:
- tfstate-backend
vars:
attributes:
- "plat"
then in the defaults for each tenant, you can specify the given backend configuration like this:
# stacks/orgs/acme/plat/_defaults.yaml
terraform:
# Valid options: s3, remote, vault, etc.
backend_type: s3
backend:
s3:
bucket: acme-core-use1-root-tfstate-plat
dynamodb_table: acme-core-use1-root-tfstate-plat-lock
role_arn: arn:aws:iam::xxxxxxxxxx:role/acme-core-gbl-root-tfstate-plat
encrypt: true
key: terraform.tfstate
acl: bucket-owner-full-control
region: us-east-1
remote_state_backend:
s3:
# This ensures that remote-state uses the role_arn even when
# the backend has role_arn overridden to be set to `null`.
role_arn: arn:aws:iam::xxxxxxxxxx:role/acme-core-gbl-root-tfstate-plat
• Where should I deploy the account and account-map components? The management account? If so, what are the tenant, environment, and stage for the management account?
Yup, the management account, specifically the core-root account. The root of the organization needs to manage the accounts in that org
thanks for the reply, it will help a lot. 2 follow-up questions:
• any method for making the bucket/table name dynamic?
• if I am based in Europe and eu-west-1 is the closest region, should I still deploy all this to us-east-1?
• any method for making the bucket/table name dynamic?
Yup, in my example see the inherits option. That way I can define a default, abstract tfstate-backend component configuration, and then only specify what's different in the instantiated tfstate-backend/plat component
See this page: https://atmos.tools/design-patterns/multiple-component-instances
Multiple Component Instances Atmos Design Pattern
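For reference, a minimal sketch of what that abstract base component might look like in the catalog (the file path and vars here are assumptions, not taken from the thread):
# stacks/catalog/tfstate-backend/defaults.yaml (hypothetical path)
components:
  terraform:
    tfstate-backend:
      metadata:
        # Abstract components can be inherited from, but cannot be provisioned directly
        type: abstract
      vars:
        enabled: true
        name: tfstate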
• if I am based in Europe and eu-west-1 is the closest region, should I still deploy all this to us-east-1?
No not at all! Choose the region that makes sense for you
Hey! FYI, if anyone uses Mise/asdf to manage the Atmos version + Renovate to handle dependency updates, I recently created a PR adding support for Atmos in the respective Renovate managers. It was already merged and deployed by Renovate. I see it works for my repos, so if you use the same setup, it should take care of the updates, too. See docs: https://docs.renovatebot.com/modules/manager/mise/ https://docs.renovatebot.com/modules/manager/asdf/
Anything we can add here? https://atmos.tools/install#other-ways-to-install
There are many ways to install Atmos. Choose the method that works best for you!
…some examples of your configuration maybe
No need, I think, as I added asdf and Mise tabs a while ago. They are available in the link you provided. Renovate will take care of updating .tool-versions/.mise.toml respectively.
I have a question about abstract components: should they be ignored in the UI, since they are not real components?
I have a defaults.yaml file in the catalog that is abstract, and some real components inherit from it, but I see both of them in the Atmos UI. Is this intended? Shouldn't abstract components be hidden from this UI?
currently we show all the components in the Atmos UI.
I agree we don't need to show the abstract components to execute commands like atmos terraform plan/apply, but they can be used in commands like atmos describe component
we'll probably disable the abstract components in the UI
@Andriy Knysh (Cloud Posse) we display abstract in the UI? I think we should remove that.
But add it to atmos list components
As one of the columns
A big +1 to this. It gets really confusing when you follow the pattern below: all components from configuration.yaml are shown even though they are not enabled in deploy.yaml
configuration.yaml
components:
terraform:
sample/component1:
metadata:
component: component1
type: abstract
vars:
enabled: false
sample/component2:
metadata:
component: component2
type: abstract
vars:
enabled: false
sample/component3:
metadata:
component: component3
type: abstract
vars:
enabled: false
deploy.yaml
import:
- catalog/configuration.yaml
components:
terraform:
sample/component1:
metadata:
component: component1
type: real
vars:
enabled: true
Thanks!!
2024-12-03
Question about remote-state and atmos.Component for sharing data between components/stacks. Basically, trying to share VPC and Subnet IDs.
I see this comment saying template functions should rarely be used https://sweetops.slack.com/archives/C031919U8A0/p1732031315369959?thread_ts=1732030781.552169&cid=C031919U8A0 but then in the documentation template functions are the "new and improved way to share data". So which is it? https://atmos.tools/core-concepts/components/terraform/remote-state/
I'm hitting strange errors with atmos.Component where locally values are grabbed correctly, but in TFC the outputs are just null, despite existing in state. Or the first run grabs the IDs, but then in subsequent runs the IDs are lost. All troubleshooting has failed - I'm looking at remote-state, but this doesn't quite seem like the best option for passing values
How are others passing values between stacks or components? I have several hundred VPCs, so hardcoding values is not an option. I could use data calls against the VPC name tag, but I'm trying to avoid them.
@Erik Osterman (Cloud Posse)
@Andrew Chemis, great questions. Here's what we've realized about the limitations of overusing atmos.Component in template functions:
- Performance issues: (a) Calling Terraform outputs is inherently slow; it requires initializing the root module, including downloading the providers. (b) Template functions are always processed, even if the section of the stack configuration they reference isn't used. This can significantly impact performance.
- Permission requirements: (a) Running Terraform outputs requires sufficient permissions to access the outputs, which also means full access to the Terraform state. (b) This can become a bottleneck, especially in environments with hundreds of VPCs or deeply nested stacks.
- Provisioning dependency: Terraform outputs will throw an error if the root module hasn't been provisioned yet.
- Interpolation issues: Working with lists and maps is cumbersome.
In short, while atmos.Component can be useful, its current implementation introduces challenges around performance, permissions, and reliability. A better solution is needed to address these issues effectively.
We're working on introducing improvements to this that vastly expand the power of Atmos.
First, we’re introducing native support for Atmos Functions using YAML explicit types (tags). These are not evaluated by the Go template engine, and therefore, we have full control over when/how they work.
Then, we're implementing the ability to efficiently store/retrieve values from a pluggable backend. This solves the performance issues, since it does not require terraform init; it solves the permission requirements, since reading the backend doesn't require the same permissions context; and it solves the provisioning dependency, since it will not error if a value is not defined and supports default values. It also doesn't rely on interpolation with Go templates, and instead uses explicit functions.
what
• Introduce Atmos YAML functions
• Update docs
• https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/
• https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/template/
• https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/exec/
• https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/terraform.output/
why
Atmos YAML Functions are a crucial part of Atmos stack manifests. They allow you to manipulate data and perform operations on the data to customize the stack configurations.
Atmos YAML functions are based on YAML Explicit typing and user-defined Explicit Tags (local data types). Explicit tags are denoted by the exclamation point ("!") symbol. Atmos detects the tags in the stack manifests and executes the corresponding functions.
NOTE: YAML data types can be divided into three categories: core, defined, and user-defined. Core are ones expected to exist in any parser (e.g. floats, ints, strings, lists, maps). Many more advanced data types, such as binary data, are defined in the YAML specification but not supported in all implementations. Finally, YAML defines a way to extend the data type definitions locally to accommodate user-defined classes, structures, primitives, and functions.
Atmos YAML functions
• The !template YAML function can be used to handle template outputs containing maps or lists returned from the atmos.Component template function
• The !exec YAML function is used to execute shell scripts and assign the results to the sections in Atmos stack manifests
• The !terraform.output YAML function is used to read the outputs (remote state) of components directly in Atmos stack manifests
NOTE: You can use Atmos Stack Manifest Templating and Atmos YAML functions in the same stack configurations at the same time. Atmos processes the templates first, and then executes the YAML functions, allowing you to provide the parameters to the YAML functions dynamically.
Examples
components:
terraform:
component2:
vars:
# Handle the output of type list from the `atmos.Component` template function
test_1: !template '{{ toJson (atmos.Component "component1" "plat-ue2-dev").outputs.test_list }}'
# Handle the output of type map from the `atmos.Component` template function
test_2: !template '{{ toJson (atmos.Component "component1" .stack).outputs.test_map }}'
# Execute the shell script and assign the result to the `test_3` variable
test_3: !exec echo 42
# Execute the shell script to get the `test_label_id` output from the `component1` component in the stack `plat-ue2-dev`
test_4: !exec atmos terraform output component1 -s plat-ue2-dev --skip-init -- -json test_label_id
# Execute the shell script to get the `test_map` output from the `component1` component in the current stack
test_5: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_map
# Execute the shell script to get the `test_list` output from the `component1` component in the current stack
test_6: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_list
# Get the `test_label_id` output of type string from the `component1` component in the stack `plat-ue2-dev`
test_7: !terraform.output component1 plat-ue2-dev test_label_id
# Get the `test_label_id` output of type string from the `component1` component in the current stack
test_8: !terraform.output component1 {{ .stack }} test_label_id
# Get the `test_list` output of type list from the `component1` component in the current stack
test_9: !terraform.output component1 {{ .stack }} test_list
# Get the `test_map` output of type map from the `component1` component in the current stack
test_10: !terraform.output component1 {{ .stack }} test_map
Summary by CodeRabbit
Release Notes
• New Features
• Introduced new YAML functions: !exec
, !template
, and !terraform.output
for enhanced stack manifest capabilities.
• Added support for custom YAML tags processing in Atmos configurations.
• Enhanced configuration options for Atlantis integration, allowing for more flexible setups.
• Updated Atmos and Terraform versions in the Dockerfile for improved functionality.
• Introduced new constants related to Atmos YAML functions for better configuration handling.
• Documentation Updates
• Enhanced documentation for using remote state in Terraform components.
• Updated guides for the atmos.Component
function and the new YAML functions.
• Clarified Atlantis integration setup options and workflows.
• Improved explanations on handling outputs and using the new YAML functions.
• Added documentation for new functions and updated existing guides for clarity.
• Dependency Updates
• Upgraded various dependencies to their latest versions for improved performance and security.
Hopefully this will be ready in about 1-2 weeks time.
Thank you! That's great news to see those changes! I think I will use data calls for now and implement those fixes later. Now I don't need to troubleshoot the failures any further
:wave: Question about the
remote_state_backend_type: static
remote_state_backend:
static:
params outlined in the Brownfield considerations doc. Am I right in thinking that this can be used to mimic the outputs of the module run, such that I could bypass applying certain modules and just statically define the outputs of, say, the account-map module and not actually apply it, thereby leveraging your shared module logic without having to reverse-engineer everything to completion?
Example:
remote_state_backend_type: static
remote_state_backend:
static:
artifacts_account_account_name: "artifacts"
audit_account_account_name: "myaccount"
aws_partition: "aws"
dns_account_account_name: "myaccount"
full_account_map:
dev-data: "1111111111111"
myaccount: "1111111111111"
sandbox: "1111111111111"
security: "1111111111111"
sre: "1111111111111"
could I define that under my account-map component invocation and have other modules (which use the account-map state lookup) read that in?
There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.
@Josh Simmonds yes, correct
by using the static backend, you can "mimic" the component as if it was provisioned w/o actually provisioning it
Fantastic! I’ll keep playing with that and see if I can make it work for my use case!
we made it work already in some infras, so if you have any issues, let us know, and we’ll help you
I'd like to jump on this thread as I noticed an issue trying to use the remote_state_backend* params.
I've followed the same example in the "Brownfield Considerations" doc, but I get an error:
invalid 'components.terraform.remote_state_backend_type' section
@Josh Simmonds and I looked through the schema and found that backend_type nested under the component name seems to be the correct format.
So the stacks/catalog/vpc-flow-logs-bucket.yaml example:
components:
terraform:
vpc-flow-logs-bucket/defaults:
metadata:
type: abstract
remote_state_backend_type: static
remote_state_backend:
static:
vpc_flow_logs_bucket_arn: "arn:aws:s3::/my-vpc-flow-logs-bucket"
should actually look like this?
components:
terraform:
vpc-flow-logs-bucket/defaults:
metadata:
type: abstract
backend_type: static
backend:
static:
vpc_flow_logs_bucket_arn: "arn:aws:s3::/my-vpc-flow-logs-bucket"
yes, the indent is wrong, thanks for pointing it out, we’ll fix it
Is the change remote_state_backend_type -> backend_type also correct?
remote_state_backend_type: static
remote_state_backend:
static:
full_account_map:
should be remote_state_backend_type and remote_state_backend
backend is used to work with TF state
remote_state_backend is used to read the remote state of other components (as you are doing with the account-map)
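Put differently, both sections can coexist on one component. A rough sketch (the bucket name and account map values here are placeholders, not from this thread):
components:
  terraform:
    account-map:
      # `backend` tells Terraform where this component writes its own state
      backend_type: s3
      backend:
        s3:
          bucket: acme-core-use1-root-tfstate
      # `remote_state_backend` tells other components how to read this component's
      # outputs; `static` mimics the outputs without provisioning the component
      remote_state_backend_type: static
      remote_state_backend:
        static:
          full_account_map:
            dev: "111111111111"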
@Andriy Knysh (Cloud Posse) https://github.com/cloudposse/atmos/pull/818
thanks
you’ll have to go through the CodeRabbit AI first
coderabbit flagged the same thing I did
in this case, the AI is correct (needs to be fixed)
vpc_flow_logs_bucket_arn: "arn:aws:s3::/my-vpc-flow-logs-bucket"
That's a neat tool!
Related to the use of templating and gomplate, what's the recommended way to combine two different outputs into a single value? I have two list(string) values I need to combine, both of which are outputs from the vpc module.
['{{ coll.Flatten (atmos.Component "vpc" .stack).outputs.private_subnet_ids .outputs.public_subnet_ids }}']
Was my initial thought, but that syntax doesn’t work for pulling multiple values out of the outputs from the component
In my case, i’m working around this by producing an additional output (and will submit a PR to the module itself), but am still curious what the prescriptive guidance is
@Josh Simmonds there are a few diff topics in the question:
- concat lists in the templates: use the Sprig concat function
Useful template functions for Go templates.
'{{ concat (atmos.Component "vpc" .stack).outputs.private_subnet_ids (atmos.Component "vpc" .stack).outputs.public_subnet_ids }}'
- The expression above will return a Go list. You can convert it to JSON by using the toJson Sprig function:
'{{ toJson (concat (atmos.Component "vpc" .stack).outputs.private_subnet_ids (atmos.Component "vpc" .stack).outputs.public_subnet_ids) }}'
- The expression above will output a JSON-encoded string, not a list. You can send the string to the terraform module, but the variable type must be a string, and you'll have to decode the string into a TF list by using the jsondecode function
- If you don't want to, or can't, make the variable type a string, we are releasing a new version of Atmos today/tomorrow with the functionality to convert the JSON-encoded strings from templates into native YAML types (maps and lists), so they can be sent directly to the terraform module
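On the Terraform side, the jsondecode step mentioned above would look roughly like this (the variable name is assumed):
variable "subnet_ids" {
  # JSON-encoded list passed in from the Atmos template
  type = string
}

locals {
  # Decode the JSON string back into a native Terraform list
  subnet_ids = jsondecode(var.subnet_ids)
}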
@Josh Simmonds we just published the new Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.111.0
it will allow you to do what you described (by using either the !template or !terraform.output Atmos YAML function)
@Andriy Knysh (Cloud Posse) You're a hero, this worked perfectly, and the example in the docs is nearly identical to what I needed to make my TGW module work. Thanks for releasing that today and for all the efforts around it. Y'all are crushing it with this tool, so thank you!
I’d like to hear your thoughts on how people are approaching GitOps in Kubernetes these days. What are some of the popular practices, how are you handling it, and what do you think are the pros and cons of your methods?
Currently, I’m using Argo CD’s App of Apps strategy to manage different ecosystems through GitOps. It’s been working well overall, but there are a few pain points I’ve run into:
• Drift detection for Kubernetes resources with Terraform hasn’t been easy to manage.
• Automating the deployment of the same setup across multiple clusters worked well a year or two ago but now feels a bit outdated or clunky.
• Adding cluster-specific prefixes or suffixes to Sub App names created by Bootstrap Apps is tedious and adds unnecessary complexity.
• As the number of clusters increases, Argo CD seems to struggle with the load, which makes scaling a challenge. I’m curious about how others have solved these kinds of issues or whether there are better ways to do what I’m doing. What’s been working for you, and what do you think are the strengths and weaknesses of your approach? Looking forward to hearing everyone’s ideas and learning from your experiences. Thanks!
2024-12-04
Introduce Atmos YAML functions @aknysh (#810)
what
• Introduce Atmos YAML functions
• Update docs
• https://atmos.tools/core-concepts/stacks/yaml-functions/
• https://atmos.tools/core-concepts/stacks/yaml-functions/template/
• https://atmos.tools/core-concepts/stacks/yaml-functions/exec/
• https://atmos.tools/core-concepts/stacks/yaml-functions/terraform.output/
why
Atmos YAML Functions are a crucial part of Atmos stack manifests. They allow you to manipulate data and perform operations on the data to customize the stack configurations.
Atmos YAML functions are based on YAML Explicit typing and user-defined Explicit Tags (local data types). Explicit tags are denoted by the exclamation point ("!") symbol. Atmos detects the tags in the stack manifests and executes the corresponding functions.
NOTE: YAML data types can be divided into three categories: core, defined, and user-defined. Core are ones expected to exist in any parser (e.g. floats, ints, strings, lists, maps). Many more advanced data types, such as binary data, are defined in the YAML specification but not supported in all implementations. Finally, YAML defines a way to extend the data type definitions locally to accommodate user-defined classes, structures, primitives, and functions.
Atmos YAML functions
• The !template YAML function can be used to handle template outputs containing maps or lists returned from the atmos.Component template function
• The !exec YAML function is used to execute shell scripts and assign the results to the sections in Atmos stack manifests
• The !terraform.output YAML function is used to read the outputs (remote state) of components directly in Atmos stack manifests
NOTE: You can use Atmos Stack Manifest Templating and Atmos YAML functions in the same stack configurations at the same time. Atmos processes the templates first, and then executes the YAML functions, allowing you to provide the parameters to the YAML functions dynamically.
Examples
components:
terraform:
component2:
vars:
# Handle the output of type list from the `atmos.Component` template function
test_1: !template '{{ toJson (atmos.Component "component1" "plat-ue2-dev").outputs.test_list }}'
# Handle the output of type map from the `atmos.Component` template function
test_2: !template '{{ toJson (atmos.Component "component1" .stack).outputs.test_map }}'
# Execute the shell script and assign the result to the `test_3` variable
test_3: !exec echo 42
# Execute the shell script to get the `test_label_id` output from the `component1` component in the stack `plat-ue2-dev`
test_4: !exec atmos terraform output component1 -s plat-ue2-dev --skip-init -- -json test_label_id
# Execute the shell script to get the `test_map` output from the `component1` component in the current stack
test_5: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_map
# Execute the shell script to get the `test_list` output from the `component1` component in the current stack
test_6: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_list
# Get the `test_label_id` output of type string from the `component1` component in the stack `plat-ue2-dev`
test_7: !terraform.output component1 plat-ue2-dev test_label_id
# Get the `test_label_id` output of type string from the `component1` component in the current stack
test_8: !terraform.output component1 {{ .stack }} test_label_id
# Get the `test_list` output of type list from the `component1` component in the current stack
test_9: !terraform.output component1 {{ .stack }} test_list
# Get the `test_map` output of type map from the `component1` component in the current stack
test_10: !terraform.output component1 {{ .stack }} test_map
Very cool updates. Is there a plan to unveil Atmos on Hacker News and other platforms?
It might help add more popularity (such as raising the star count)
I had some pushback a couple months back when other devs complained the star count was lower than other competitors'
Yea, would like to do that. Though there seems to be a trick to doing it well.
Note, as an org, we have 18K+ stars, while none of the competitors maintain as many projects
Oh, I meant for Atmos itself
Yes, the org star count is high, that's nice. How did you get the overall org star count number? Is there a way to see that on GitHub itself, to showcase it for non-believers?
the easiest way to use an ENV variable or override a stack global variable value is to use templates? https://atmos.tools/core-concepts/stacks/templates/datasources/#environment-variables
I thought there was a way to pass something to the CLI
the env section can be used to define ENV vars globally, per account, per component, etc. It supports all the inheritance and deep-merging and is similar to the vars section
the ENV vars defined in the env section will be used in the process that runs atmos terraform and all other Atmos commands
you can see the ENV vars in the logs if you set
ATMOS_LOGS_LEVEL=Trace atmos terraform plan ...
components:
terraform:
test:
vars:
enabled: true
env:
TEST_ENV_VAR1: "val1"
TEST_ENV_VAR2: "val2"
TEST_ENV_VAR3: "val3"
in the env section, you can use templates to get the values for each ENV var
and you can use the new Atmos YAML functions like !exec and !template
https://atmos.tools/core-concepts/stacks/yaml-functions/#examples
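For example, something along these lines (a sketch; the variable names are made up, and env.Getenv is gomplate's environment function):
components:
  terraform:
    test:
      env:
        # Resolved by the template engine from the calling process's environment
        PR_NUMBER: '{{ env.Getenv "PR_NUMBER" }}'
        # Resolved by the !exec YAML function when the manifest is processed
        GIT_SHA: !exec git rev-parse HEAD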
ok, so I still need to use templates to grab the value from an ENV var that comes from outside Atmos
if you are not hardcoding the ENV vars (e.g. some of them are secrets or are dynamic), yes, you can use templates, datasources, or YAML functions to get the values for the ENV vars
ok, cool, I will use the new functions
so I added this to atmos.yaml
gomplate:
enabled: true
timeout: 5
# https://docs.gomplate.ca/datasources
datasources:
envs:
prnumber: "env:///PR_NUMBER"
and in my component I added a var
pr: '{{ (datasource "envs").prnumber }}'
but I got
template: all-atmos-sections:265:41: executing "all-atmos-sections" at <"envs">: can't evaluate field prnumber in type []interface {}
datasources:
prnumber:
url: "env:/PR_NUMBER"
name: 'outputs{{ (datasource "prnumber") }}'
why do you need outputs in there?
outputs is the value of var.name
I need to append something to it
I do not know if there is a better way or cleaner way to add that to the name
We are not using the context.tf the way cloudposse does
if you need to add something (a static string) to the template value, that's the way
since this is Azure
Add ATMOS_SHLVL environment variable and increment it each time atmos terraform shell is called @pkbhowmick (#803)
what
• Add ATMOS_SHLVL environment variable and increment it each time atmos terraform shell is called
• Update documentation for the atmos terraform shell command to clarify the new ATMOS_SHLVL functionality and its impact on shell nesting levels
Update docs @matts-path (#818)
what
• Move remote state backend under component
• Update docs
why
• Parameters should be nested under the component
@Jeremy G (Cloud Posse) heads up
@Erik Osterman (Cloud Posse) Necessary but not sufficient for Geodesic Prompt support of atmos terraform shell.
why….
atmos vendor pull -stack sandbox
Atmos supports native 'terraform' commands with all the options, arguments and flags.
In addition, 'component' and 'stack' are required in order to generate variables for the component in the stack.
atmos terraform <subcommand> <component> -s <stack> [options]
atmos terraform <subcommand> <component> --stack <stack> [options]
For more details, execute 'terraform --help'
command 'atmos vendor pull --stack <stack>' is not supported yet
the vendor command could use some refactoring
especially the filters
stacks can solve part of the issue for me
Atmos does not support vendoring stacks yet
I have to do all this to get one folder
targets:
- "./"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
- "**/qa/**"
- "**/development/**"
- "**/staging/**"
- "**/management/**"
could I use something like
targets:
- "./"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/{{env "STACK"}}**"
is this in vendor.yaml?
yes
currently, in vendor.yaml templates are supported only in source and target
but… this can be improved to support templates in all other sections, like excluded_paths
to be honest, all I want is catalog, stack/mystack, components
in the easiest way possible
can you give more details about why you are using this config (what exactly do you want and don't want to vendor)
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
- "**/qa/**"
- "**/development/**"
- "**/staging/**"
- "**/management/**"
I basically want to vendor specific stacks depending on how my GitHub workflow decides whether the changes are only for one stack or another
for example:
if: github.event_name == 'pull_request' then the stack is sandbox
so then I want to vendor sandbox only
(does something bad happen if you vendor (download) everything, or you just don’t want to download it to save time)
well, after you asked that, I was looking at the reasons why I needed this
this is related to when describe affected did not support --stack
but now it does
so I can just vendor all the stacks and pass to describe affected the stack I want to execute on
oh I see, yes you can use --stack now
it is still a valid concern to be able to use more descriptive expressions when vendoring, to not download unnecessary components and stacks
I agree, the fact that the excluded_paths can't match subdirs with ** is very limiting
@jose.amengual can you open the issue regarding excluded paths behavior
And a separate issue with the filtering behavior you would like
Also, are you familiar with using tags for filtering?
That is supported today
mmm, not too familiar with the tags filtering
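For reference, tag-based vendoring looks roughly like this (the source, version, and tag names are assumptions):
# vendor.yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example
spec:
  sources:
    - component: vpc
      source: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{ .Version }}
      version: 1.398.0
      targets:
        - components/terraform/vpc
      tags:
        - networking
then pull only the tagged sources with atmos vendor pull --tags networking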
Describe the Bug
atmos vendor pull can’t exclude subdirs.
If I have:
stacks/
  mystacks/
    us-east-1/
      pepe1/
      pepe2/
and a vendor.yaml with:
exclude_paths:
  - "**/stacks/**/pepe2"
Atmos is not able to match the path of the subdir, and it vendors all the files under the pepe2 dir.
It is also not possible to templatize the exclude or include paths, which is helpful when you have automated pipelines that can expose the names of dirs you want to exclude:
  - "**/stacks/**/${exclude_me}"
That would be nice to have.
Atmos 1.110.0
Expected Behavior
Atmos should support regexes fully for the exclude and include paths
Steps to Reproduce
atmos vendor pull
:rocket: Enhancements
Fix vendor pull directory creation issue @Cerebrovinny (#782)
what
• Fixed the regression where the vendor command would error if the vendor manifests file path did not exist.
why
• Previously, the atmos vendor pull command would fail with a "no such file or directory" error when the vendor manifests file path was missing
When vendor.yaml exists and the -c flag is used:
• Correctly checks if the component exists in vendor.yaml
• Errors if the component is not found in vendor.yaml
When there is no vendor.yaml but component.yaml exists:
• Proceeds without vendor configurations
• Pulls from the component.yaml configuration
• Shows a warning about vendor.yaml not existing
When a component is specified but there is no vendor.yaml or component.yaml:
• Shows that vendor.yaml doesn't exist
• Properly errors that component.yaml doesn't exist in the component folder
When no component is specified:
a. With vendor.yaml:
• Attempts to process vendor.yaml
• Shows an appropriate error for invalid configuration
b. Without vendor.yaml:
• Shows that vendor.yaml doesn't exist
• Provides an error message about needing to specify the --component flag
Add bool flags to Atmos custom commands @pkbhowmick (#807)
what
• Add bool flags to Atmos custom commands
why
• Atmos custom commands supported only flags of type string
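As a rough illustration of what this enables (the command name and steps are hypothetical):
# atmos.yaml
commands:
  - name: hello
    description: Example custom command with a bool flag
    flags:
      - name: verbose
        shorthand: v
        type: bool
        description: Enable verbose output
    steps:
      - echo "verbose={{ .Flags.verbose }}"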
Add --everything and --force flags to the atmos terraform clean command @haitham911 (#727)
What
• Add --everything and --force flags to the atmos terraform clean command to delete the Terraform-related folders and files, including:
• backend.tf.json
• .terraform
• terraform.tfstate.d
• .terraform.lock.hcl
The following scenarios are covered:
• If no component is specified:
• atmos terraform clean --everything deletes all state files for all Terraform components and requires a confirmation from the user
• atmos terraform clean --everything --force deletes all state files for all Terraform components without a confirmation from the user
• If a specific component is specified: atmos terraform clean <component> --everything deletes the state files for the specified component
• If both a component and a stack are specified: atmos terraform clean <component> --stack <stack> --everything deletes the state files for the specified component and stack
Why
• Cleaning state files is useful when running tests (it should not be the default behavior to avoid unintended data loss)
Usage
atmos terraform clean <component> -s <stack> [--skip-lock-file] [--everything] [--force]
Add custom help message and Atmos upgrade notification @Listener430 (#809)
what
• Enhance the Atmos CLI by adding a bordered update notification message at the bottom of the help output for all commands, including atmos --help and subcommands like atmos terraform --help. The update notification informs users if a newer version of atmos is available
why
• Prompts users to upgrade
• The bordered message is more noticeable
2024-12-05
:wave: hello, newbie here experimenting with Atmos! I've been following the examples, and now, trying to align to our own organization layout, I came across a problem with the workspace auto-generation, which apparently fails to find the variables from the pattern. If I explicitly set:
terraform_workspace_pattern: "{tenant}--{stage}--{region}--{component}"
it works as expected (resolves to e.g. foo--prod--eu-west-1--eks), but if I add another variable (in my case domain) it doesn't (it resolves to foo--prod--eu-west-1--{domain}--eks):
terraform_workspace_pattern: "{tenant}--{stage}--{region}--{domain}--{component}"
I might be missing something very obvious, but cannot find what. Any hints? Thanks in advance!
Terraform Workspaces.
The following context tokens are supported by the metadata.terraform_workspace_pattern attribute:
{namespace}
{tenant}
{environment}
{region}
{stage}
{attributes}
{component}
{base-component}
@Jesus Fernandez, terraform_workspace_pattern does not support Go templates (yet), it only supports the context tokens above
instead of using {domain}, you can probably use {namespace} (or {environment})
uhm, so I guess it's tightly coupled with null-label, right? The problem is that I'm using both as vars for many modules, and there might be collisions, right?
No no, instead use name_template, which is not tightly coupled
name_pattern is the original way, which is tightly coupled, and our examples should be updated to use name_template. Since we have many users still using the original way, we are not deprecating it.
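A name_template in atmos.yaml might look something like this (the pattern shown is an assumption, mirroring the default name_pattern):
stacks:
  # Go template evaluated against each component's configuration
  name_template: "{{ .vars.tenant }}-{{ .vars.environment }}-{{ .vars.stage }}"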
so then if I use name_pattern I don’t need to play with terraform_workspace_pattern?
Erik, this is about terraform workspace pattern, not related to stack names
@Jesus Fernandez we also support this
// Run `terraform workspace` before executing other terraform commands
// only if the `TF_WORKSPACE` environment variable is not set by the caller
so you can set the TF_WORKSPACE env variable and then execute atmos terraform ... commands
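i.e., something like this (the workspace name is illustrative):
# Atmos skips its own workspace selection when TF_WORKSPACE is already set by the caller
TF_WORKSPACE=foo--prod--eu-west-1--eks atmos terraform plan eks -s <stack>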
The terraform workspace pattern doesn’t support go templates? That will be a problem for anyone using our context provider.
@Andriy Knysh (Cloud Posse) please confirm, otherwise I will create a task to address this
we have two attributes in the metadata section for each component:
components:
terraform:
vpc:
metadata:
terraform_workspace:
terraform_workspace_pattern:
terraform_workspace_pattern supports the context tokens (those are not Go templates)
https://atmos.tools/tutorials/atmos-component-migrations-in-yaml/#overriding-the-workspace-name
Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.
but the metadata section supports Go templates
so we can use something like this
components:
terraform:
vpc:
metadata:
terraform_workspace: '{{ .vars.tenant}}-{{ .vars.stage}}'
or even this (using the atmos.Component template function to get the values from a different component)
components:
terraform:
vpc:
metadata:
terraform_workspace: '{{ (atmos.Component <component> <stack>).settings.settings1 }}-{{ (atmos.Component <component> <stack>).settings.settings2 }}'
Atmos YAML functions can be used as well
https://atmos.tools/core-concepts/stacks/yaml-functions/
components:
terraform:
vpc:
metadata:
terraform_workspace: !exec <shell script>
Ok, great, so @Jesus Fernandez you should be able to do what you want using simply terraform_workspace
terraform_workspace: "{{ var.tenant }}--{{ var.stage }}--{{ var.region }}--{{ var.domain }}--{{var.domain }}--{{ .component_name }}"
@Andriy Knysh (Cloud Posse) did I get that last one right?
terraform_workspace: "{{ .vars.tenant }}--{{ .vars.stage }}--{{ .vars.region }}--{{ .vars.domain }}--{{ .vars.domain }}--{{ .component_name }}"
for the values, a dot . needs to be used
where’s the missing dot?
thanks, let me try
leading one I guess
the dots should be before the vars: .vars.tenant (w/o a dot, it's a function call in Go templates)
uhm I get an error:
The workspace name "{{ .var.tenant }}-{{ .var.account }}-{{ .var.region }}-{{ .var.domain }}-{component}" is not allowed. The name must contain only URL safe
characters, and no path separators.
Looks like atmos might be validating before processing the template
{component} is not a template token
it's a "legacy" context token and can be used only in terraform_workspace_pattern
@Jesus Fernandez which component name do you want in the TF workspace, the Terraform component or the Atmos component (which can be diff from the TF component)?
you can use the following in the Go templates
atmos_component: Atmos component
atmos_manifest: Atmos stack manifest file
atmos_stack: Atmos stack
atmos_stack_file: Same as atmos_manifest
component: Terraform component
{{ .var.tenant }}-{{ .var.account }}-{{ .var.region }}-{{ .var.domain }}-{{ .atmos_component }} - if you want the Atmos component
{{ .var.tenant }}-{{ .var.account }}-{{ .var.region }}-{{ .var.domain }}-{{ .component }} - if you want the Terraform component
@Jesus Fernandez please try this ^
I tried with the most simple one:
components:
terraform:
network:
metadata:
component: network
terraform_workspace: '{{ .var.tenant }}'
and still get the same error
btw, for the stacks.name_template I saw that it should be {{ .vars.tenant }} (vars rather than var)
yes, should be {{ .vars.xxx }}
The workspace name "{{ .vars.tenant }}" is not allowed. The name must contain only URL safe
characters, and no path separators.
it doesn’t seem to realize that’s a go template
did you enable Go templates in atmos.yaml?
oh.. that was it, I had gotemplate rather than gomplate
# `Go` templates in Atmos manifests
# https://atmos.tools/core-concepts/stacks/templates
# https://pkg.go.dev/text/template
templates:
settings:
enabled: true
evaluations: 2
# https://masterminds.github.io/sprig
sprig:
enabled: true
# https://docs.gomplate.ca
gomplate:
enabled: true
timeout: 5
# https://docs.gomplate.ca/datasources
datasources: {}
if you have that in atmos.yaml, templates should work in metadata.terraform_workspace
sorry to ask again, but I'm confused now… I was testing your solution with atmos describe component and it properly renders the workspace name :white_check_mark: but when I do atmos terraform plan it doesn't render it
it doesn’t render it
what does terraform show for the workspace?
Executing command:
/home/linuxbrew/.linuxbrew/bin/tofu workspace select {{ .vars.tenant }}
The workspace name "{{ .vars.tenant }}" is not allowed. The name must contain only URL safe
characters, and no path separators.
@Jesus Fernandez i'll check templates in metadata for terraform plan/apply and will get back to you
@Jesus Fernandez please try this release https://github.com/cloudposse/atmos/releases/tag/v1.118.0
sorry for the delay, confirmed that it works now! Wondering if there are any plans to extend this logic to a terraform_workspace_template-like field under atmos.yaml, so that the workspace can follow the same pattern without the need to set per-component metadata.terraform_workspace?
yes, this would be a useful feature. @Gabriela Campana (Cloud Posse) please create a task for Atmos:
Add the ability to configure terraform_workspace and terraform_workspace_template in atmos.yaml to be applied to all components
@Jesus Fernandez for now, you can accomplish this in a very DRY way using inheritance (just want to make sure you are aware of that solution)
Erik, unfortunately, those are part of the metadata section, and the metadata section is not inherited since it has other attributes related to a single component only
we can probably inherit some parts of the metadata section
or add a different section
let’s discuss this
I'm confused. Don't we often enable things like Spacelift by setting it enabled to true in the metadata section? And doing that via imports.
it’s in the settings section
and settings is in metadata
no, settings is a separate first class section
aha, my fault
terraform:
components:
$component:
settings:
Perhaps we should move some things out of metadata
For example, the backend config.
(we can continue to support it in both places, but only settings would work with inheritance)
we now want to inherit some parts of the metadata section, but not inherit the other parts like “inherits:” :)
Well, I think that gets complicated to explain.
Maybe things should just not be in metadata.
yes
in metadata we should have things only relevant to one component
for example, we don’t want to allow inheriting this
components:
terraform:
derived-component-2:
metadata:
inherits:
- base-component-2
- derived-component-1
b/c it would be a huge inheritance mess
@Jesus Fernandez we are going to discuss it internally and implement a solution to be able to inherit TF workspaces and workspace templates
I think we should implement this as part of a follow-on PR from https://github.com/cloudposse/atmos/pull/834
what
Add the concept of atmos stores
to the app.
why
Atmos stores allow values to be written to an external store (i.e., AWS SSM Param Store) after apply-time and read from at plan/before apply time.
Stores are pluggable and configurable and multiple may be enabled at the same time (i.e. for prod/non-prod, east/west, etc).
The initial set of stores we are implementing as part of this PR are:
• AWS SSM Parameter Store
• Artifactory
This is the first-class way we want to support it using YAML functions, rather than templating.
@Andriy Knysh (Cloud Posse) do we still need the new task below, or will this be included with @Matt Calhoun's PR?
• Add the ability to configure terraform_workspace and terraform_workspace_template in atmos.yaml to be applied to all components
For those interested, here’s the current Atmos development roadmap: https://github.com/orgs/cloudposse/projects/34/views/1
If anyone is interested in development of any of these features, reach out to me and join our atmos-dev channel
@Erik Osterman (Cloud Posse) what are the plans for the Atmos Pro service? If I want to use dependencies between components, is it my only option? Spacelift is too expensive; what will be your pricing model?
We haven’t formalized the pricing model. There will likely be a free tier like with AWS which enables light usage. At this time, it’s wide open, free and unmetered.
Hi guys. Using atmos describe affected, I implemented a pipeline that runs on PR to find affected stacks and runs plan/apply on them. In that PR I had to change some values in the output of account-map/modules/iam-roles/outputs.tf, and as a result, atmos describe affected included literally all of the stacks instead of just a couple, due to affected: component.module as documented here, because all of the components reference the account-map component as a local module to get the IAM roles for providers and other stuff.
For now, I just committed and pushed this account-map/modules/iam-roles/outputs.tf change to the main branch first and reran the PR pipeline as a workaround, which helped. However, is there a better approach (or a plan to implement one) for filtering out some of the triggers for atmos describe affected? I did not find anything regarding this in the docs
This command produces a list of the affected Atmos components and stacks given two Git commits.
@Roman Orlovskiy this looks like correct behavior: all (or almost all) TF components depend on the account-map/modules/iam-roles module, and once it changes, Atmos detects it
but I see what you are saying about triggering all the stacks
on the one hand, all the components are affected now since iam-roles changed. On the other hand, you know that the changes should not affect all the other components
for cases like this, we can prob add a filter flag to atmos describe affected to allow you to select the triggers
Thanks for the reply! For my use case in the PR, I would not want to send different filters per execution; I would prefer something more static that would work in any PR/pipeline. So, maybe, it could be possible to provide some kind of --ignore parameter to atmos describe affected with a list of component paths to ignore? Or, if the output of the describe affected command included a list of the components that triggered/affected each item, then it would be possible to write some jq command to filter out such items too.
Actually, I would love to be able to globally ignore/filter out components like account/account-map/aws-teams/aws-team-roles/tfstate-backend for atmos describe affected, as those components require SuperUser permissions in any case, so the CI/CD agent will not be able to execute them at all. But I can do this with jq already, so this is not a big issue.
you can also do it using jq to filter out those that have affected: component.module (since it means that a TF module was changed)
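For example, a sketch of that jq filter:
# Keep only items that were affected by something other than a shared local module
atmos describe affected --format json | jq '[ .[] | select(.affected != "component.module") ]'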
Right, but if the PR includes changes for valid submodules (not triggered from account-map/modules/…), then they will be ignored too in this case, as there is no way for me to separate them right now based on the command's output.
Show failed command how to resume after atmos workflow failure @Cerebrovinny (#767)
What
• Show the failed command and how to resume after an atmos workflow failure
• Added improved workflow failure handling with clear error messages
• Added documentation and help text for workflow failure handling
• Added example workflow demonstrating failure scenarios
Why
• Makes it easier to debug and fix workflow failures by showing:
• Which step failed
• The exact command that failed
• How to resume from the failed step
• Saves time by allowing users to resume workflows from failed steps instead of restarting from the beginning
• Improves user experience with clear guidance on how to recover from failures
Tests
Example workflow demonstrating the feature:
workflows:
  test-1:
    description: "Test workflow with intentionally failing steps"
    steps:
      - name: "step-1"
        type: shell
        command: "echo 'This step will succeed' && exit 0"
      - name: "step-2"
        type: shell
        command: "echo 'This step will fail' && exit 1"
      - name: "step-3"
        type: shell
        command: "echo 'This step should not execute'"
When step-2 fails, users will see:
Add Support for Automatic Templated File Search for Atmos Imports @Cerebrovinny (#795)
what
• Add Support for Automatic Templated File Search for Atmos Imports
• Automatic detection and import of templated files (.yaml.tmpl or .yml.tmpl) in Atmos
why
• Simplify import configuration by removing the need to explicitly specify .tmpl extensions
• Improve developer experience by reducing manual configuration overhead
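So, for instance, an import like the following would now also pick up a templated manifest (paths assumed):
import:
  # Matches mixins/region/us-east-1.yaml or mixins/region/us-east-1.yaml.tmpl
  - mixins/region/us-east-1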
2024-12-06
Hello again :slightly_smiling_face:
Getting close to setting up my imaginary organization dunder-mifflin and wanted to ask something. I deployed tfstate-backend and accounts for core-gbl-root and am now working on the account-map component. The issue I am having is: if I do not set export ATMOS_CLI_CONFIG_PATH=/Users/denizgokcin/codes/infrastructure and ATMOS_BASE_PATH=/Users/denizgokcin/codes/infrastructure before executing my atmos commands, then although the plan for the account-map succeeds, I get the following long error. For some reason, one of the dependencies of the account-map is looking for the stacks in the wrong place. Interestingly, this did not happen with the tfstate-backend or the account component. Does anyone have an idea how to get past this problem without hardcoding paths from my local machine?
... successful plan
You can apply this plan to save these new output values to the Terraform state, without changing any real
infrastructure.
╷
│ Error: failed to find a match for the import '/Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs/**/*.yaml' ('/Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs' + '**/*.yaml')
│
│
│ CLI config:
│
│ base_path: .
│ components:
│ terraform:
│ base_path: components/terraform
│ apply_auto_approve: false
│ deploy_run_init: true
│ init_run_reconfigure: true
│ auto_generate_backend_file: true
│ command: ""
│ helmfile:
│ base_path: components/helmfile
│ use_eks: true
│ kubeconfig_path: /dev/shm
│ helm_aws_profile_pattern: '{namespace}-{tenant}-gbl-{stage}-helm'
│ cluster_name_pattern: '{namespace}-{tenant}-{environment}-{stage}-eks-cluster'
│ command: ""
│ stacks:
│ base_path: stacks
│ included_paths:
│ - orgs/**/*
│ excluded_paths:
│ - '**/_defaults.yaml'
│ name_pattern: '{tenant}-{environment}-{stage}'
│ name_template: ""
│ workflows:
│ base_path: stacks/workflows
│ logs:
│ file: /dev/stdout
│ level: Debug
│ schemas:
│ jsonschema:
│ base_path: stacks/schemas/jsonschema
│ opa:
│ base_path: stacks/schemas/opa
│ templates:
│ settings:
│ enabled: true
│ sprig:
│ enabled: true
│ gomplate:
│ enabled: true
│ timeout: 0
│ datasources: {}
│ initialized: true
│ stacksBaseAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks
│ includeStackAbsolutePaths:
│ - /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/orgs/**/*
│ excludeStackAbsolutePaths:
│ - /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/stacks/**/_defaults.yaml
│ terraformDirAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/components/terraform
│ helmfileDirAbsolutePath: /Users/denizgokcin/codes/infrastructure/components/terraform/account-map/components/helmfile
│ default: true
│
│
│ with module.accounts.data.utils_component_config.config[0],
│ on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
exit status 1
is this the reason why the rootfs directory is provided as an alternative to running commands directly?
So, first off, we have a task assigned to improve the DX on this. There’s no reason atmos cannot set these environment variables, if they aren’t already set.
if I do not set the export ATMOS_CLI_CONFIG_PATH=/Users/denizgokcin/codes/infrastructure and ATMOS_BASE_PATH=/Users/denizgokcin/codes/infrastructure
The reason these are required is that the cloudposse/terraform-provider-utils provider reads the stack configurations. It's used inside many of our opinionated components.
When terraform executes a provider, it runs somewhere inside the .terraform directory
It doesn't have the "context" of where the stacks are or the atmos.yaml
So it works when you set those environment variables, because it will then use them to locate everything.
We have a task to infer these values and set the environment variables, if they are not set, ensuring that modules/providers can find the configs.
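In the meantime, one way to avoid hardcoding your local paths is to derive them, e.g. (a sketch, assuming atmos.yaml sits at the repo root):
# Run from anywhere inside the repo
export ATMOS_BASE_PATH="$(git rev-parse --show-toplevel)"
export ATMOS_CLI_CONFIG_PATH="$ATMOS_BASE_PATH"
atmos terraform plan account-map -s core-gbl-root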
thanks for the reply! It would be a really nice addition if the absolute path of the directory containing atmos.yaml were set for these variables. Is there a link to the issue/task for this so that I can follow it and maybe even contribute?
and also, is it possible to work around this limitation for all commands through the custom CLI commands?
@Erik Osterman (Cloud Posse)
@haitham911eg should be picking this up this week
This is our task in Linear.
@deniz gökçin we’re definitely looking for help, so if you’re interested in contributing, join us in atmos-dev and we can find some issues that are mutually beneficial
hello again, I joined the dev channel and am definitely interested in contributing. I will follow the channel and see if I can be helpful. Thanks!
Enable Go templates in the metadata.terraform_workspace section @aknysh (#823)
what
• Enable Go templates in the metadata.terraform_workspace and metadata.terraform_workspace_template sections
why
• Allow using Go templates to dynamically construct Terraform workspaces for Atmos components
components:
terraform:
test:
metadata:
# Point to the Terraform component
component: "test"
# Override the Terraform workspace using Go templates
terraform_workspace: '{{ .vars.tenant }}-{{ .vars.environment }}-{{ .vars.stage }}-test'
Upload build preview site as artifacts @goruha (#822)
what
• Upload preview site static files as GitHub actions artifacts
why
• This is the first step of preview deployment strategy refactoring to support preview deployments from forks
Question: based on this apply workflow:
jobs:
pr:
name: PR Context
runs-on: ubuntu-latest
steps:
- uses: cloudposse-github-actions/get-pr@v2
id: pr
outputs:
base: ${{ fromJSON(steps.pr.outputs.json).base.sha }}
head: ${{ fromJSON(steps.pr.outputs.json).head.sha }}
auto-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'auto-apply') }}
no-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'no-apply') }}
atmos-affected:
name: Determine Affected Stacks
if: needs.pr.outputs.no-apply == 'false'
needs: ["pr"]
runs-on: ubuntu-latest
steps:
- id: affected
uses: cloudposse/[email protected]
with:
base-ref: ${{ needs.pr.outputs.base }}
head-ref: ${{ needs.pr.outputs.head }}
atmos-config-path: ${{ inputs.atmos_cli_config_path }}
atmos-version: ${{ inputs.atmos_version }}
outputs:
stacks: ${{ steps.affected.outputs.matrix }}
has-affected-stacks: ${{ steps.affected.outputs.has-affected-stacks }}
Do you guys filter out the content of the PR to decide which stacks/components were changed before you do the apply? now that describe affected supports --stack, it is possible to filter the stack to detect changes on and apply only that stack
but now that I think about it, this will force a “One PR per stack change”
Yea, I think it’s possible, but with the aforementioned side effect. The trick is to separate deployment from release.
Some use multiple PRs for that.
The workaround we have in place is to create issues tracking the releases that are required (with GH issues)
Then labeling those issues for release.
With Atmos Pro, it’s all managed with dispatching workflows and workflows (jobs) can have environment protection rules.
That enables the multi-staged deployment.
I see, this is getting complicated
There’s always atlantis style chat ops
lol
And applying before merge (I say that since I know you’re favorable to it)
A GitHub action that facilitates “ChatOps” by creating repository dispatch events for slash commands
2024-12-07
Hello team! :wave:
i’ve just started experimenting with Atmos and i’ve also set up the GH actions to plan and apply.
Everything’s working more or less as expected, except destroying a resource.
If i set enabled: false in the resource yaml file, the github action detects that there’s a change in that stack, but the plan shows zero to change… is it possible/supported to destroy a resource via the GH action? (it is very possible that i’m missing something)
Disabling a component does not cause deletion. It just signals that it’s no longer managed by Atmos.
That said, we have an issue planned to support deletions. @jose.amengual has been asking for it.
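For reference, a minimal sketch of what disabling looks like in a stack manifest (the component name myapp is hypothetical); per the schema update noted further below, metadata.enabled is a boolean:
components:
  terraform:
    myapp:
      metadata:
        # signals the component is no longer managed by Atmos;
        # it does NOT destroy resources that were already deployed
        enabled: false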
I renamed a component from pepetest to pepetest1; describe affected saw the new pepetest1 component, it got deployed, but the old one is still there in the cloud environment
Any usage of --help should not require stack configurations @Listener430 (#825)
what
• All help commands (using the --help flag) should not require stack configurations
why
• We need to show help messages and version/upgrade information irrespective of whether stack configs are in place or not
fix: Cut release only when all tests passed @goruha (#721)
what
• Run release job only if all tests passed
why
• Do not release failed binaries
fix: typo @jedwardsCTL (#819)
what
• Fixes a minor grammar error
2024-12-08
Export configuration information in shell launched by Atmos @Nuru (#827)
what
• Export Atmos Terraform configuration information in shell launched by Atmos
why
• When Atmos launches a shell, it configures the shell for Terraform to reference a certain component and stack. The configuration is such that Terraform commands will work with a specific component and stack when run from a specific directory, and that information is output to the user, but it has not been available in the environment for scripts or other automation to reference. By exposing that information in the environment, tools (e.g. the command prompt) can notify the user of the invisible state information, warn the user if they are executing commands from the wrong directory, and help them get back to the right one, among other possibilities.
• Some commands, like atmos terraform shell, spawn an interactive shell with certain environment variables set, in order to enable the user to use other tools (in the case of atmos terraform shell, the Terraform or Tofu CLI) natively, while still being configured for a specific component and stack. To accomplish this, and to provide visibility and context to the user regarding the configuration, Atmos sets the following environment variables in the spawned shell:
Variable | Description
---|---
ATMOS_COMPONENT | The name of the active component
ATMOS_SHELL_WORKING_DIR | The directory from which native commands should be run
ATMOS_SHLVL | The depth of Atmos shell nesting. When present, it indicates that the shell has been spawned by Atmos.
ATMOS_STACK | The name of the active stack
ATMOS_TERRAFORM_WORKSPACE | The name of the Terraform workspace in which Terraform commands should be run
PS1 | When a custom shell prompt has been configured in Atmos, the prompt will be set via PS1
TF_CLI_ARGS_* | Terraform CLI arguments to be passed to Terraform commands
Hello team! I have a question about Atmos GitHub Actions workflows.
I’m trying to make some improvements based on the example provided in the Atmos Terraform Dispatch Workflow. I’ve set up and stored reusable workflows as shown in the example. When I push changes and execute the dispatch workflow, it runs successfully. However, the workflow doesn’t seem to track changes from the plan step or produce any meaningful summary of those changes. All required configurations, such as OIDC settings, GitOps S3 bucket, DynamoDB, and Terraform backend, are correctly configured. Do you have any hints on why this might be happening?
Additionally, I’ve heard about Atmos Pro from various sources and even tried signing up for it, but it seems incomplete at the moment. Is it not available for use yet?
@Igor Rodionov
It’s rare that the dispatch workflow is used other than for instrumenting it with other systems like atmos pro.
Can we take a step back and first understand what you want to accomplish with a dispatch workflow rather than a workflow that triggers on conventional commit events?
Before creating a workflow that runs on commit, the purpose was to test the workflows one by one. I also plan to apply the workflow execution by commit or PR that you mentioned.
When I push changes and execute the dispatch workflow, it runs successfully. However, the workflow doesn’t seem to track changes from the plan step or produce any meaningful summary of those changes.
Ok, I think I understand now, based on your description.
The dispatch workflow is not intended to work the way you are using it.
I don’t believe it calls describe affected.
2024-12-09
Hi,
i have a question regarding the new !template function. As it handles outputs containing maps and lists, i can use it to pass yaml lists to a component. but what i can’t do is use a subkey of this object in the context of the component. i can only use it ‘as is’
Example
high level component to import other components
suppose i define this in the stack component
settings:
  ami_filters:
    owner_id: "redhat"
    name: "foobar"
components:
  terraform:
    foo:
      ....
catalog/high_level.yaml
import:
  - path: catalog/components/low_level
    context:
      ami_filters: !template '{{ .settings.ami_filters | toJson }}'
which i then can use in catalog/components/low_level.yaml
components:
  terraform:
    foo:
      vars:
        ami_filters: '{{ .ami_filters }}' <--- this will work
        ami_owner: '{{ .ami_filters.owner_id }}' <--- this will fail, at <.ami_filters.owner_id>: can't evaluate field owner_id in type interface {}
@Stephan Helas when using templates, you can reference any subkey in any object (including lists and maps) in the template itself.
for example:
ami_filters: !template '{{ .settings.ami_filters.owner_id.name }}'
(no need to use "| toJson")
get any value from embedded maps at any level ^
to get an item from a list, use the template index function: {{ index .MyList 0 }}
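To illustrate both forms, a minimal sketch against the settings from this thread (some_list is a hypothetical list-typed setting):
vars:
  # reference a subkey of a map directly
  ami_owner: '{{ .settings.ami_filters.owner_id }}'
  # reference item 0 of a list with the index function
  first_item: '{{ index .settings.some_list 0 }}'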
i want to pass the structure, not just one key; this is why i used the toJson function. i tried to use {{ index }} with a number and with a name, but both failed
for example
❯ kn_atmos_vault describe component wms-cluster/1 -s wms-xe02-sandbox | yq .vars
ami_filters:
  name: KN-RHEL8-*
  owner_id: "146727266814"
if i use index
ami_filters: '{{ .ami_filters }}'
ami_filter_owner: '{{ index .ami_filters 0 }}'
i’ll get
❯ atmos describe component wms-cluster/1 -s wms-xe02-sandbox | yq .vars
ami_filter_owner: "33"
ami_filters:
  name: KN-RHEL8-*
  owner_id: "146727266814"
if i use {{ index .ami_filters "owner_id" }}, i’ll get
at <index .ami_filters "owner_id">: error calling index: cannot index slice/array with type string
if i use index
ami_filters: '{{ .ami_filters }}'
ami_filter_owner: !template '{{ .ami_filters }}'
it shows a string?? (i guess)
❯ atmos describe component wms-cluster/1 -s wms-xe02-sandbox | yq .vars
ami_filter_owner: '!template {"name":"KN-RHEL8-*","owner_id":"146727266814"}'
ami_filters:
  name: KN-RHEL8-*
  owner_id: "146727266814"
so i think the !template function is executed somehow too late to use it that way.
Is there any way to pass not only a single value but a subkey (i don’t know if that’s the correct term) to an abstract component, and then use all or part of the keys to pass variables?
you use the !template function when you want it to convert lists and maps to YAML lists and maps.
if the output is a string, you can use “plain” Go templates to get any key from any map
try this
ami_filters: '{{ .settings.ami_filters.owner_id.name }}'
oh, you mean this
ami_filter_owner: "33"
shows as a string
that’s how templates work
it’s ok, you can send it to Terraform var even if the var type is a number
also I think you are asking many diff questions in one. please post your config again, and we’ll start from the beginning
for example, this is not a correct usage of templates:
catalog/high_level.yaml
import:
  - path: catalog/components/low_level
    context:
      ami_filters: !template '{{ .settings.ami_filters | toJson }}'
which i then can use in catalog/components/low_level.yaml
components:
  terraform:
    foo:
      vars:
        ami_filters: '{{ .ami_filters }}' <--- this will work
        ami_owner: '{{ .ami_filters.owner_id }}' <--- this will fail, at <.ami_filters.owner_id>: can't evaluate field owner_id in type interface {}
b/c all those templates get evaluated at the first pass, and this '{{ .ami_filters }}' does not exist yet
but 33 is neither the name nor the owner_id. i don’t know what the index is pointing at.
i’ll try to explain my problem a bit more broadly. i use (what i call) high-level catalog components, which include other components. now, if i define any key to be passed to the component, i can never remove it again. but if the key doesn’t contain anything, i’ll get <no value>, which will be passed on to terraform.
so i try to only set keys that are actually defined in the stack. to not set every subkey, i’d like to pass subkeys from my high-level component down to the component. but sometimes i don’t want to pass everything. for example, one module might need a.b.c, the other might need a.b.f. so if i could reference subkeys in my component, everything would be very nice
you need to set evaluations: 2
i have evaluations set to 2
and at least use this
ami_filters: '{{`{{ .ami_filters }}`}}'
so it will be evaluated at the second pass
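For reference, multi-pass evaluation is configured in atmos.yaml; a minimal sketch (other template settings omitted):
templates:
  settings:
    enabled: true
    # process templates twice, so escaped expressions like '{{`{{ .ami_filters }}`}}'
    # are rendered on the second pass
    evaluations: 2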
ok, one second
hm, now it shows no error, but <no value>
# this will be passed as yaml structure to the component
ami_filters: '{{ .ami_filters }}'
ami_filter_owner: !template '{{ `{{.ami_filters.name}}` }}'
❯ atmos describe component wms-cluster/1 -s wms-xe02-sandbox | yq .vars
ami_filter_owner: <no value>
ami_filters:
  name: KN-RHEL8-*
  owner_id: "146727266814"
oh
you don’t need to use such a complex config
imports with templates then templates again
show me the final config that you are using
this is my high level catalog item
import:
  - path: catalog/components/remote-state/v0_1
    context:
      name: wms-account-region
      component: wms-account-region
      version: v0.2
      tenant: '{{ .settings.account.tenant }}'
      tier: 'wms'
      instance: '{{ .settings.account.instance }}'
      stage: '{{ .settings.account.stage }}'
      workspace_tier: account
      region: '{{ .settings.region }}'
  - path: catalog/components/wms-base/v1_3
  - path: catalog/components/wms-cluster/v1_5
    context:
      automation_account_id: '{{ .settings.aws_automation_account_id }}'
      automation_user_name: '{{ .settings.aws_automation_user_name }}'
      ex_environment_id: '{{ .settings.wms.ex_environment_id }}'
      subnet_index: '{{ .settings.wms.subnet_index }}'
      ami_filters: !template '{{ .settings.wms.ami_filters | toJson }}'
this is the component wms-cluster/v1_5
components:
  terraform:
    wms-cluster/_defaults:
      metadata:
        type: abstract
      settings:
        version: wms-cluster/v1.5
        depends_on:
          1:
            component: wms-base
      vars:
        remote_state:
          wms_base: wms-base
        ex_environment_id: '{{ .ex_environment_id }}'
        automation_account_id: '{{ .automation_account_id }}'
        automation_user_name: '{{ .automation_user_name }}'
        # this will be passed as yaml structure to the component
        ami_filters: '{{ .ami_filters }}'
        ami_filter_owner: '{{ `{{.ami_filters.name }}` }}'
this is the stack component settings (redacted)
❯ atmos describe component wms-cluster/1 -s wms-xe02-sandbox | yq .settings
account:
  instance: "2"
  stage: sandbox
  tenant: accounts
  tier: wms
account_id: "xxxxxxx"
aws:
  ami_image: ami-xxxxxx
aws_automation_account_id: "xxxxxxx"
depends_on:
  1:
    component: wms-base
instance: xe02
version: wms-cluster/v1.5
wms:
  ami_filters:
    name: KN-RHEL8-*
    owner_id: "xxxxx"
  ex_environment_id: xxxxxx
  subnet_index: 0
and the vars
❯ atmos describe component wms-cluster/1 -s wms-xe02-sandbox | yq .vars
ami_filter_owner: <no value>
ami_filters:
  name: KN-RHEL8-*
  owner_id: "146727266814"
default_tags:
  Managed-By: Terraform
....
ok, thanks, I’ll give it another look in an hour
Thx!
@Stephan Helas please change this
ami_filter_owner: '{{ `{{.ami_filters.name }}` }}'
to this
ami_filter_owner: '{{ .ami_filters.name }}'
and see what you get
also, if your map key is a complex string, contains special characters, or is not a string, you can use the index function:
KN-RHEL8-*
has special chars
so maybe try this
ami_filter_owner: '{{ index .ami_filters "name" }}'
i already did. I’ll try to create a minimal repo where you can reproduce the problem
please do
it’s not easy to work with templates and find issues just by looking at them
i understand
Imports for overrides @aknysh (#831)
what
• Allow importing stack manifests with the overrides sections
• Update docs
• Update docs
• https://atmos.tools/core-concepts/stacks/overrides
• https://atmos.tools/core-concepts/stacks/overrides/#importing-the-overrides
why
• To make the overrides configuration DRY and reusable, you can place the overrides sections into a separate stack manifest, and then import it into other stacks
For example:
Define the overrides sections in a separate manifest stacks/teams/testing-overrides.yaml:
# Global overrides
# Override the variables, env, command and settings ONLY in the components managed by the testing Team.
overrides:
  env:
    # This ENV variable will be added or overridden in all the components managed by the testing Team
    TEST_ENV_VAR1: "test-env-var1-overridden"
  settings: {}
  vars: {}
# Terraform overrides
# Override the variables, env, command and settings ONLY in the Terraform components managed by the testing Team.
# The Terraform overrides are deep-merged with the global overrides and take higher priority
# (they will override the same keys from the global overrides).
terraform:
  overrides:
    settings:
      spacelift:
        # All the components managed by the testing Team will have the Spacelift stacks auto-applied
        # if the planning phase was successful and there are no plan policy warnings
        # https://docs.spacelift.io/concepts/stack/stack-settings#autodeploy
        autodeploy: true
    vars:
      # This variable will be added or overridden in all the Terraform components managed by the testing Team
      test_1: 1
    # The testing Team uses tofu instead of terraform
    # https://opentofu.org
    # The commands atmos terraform <sub-command> ... will execute the tofu binary
    command: tofu
Import the stacks/teams/testing-overrides.yaml manifest into the stack stacks/teams/testing.yaml:
import:
  # The testing Team manages all the components defined in this stack manifest and imported from the catalog
  - catalog/terraform/test-component-2
  # The overrides in teams/testing-overrides will affect all the components in this stack manifest
  # and all the components that are imported AFTER the overrides from teams/testing-overrides.
  # It will NOT affect the components imported from catalog/terraform/test-component-2.
  # The overrides defined in this manifest will affect all the imported components, including catalog/terraform/test-component-2.
  - teams/testing-overrides
  - catalog/terraform/test-component
  - catalog/terraform/test-component-override
The overrides in this stack manifest take precedence over the overrides imported from teams/testing-overrides
# Global overrides
# Override the variables, env, command and settings ONLY in the components managed by the testing Team.
overrides:
  env:
    # This ENV variable will be added or overridden in all the components managed by the testing Team
    TEST_ENV_VAR1: "test-env-var1-overridden-2"
  settings: {}
  vars: {}
# Terraform overrides
# Override the variables, env, command and settings ONLY in the Terraform components managed by the testing Team.
# The Terraform overrides are deep-merged with the global overrides and take higher priority
# (they will override the same keys from the global overrides).
terraform:
  overrides:
    vars:
      # This variable will be added or overridden in all the Terraform components managed by the testing Team
      test_1: 2
NOTES:
• The order of the imports is important. The overrides in teams/testing-overrides will affect all the components in this stack manifest and all the components that are imported after the overrides from teams/testing-overrides. In other words, the overrides in teams/testing-overrides will be applied to the catalog/terraform/test-component and catalog/terraform/test-component-override components, but not to catalog/terraform/test-component-2
• On the other hand, the overrides defined in this stack manifest stacks/teams/testing.yaml will be applied to all components defined inline in stacks/teams/testing.yaml and all the imported components, including catalog/terraform/test-component-2
• The overrides defined inline in the stack manifest stacks/teams/testing.yaml take precedence over the overrides imported from teams/testing-overrides (they will override the same values defined in teams/testing-overrides)
can i disable the atmos update check?
Not yet, but I can create a task for that.
We should only be running it on atmos version and any --help command.
Can you confirm why you want to disable it, so we fix the right thing.
yes please. i’d like it to be optional or at least only check on atmos version
Looks like it runs on Terraform commands as well
√ . [devops] (HOST) infra ⨠ atmos terraform plan eks/app -s plat-use1
╭──────────────────────────────────────────────────────────────╮
│ Update available! 1.119.0 » 1.121.0 │
│ Atmos Releases: <https://github.com/cloudposse/atmos/releases> │
│ Install Atmos: <https://atmos.tools/install> │
╰──────────────────────────────────────────────────────────────╯
Initializing the backend...
Grr ok, that should not be happening
When is this one going to get merged? https://github.com/cloudposse/atmos/pull/762 Sadly gotemplate does not support azure keyvault as a datasource
what
Integrate vals as a template function.
why
Loading configuration values and secrets from external sources, supporting various backends.
Summary by CodeRabbit
• New Features
• Introduced the atmos.Vals template function for loading configuration values and secrets from external sources.
• Added a logging mechanism for improved tracking of value operations.
• Updates
• Updated various dependencies to newer versions, enhancing compatibility with cloud services and improving overall performance.
• Documentation
• Added comprehensive documentation for the atmos.Vals template function, including usage examples and security best practices.
@Igor Rodionov is this possible :
- name: Get atmos settings
  id: atmos-settings
  uses: cloudposse/github-action-atmos-get-setting@v2
  with:
    settings: |
      - stack: ${{ inputs.stack }}
        settingsPath: settings.integrations.github.gitops.azure.ARM_CLIENT_ID
        outputPath: ARM_CLIENT_ID
not passing a component to the get-settings?
nope
it is supposed to be required: false, but I get an Error: invalid input
I think no
ok, so the docs are wrong: required: false
is there a default component?
with terraform version 1.10.1… I am getting this error for atmos plan or apply:
│ Error: Extraneous JSON object property
│
│ on backend.tf.json line 11, in terraform.backend.s3:
│ 11: "role_arn": "arniam:role/abc-usw2-e1",
│
│ No argument or block type is named "role_arn".
╵
How to resolve this?
I resolved it by adding:
assume_role:
  role_arn:
@Alcp you added that under terraform > providers > aws ?
in a stack?
yes in the stack for the backend and remote state section
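For anyone hitting the same error, a minimal sketch of the stack-level backend config for Terraform 1.10 (bucket, table, region, and role ARN are hypothetical):
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-tfstate"
      dynamodb_table: "acme-tfstate-lock"
      region: "us-west-2"
      # Terraform 1.10 removed the legacy top-level role_arn argument from the s3 backend;
      # role assumption now has to be nested under assume_role
      assume_role:
        role_arn: "arn:aws:iam::111111111111:role/tfstate-access"
The remote_state_backend section accepts the same shape, so the same assume_role nesting applies there.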
Hello! Anybody had an error like this one? When I add a new context value (‘environment’ in my particular case) to an existing stack and try to deploy, I get: … template: … executing … at <.environment>: map has no entry for key “environment”. I’ve checked the import parameters - they are OK. Somehow the problem persists when I try to introduce new values to context
Can you share the corresponding YAML
2024-12-10
FYI, the atmos schema is missing the new metadata.enabled field
"metadata": {
....
"component": {
"type": "string"
},
"enabled": {
"type": "boolean"
},
^^^^^^^^^^^^
"inherits": {
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
....
thanks @Stephan Helas, it will be added in the next release
@Andriy Knysh (Cloud Posse) should I create a task?
Or did you fix it in 1.122?
no, i missed it in 1.122.0
will add in another release
@Stephan Helas the validation schema has been updated (and merged to main)
what
• Update Atmos stack manifest JSON schema
• Fix docs
why
• Add the new attribute metadata.enabled to the Atmos stack manifest validation schema
• Fix some typos in the docs
Summary by CodeRabbit
• New Features
• Added an enabled property (boolean) to the metadata definition in various Atmos Stack Manifest JSON schemas, enhancing configuration capabilities.
• Documentation
• Updated the “Terraform Backends” document for clarity and added details on supported backends and configuration examples.
• Enhanced the “Override Configurations” document with explicit examples for using the overrides section in stack manifests.
• Revised the Atlantis integration document to clarify configuration options and workflows.
• Restructured the “Configure Terraform Backend” document, emphasizing remote backend use and providing detailed configuration instructions.
• Chores
• Updated the Atmos tool version in the Dockerfile for the quick-start advanced examples.
Ignore template files when executing atmos validate stacks @Cerebrovinny (#830)
what
• Ignore template files when executing atmos validate stacks (exclude template files .yaml.tmpl, .yml.tmpl, .tmpl from being validated)
• Update documentation to clarify template file handling behavior
why
• Some template files are not valid YAML files, and should not be validated before processing the templates
• Preserve backward compatibility for existing configurations that previously used .tmpl stack manifest files
Refactor preview deployments workflows @goruha (#836)
what
• Decouple the website-deploy-preview deployment workflow into website-preview-build and website-preview-deploy workflows
• Renamed the website-destroy-preview workflow to website-preview-destroy
• Inactivate GitHub deployment on preview destroy workflow
why
• Support preview environment deployments for PRs from forks
• Follow workflow naming consistency
• Support cleaning up environments on preview destroy
Question about using static remote state. https://atmos.tools/core-concepts/components/terraform/brownfield#hacking-remote-state-with-static-backends
I have a very basic MVP based on the simple tutorial.
• Terraform component wttr
I have set up abstract inheritance with static remote_state.
• Atmos abstract component weather-report-abstract based on the wttr component, with static remote state overriding an output location
• Atmos component weather-report-disneyland, which inherits from weather-report-abstract
I am using the terraform.output function to reference the output from the static component.
• Atmos component hello-world, which uses the weather-report-disneyland output to do some string concat
But when run, it’s not using the static values at all.
Here is the stack config:
vars:
  stage: static
components:
  terraform:
    weather-report-abstract:
      metadata:
        type: abstract
        component: wttr
      remote_state_backend_type: static
      remote_state_backend:
        static:
          location: disneyland
      vars:
        location: Seattle
        language: en
        format: ''
        options: ''
        units: m
    weather-report-disneyland:
      metadata:
        component: wttr
        inherits:
          - weather-report-abstract
      vars: {}
    hello-world:
      vars:
        #name: !terraform.output weather-report-abstract static location
        name: !terraform.output weather-report-disneyland static location
The wttr terraform component gets the weather. The hello-world component just concats the var with a message.
Here is the output I am seeing:
Changes to Outputs:
+ text = "Seattle says hello world"
@Matt Schmidt you are correct, the !terraform.output func does not take into account a static backend (it needs to be improved)
the static backend is currently used by this TF module https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state which uses this TF provider https://github.com/cloudposse/terraform-provider-utils
we will add that to the yaml function in one of the new releases
Great, let me see if I can get it working without the !terraform.output. I’ll post back here if I need more help
Hi there. I’ve got a question about the go templating abilities. I’m trying to construct a JSON object to pass into a module, and through testing I’ve pared it down to just a range loop:
records: '{{ range $i, $fqdn := ((atmos.Component "nproxy" .stack).outputs.fqdns) }}{{$fqdn}}{{end}}'
However, any time I try to run atmos, it gives
template: all-atmos-sections:233: unexpected "}" in operand
Could anyone point me in the right direction? Thanks
what’s the data type of records?
instead of using range, take a look at the Atmos YAML functions
Advanced
!template '{{ toJson (atmos.Component
will give you back the correct type
if fqdns is a list of strings, you will get back a list of strings
same with using !terraform.output - it will return the correct types (complex types like maps and lists)
so you don’t need to construct all of that using templates
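For example, a minimal sketch that pulls the nproxy output from this thread with its original type (the consuming component name dns-records is hypothetical; omitting the stack argument makes the function use the current stack):
components:
  terraform:
    dns-records:
      vars:
        # returns the real list of strings, not a template-rendered string
        fqdns: !terraform.output nproxy fqdns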
Hey Andriy, thanks for replying. I’m looking to take a few different outputs from a module and stitch them into a json object for another module.
I was hoping to keep the outputs of Module1 general, but it seems the least annoying path forward here might be to just output the object already constructed
I have my fqdns and their associated IPs in two different outputs (list of strings) from Module1. I was hoping to stitch them together with some templating to send them into Module2 as a complex object
Given
fqdns = ["host1","host2"]
ips = ["x.x.x.x", "y.y.y.y"]
I was hoping to construct something like
records:
- name: host1.example.com
type: A
values:
- x.x.x.x
- name: host2.example.com
...
this is a complicated case :slightly_smiling_face: I don’t know why it throws the error (I can test it later).
Even if it did not throw the error, it would not work (in principle) b/c '{{ range $i, $fqdn := ((atmos.Component "nproxy" .stack).outputs.fqdns) }}{{$fqdn}}{{end}}' will return a string with the content generated by the template, not a list of objects
please try
records: !template '{{ (atmos.Component "nproxy" .stack).outputs.fqdns }}'
and see what it generates
Appreciate your time on this. I’m aware that my current attempted template wouldn’t return the object I’m trying to build. I had written it all out, and I kept removing pieces to try to debug the } error. This was the simplest statement I could think of to test.
run atmos describe component to see the result after template evaluation
That works as expected:
records:
  - host1 host2 host3
tho, I needed the toRawJson to get terraform to pass the object. I went with the pre-building approach:
Is there a way to narrow the scope of Atmos describe affected to a particular stack?
now it supports --stack
tyty
same with the github action
Since updating my Atmos, I get JSON schema validation errors when I run atmos describe stacks (I don’t know the previous behavior). For example, one error seems to indicate that I should have components in definitions/, not at the root of the file. Could I get some broader context of what may be going on here?
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse)
in the atmos.yaml it looks like I have schemas: {}
the latest version of Atmos will automatically use the remote schema https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json to validate stacks if you don’t specify it in atmos.yaml
i don’t know what’s happening in your case, maybe since you did not have validation before, your stack configs are not valid?
can you share the error here?
@Andriy Knysh (Cloud Posse) is there a way to turn off validation?
currently, the only way is to copy the schema locally and point atmos.yaml to it
you can update the local schema to not "validate" some sections
but I think the problem is, some of your stacks are not “valid” - I don’t know if it’s true, but you can check the errors
if it’s true, then it has to be fixed
the errors seem so fundamental that it seems odd that they’ve been working if they truly are invalid
can you show the exact error?
so one is something like:
...
{ "keywordLocation": "properties/components/$ref",
"absoluteKeywordLocation": "<schema store url ending in /components/$ref>",
"instanceLocation": "/components",
"error": "doesn't validate with '/definitions/components'" }
...
Just ran a terraform plan. So I guess the errors aren’t getting in the way (well of at least that stack / component combo).
do you have other parts of the error? (the last part usually shows the location)
the error format is basically a list of the entries matching the following pattern :
Atmos manifest JSON Schema validation error in the file '<file name>':
{ "valid": false, "errors": [<structured like the previous message>...] }
would the location be in the first map I sent? looks to me like they all have the same 4 keys
appreciate the help!
i don’t understand what’s going on here. Can you copy https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json to a local file, and then point atmos.yaml to it?
sure I will give that a shot
# Validation schemas (for validating atmos stacks and components)
schemas:
  # JSON Schema to validate Atmos manifests
  # <https://atmos.tools/cli/schemas/>
  # <https://atmos.tools/cli/commands/validate/stacks/>
  # <https://atmos.tools/quick-start/advanced/configure-validation/>
  # <https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json>
  # <https://json-schema.org/draft/2020-12/release-notes>
  # <https://www.schemastore.org/json>
  # <https://github.com/SchemaStore/schemastore>
  atmos:
    # Can also be set using 'ATMOS_SCHEMAS_ATMOS_MANIFEST' ENV var, or '--schemas-atmos-manifest' command-line argument
    # Supports both absolute and relative paths (relative to the `base_path` setting in `atmos.yaml`)
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
ty for your help! the interesting/actionable parts of those sequences often seem to be at the end, or maybe spread out in a cyclical fashion.
did you use the local file?
yeah, though I don’t know if that was the issue. why have that be the path for the local file? my inclination would be to put it somewhere like ./schemas/atmos-manifest-1.json or something like that
i just wanted to know if there’s a problem on your system downloading the remote schema.
You can download the file, place it in any location you want (inside or outside the repo), point atmos.yaml to it, and run atmos validate stacks again
@Andriy Knysh (Cloud Posse) yeah i was more asking from a conceptual point of view why that location. but if it’s just preference that’s a fine answer for me. Thanks again for all your help !
no problem
but what was the result of that? What does atmos validate stacks show?
@Andriy Knysh (Cloud Posse) nothing now, I worked through the errors. There were some that were actionable and dealing with those also cleared up the rest.
2024-12-11
Hello. i have a quick design question. can i use the remote-state provider with gitlab terraform http backend?
Do you mean Cloud Posse module? If so, then it should be fine. It aims to manage all Terraform/OpenTofu supported backends.
That should work, but we don’t have an example
Hey, is it intended that atmos terraform show <stack_name>.planfile deletes the planfile afterwards? If yes, can I stop atmos from doing this?
That sounds like an oversight
@Shubham Tholiya took a look at this and could not reproduce it
Can you share what order of commands you ran?
I first run atmos terraform plan -s <stack_name> argocd and then atmos terraform show -s <stack_name> argocd --json <stack_name>-argocd.planfile --skip-init. The same behaviour happens if I run atmos terraform show -s <stack_name> argocd --skip-init -- --json <stack_name>-argocd.planfile instead
If I run terraform show --json <stack_name>-argocd.planfile inside the component directory, the planfile remains, so I was suspecting atmos to delete the file
And can you confirm the version of atmos you are using?
I’m running version 1.123.0
https://github.com/cloudposse/atmos/pull/855 This pr should fix the issue
issue: https://linear.app/cloudposse/issue/DEV-2825/atmos-terraform-show-deletes-planfile
what
• Stop deleting planfile after terraform show
why
We should not delete the terraform plan file if the user uses show to check their results.
Test
Steps:
cd examples/quick-start-simple
../../build/atmos terraform plan -s dev station
../../build/atmos terraform show -s dev station --json dev-station.planfile --skip-init
Verified whether the file ./examples/quick-start-simple/components/terraform/weather/dev-station.planfile still exists.
Before this change: deleted
After this change: present
references
issue: https://linear.app/cloudposse/issue/DEV-2825/atmos-terraform-show-deletes-planfile
conversation: https://sweetops.slack.com/archives/C031919U8A0/p1733924783112799
:rocket: Enhancements
atmos terraform show command should not delete Terraform planfiles @samtholiya (#855)
what
• atmos terraform show command should not delete Terraform planfiles
why
• We should not delete the terraform plan file when the user executes the atmos terraform show command
Thanks for fielding our questions today @Erik Osterman (Cloud Posse)! Just so @Matt Schmidt and I know, when (or with whom) should we follow up for the example code y’all offered up?
@Jeremy G (Cloud Posse) can you share the example static backend for account map
I’d rather not share the static backend for account map we are using for testing, because it is sort of an anti-pattern, very customized for our testing. Let me put together an example of a static backend for account that is more aligned with our plans for brownfield support.
Dynamic width terminal calculation for atmos --help @Cerebrovinny (#815)
what
• Help should detect proper screen width
why
• Terminal width should be dynamically calculated when the user prints help
[workflows] Fix preview deployment inactivation @goruha (#843)
what
• Deactivate preview env deployment only for the current branch
why
• Allow having a deployment per PR
Update Atmos stack manifest JSON schema. Fix docs @aknysh (#842)
what
• Update Atmos stack manifest JSON schema
• Fix docs
why
• Add the new attribute metadata.enabled to the Atmos stack manifest validation schema
• Fix some typos in the docs
Fix deployments overrides @goruha (#841)
what
• Fix deployments overrides
why
• Allow each PR to have independent preview env deployment
A minor nitpick: it would appear that the 1.0.1 release of cloudflare-zone only had a ref starting with v, which appears to be inconsistent with the other releases (and other components) https://github.com/cloudposse/terraform-cloudflare-zone/releases
@Igor Rodionov
See discussion here: https://sweetops.slack.com/archives/CB6GHNLG0/p1732309900582849
this might be a silly question, but why do some CloudPosse modules have tags with the v prefix and others do not, while all releases have the prefix? is this a bug?
Like this module, cloudposse/terraform-null-label: releases have the v prefix (v0.25.0) but the tag is without (0.25.0)
but then we have this module, cloudposse/terraform-aws-alb, where it matches and both the tag and the release have the v prefix (v2.1.0)
what is the preferred way?
Has anyone used atmos to manage an existing cloudflare zone? When you register a domain via cloudflare the zone will automatically be created. I assume I need to import it somehow, but tips on how to go about that in atmos would be appreciated
I think that is more of terraform question, no?
Atmos runs terraform basically
well in terraform you create a resource and run import, but I am struggling to do the mapping between creating a resource in a terraform file and the yaml definition in a stack in atmos
mmm can you do this in vanilla terraform ?
I mean, does it work?
or do you have to do a two-stage deployment to get it done?
yeah, and it’s the suggested approach to move from vanilla TF to Atmos; just struggling to get my head around how you define a resource manually in atmos https://docs.cloudposse.com/learn/maintenance/tutorials/how-to-use-atmos-with-existing-terraform/#part-5-importing-state
How to use Atmos with existing Terraform
atmos will pass the inputs to terraform
so, if the import block needs some var.resource_id, you will need to pass that in the stack yaml
for example:
keyvault/infra:
  metadata:
    component: keyvault
  settings:
    depends_on:
      1:
        component: "vnet"
      2:
        component: "loganalytics"
  vars:
    name: "outputs"
    public_network_access_enabled: true
    network_acls:
      bypass: "None"
      default_action: "Allow"
    import_from: /resource/myresource/id/azure
and you can use var.import_from in the import block if you need to
For anyone following along later: you can use zone_enabled: false to not create a zone
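A minimal sketch of how that might look in a stack manifest (the component name cloudflare-zone is hypothetical):
components:
  terraform:
    cloudflare-zone:
      vars:
        # reuse the zone Cloudflare created at registration instead of creating a new one
        zone_enabled: false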
2024-12-12
Hey, the documentation of atmos for helmfile states that “deploy” runs “diff” and then “apply”, but the code looks like it runs just “sync”? https://github.com/cloudposse/atmos/blob/v1.123.0/internal/exec/helmfile.go#L128
This is a bit misleading, as I was wondering why diff did not show any changes, but my chart was deployed anyways.
info.SubCommand = "sync"
Also, I don’t think we will invest much more into helmfile support without community contributions. The fix here is to correct the documentation.
That is sad. How do you handle helm deployments? Also with terraform? How did you overcome the inconsistencies terraform sometimes has with helm?
We predominantly use either a) terraform+helm or b) argocd
I think the inconsistencies with terraform+helm have largely been resolved.
Historically, it was a big problem
Does Atmos have any known incompatibility with terraform 1.10? Trying to upgrade from 1.9.8 -> 1.10.2 to use the new ephemeral resource type, and atmos complains about the backend configuration changing, but won’t let me try to reinit or reconfigure to migrate state, even with this in my config:
components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
Example:
$ atmos terraform deploy aurora -s stackname git:(branchname|✚3…4⚑4)
template: all-atmos-sections:452:52: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure"
Doesn’t work even if I explicitly call atmos terraform init <component> --stack <stack> --args="-reconfigure" either
which version of atmos?
1.123.0
have you run atmos clean?
and try again?
I hadn’t, but doing so produces the same results. Steps:
- Switch back to tf 1.9.8
- Run atmos terraform clean <component> -s stackname
- Accept deletions
- Switch back to tf 1.10.2
- Attempt to deploy, which fails with the same error as before re: state config changes
I had this happen to me, and I remember running atmos clean, which basically removed the .terraform folder and the backend.json config
Yeah, unfortunately that didn’t do the trick, but thanks for the idea
Could be my local setup, actually. Looks like 1.10 removed some deprecated role assumption I might be relying on: https://github.com/hashicorp/terraform/pull/35721/files
ohhhh interesting
@Erik Osterman (Cloud Posse) // @Andriy Knysh (Cloud Posse) https://github.com/cloudposse/atmos/issues/850 issue filed. I think it’s in atmos, given the lack of the assume_role block inside the backend.tf.json
Describe the Bug
Terraform 1.10 dropped support for several legacy backend configuration options related to role assumption (see the release notes). The new format, when using role assumption, has the values nested within an assume_role block, which Atmos does not produce.
Expected Behavior
Atmos is able to properly support Terraform 1.10 and the underlying changes to s3 backend
Steps to Reproduce
Start with any atmos stack, running any terraform version 1.9.x or older.
- deploy a resource using 1.9.x
- Upgrade terraform to 1.10.x
- Attempt to run atmos commands using this TF version.
Atmos will consistently produce an error stating the backend has changed and will not automatically reconfigure or re-init:
template: all-atmos-sections52: executing "all-atmos-sections" at
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init -migrate-state". If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
Screenshots
No response
Environment
OS: Any (but I’m running Darwin 15.1)
Atmos Version: Latest (1.123.0)
Additional Context
This will likely require that Atmos drops support for older versions of Terraform (hashicorp/terraform#35721) and may have other implications for OpenTofu (I don’t use it, so I can’t begin to guess)
i can’t look at this right now (will be able to look tomorrow).
In Atmos, you can configure any backend properties in the backend section. Did you try that?
you need to update the assume role section; terraform deprecated the old way some time ago, and now it looks like they removed it completely
in Atmos, we support adding any attributes to the backend section:
backend:
  s3:
    ......
I suppose I should report this to the developers of atmos, so i’m pasting the error here for help. This is when i’m deploying the ecs-service module.
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "public_lb_enabled" but a value was found in file
│ "cs-ai-apse2-dev-aws-project-tofu.terraform.tfvars.json". If you meant to use this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your
│ organization. To reduce the verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Warning: Deprecated Resource
│
│ with aws_s3_bucket_object.task_definition_template,
│ on main.tf line 570, in resource "aws_s3_bucket_object" "task_definition_template":
│ 570: resource "aws_s3_bucket_object" "task_definition_template" {
│
│ use the aws_s3_object resource instead
╵
╷
│ Warning: Argument is deprecated
│
│ with aws_ssm_parameter.full_urls,
│ on systems-manager.tf line 56, in resource "aws_ssm_parameter" "full_urls":
│ 56: overwrite = true
│
│ this attribute has been deprecated
╵
╷
│ Error: Request cancelled
│
│ with module.alb[0].data.utils_component_config.config[0],
│ on .terraform/modules/alb/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
╷
│ Error: Request cancelled
│
│ with module.ecs_cluster.data.utils_component_config.config[0],
│ on .terraform/modules/ecs_cluster/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
╷
│ Error: Plugin did not respond
│
│ with module.roles_to_principals.module.account_map.data.utils_component_config.config[0],
│ on .terraform/modules/roles_to_principals.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
│ details.
╵
╷
│ Error: Request cancelled
│
│ with module.vpc.data.utils_component_config.config[0],
│ on .terraform/modules/vpc/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵
Stack trace from the terraform-provider-utils plugin:
panic: assignment to entry in nil map
goroutine 62 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0x14000072050, 0x2c}, {{{0x14000b05d40, 0x14}, 0x0, {0x14000f07170, 0x2f}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
github.com/cloudposse/[email protected]/internal/exec/utils.go:438 +0xdac
github.com/cloudposse/atmos/pkg/component.ProcessComponentInStack({0x1400010e3c0?, 0x2?}, {0x140009df870?, 0x2?}, {0x0?, 0x5?}, {0x0?, 0x3?})
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:33 +0x180
github.com/cloudposse/atmos/pkg/component.ProcessComponentFromContext({0x1400010e3c0, 0x1b}, {0x140003985fa, 0x2}, {0x1400039862a, 0x2}, {0x1400039837b, 0x5}, {0x14000398620, 0x3}, ...)
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:80 +0x294
github.com/cloudposse/terraform-provider-utils/internal/provider.dataSourceComponentConfigRead({0x1043b9728?, 0x14000570c60?}, 0x140001a0f00, {0x0?, 0x0?})
github.com/cloudposse/terraform-provider-utils/internal/provider/data_source_component_config.go:121 +0x2f8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x14000825ea0, {0x1043b9728, 0x14000570c60}, 0x140001a0f00, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:823 +0xe4
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0x14000825ea0, {0x1043b9728, 0x14000570c60}, 0x140001a0380, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:1043 +0x110
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0x14000bfc150, {0x1043b9728?, 0x14000570ba0?}, 0x14000570b40)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1436 +0x5a0
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0x140009c5860, {0x1043b9728?, 0x140005702a0?}, 0x140004ae460)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:688 +0x1cc
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1042e36a0, 0x140009c5860}, {0x1043b9728, 0x140005702a0}, 0x140001a0180, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:665 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x14000813800, {0x1043b9728, 0x14000570210}, {0x1043c6cc0, 0x140007a8340}, 0x14000c399e0, 0x14000bfe960, 0x106130700, 0x0)
google.golang.org/[email protected]/server.go:1394 +0xb64
google.golang.org/grpc.(*Server).handleStream(0x14000813800, {0x1043c6cc0, 0x140007a8340}, 0x14000c399e0)
google.golang.org/[email protected]/server.go:1805 +0xb20
google.golang.org/grpc.(*Server).serveStreams.func2.1()
google.golang.org/[email protected]/server.go:1029 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 56
google.golang.org/[email protected]/server.go:1040 +0x13c
Error: The terraform-provider-utils plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
exit status 1
please make sure you are using the latest versions of Atmos and the remote-state module, and that you are using the latest version of the utils provider
https://github.com/cloudposse/terraform-provider-utils/releases/tag/v1.27.0 https://github.com/cloudposse/atmos/releases/tag/v1.122.0
both Atmos and the utils provider use the same code to process remote-state
(if it’s still not working, ping us again and we’ll take a look)
i’ve ensured i’m using the latest version of atmos, version 1.123, and the utils provider is at least version 1.27.
Downloading registry.terraform.io/cloudposse/stack-config/yaml 1.8.0 for alb...
- alb in .terraform/modules/alb/modules/remote-state
.
.
.
- Installing cloudposse/utils v1.27.0...
- Installed cloudposse/utils v1.27.0 (self-signed, key ID 7B22D099488F3D11)
still getting the error message.
Error: Plugin did not respond
│
│ with module.alb[0].data.utils_component_config.config[0],
│ on .terraform/modules/alb/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with module.ecs_cluster.data.utils_component_config.config[0],
│ on .terraform/modules/ecs_cluster/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with module.roles_to_principals.module.account_map.data.utils_component_config.config[0],
│ on .terraform/modules/roles_to_principals.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│
│ with module.vpc.data.utils_component_config.config[0],
│ on .terraform/modules/vpc/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-utils plugin:
panic: assignment to entry in nil map
goroutine 97 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0x14000072050, 0x2c}, {{{0x14000f202b8, 0x14}, 0x0, {0x14000df24e0, 0x2f}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
github.com/cloudposse/[email protected]/internal/exec/utils.go:438 +0xdac
github.com/cloudposse/atmos/pkg/component.ProcessComponentInStack({0x14000767780?, 0x2?}, {0x14000deb080?, 0x2?}, {0x0?, 0x5?}, {0x0?, 0x3?})
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:33 +0x180
github.com/cloudposse/atmos/pkg/component.ProcessComponentFromContext({0x14000767780, 0xb}, {0x140007677da, 0x2}, {0x140007677fa, 0x2}, {0x1400076775b, 0x5}, {0x140007677f0, 0x3}, ...)
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:80 +0x294
github.com/cloudposse/terraform-provider-utils/internal/provider.dataSourceComponentConfigRead({0x1035d9728?, 0x140008d6f00?}, 0x14000793800, {0x0?, 0x0?})
github.com/cloudposse/terraform-provider-utils/internal/provider/data_source_component_config.go:121 +0x2f8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x14000177340, {0x1035d9728, 0x140008d6f00}, 0x14000793800, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:823 +0xe4
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0x14000177340, {0x1035d9728, 0x140008d6f00}, 0x14000793680, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:1043 +0x110
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0x14000d83128, {0x1035d9728?, 0x140008d6b70?}, 0x140008d6a20)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1436 +0x5a0
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0x14000a8c1e0, {0x1035d9728?, 0x1400071b8f0?}, 0x1400026ae10)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:688 +0x1cc
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1035036a0, 0x14000a8c1e0}, {0x1035d9728, 0x1400071b8f0}, 0x14000793300, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:665 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x14000289400, {0x1035d9728, 0x1400071b860}, {0x1035e6cc0, 0x14000750000}, 0x14000aac480, 0x1400081fc80, 0x105350700, 0x0)
google.golang.org/[email protected]/server.go:1394 +0xb64
google.golang.org/grpc.(*Server).handleStream(0x14000289400, {0x1035e6cc0, 0x14000750000}, 0x14000aac480)
google.golang.org/[email protected]/server.go:1805 +0xb20
google.golang.org/grpc.(*Server).serveStreams.func2.1()
google.golang.org/[email protected]/server.go:1029 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 42
google.golang.org/[email protected]/server.go:1040 +0x13c
Error: The terraform-provider-utils plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
exit status 1
for the utils provider to work correctly, make sure your atmos.yaml is in the correct location, please see https://atmos.tools/core-concepts/components/terraform/remote-state/#atmos-configuration
@Andriy Knysh (Cloud Posse), i think i may have found where the issue is. You’re right in the sense that i should ensure my atmos configuration is correctly done, which i have.
I think the atmos cli doesn’t handle it well if a user’s home directory has a dot in its name. For example, my home directory’s actual name is /Users/samuel.than, and i store my atmos components, config and stacks in the directory ~/cloud-ops, which specifically is /Users/samuel.than/cloud-ops. initially, setting the environment variable $ATMOS_BASE_PATH=/Users/samuel.than/cloud-ops didn’t seem to work (it gives the error that i’ve been getting, plugin crashed)
Once i moved the atmos repo to something like /usr/local/etc/cloud-ops and set my $ATMOS_BASE_PATH to /usr/local/etc/cloud-ops, it started to work
oh, nice find, thanks
yeah, that probably explains the crazy errors I used to encounter previously when it couldn't find remote modules…
looking forward to seeing the fix
That is so odd. Why would the dot have any effect on atmos behavior? @Andriy Knysh (Cloud Posse) do you have any ideas?
not yet, need to test it
I've encountered this error as well, even on projects that do not make use of remote-state. We don't have a dot in the username, and have set ATMOS_BASE_PATH. Running atmos v1.47.0 (I know) and Terraform v1.5.7.
We have pinned utils back to v1.26.0 on a few projects as a workaround
@Tyler Rankin can you elaborate? It sounds like if you don't have ATMOS_BASE_PATH set then you have this problem. (this is a known issue, and we will open a PR for this soon)
But I don’t see why pinning versions of atmos changes that behavior
1.26.0 is ancient
Apologies, I meant to say we are setting ATMOS_BASE_PATH as expected and don't use a dot in the username (like OP originally thought the issue was). Everything worked for us up until the latest terraform-provider-utils release (1.27.0).
We are pinning terraform-provider-utils back to the previous 1.26.0, not atmos. This issue also came up here. I'm not sure of the intricacies between remote-state / atmos / utils, but utils 1.27.0 seems to have caused us issues
Describe the Bug
Running Terraform plan causes the plugin to error with an exit code 1: panic: assignment to entry in nil map. From the output message: This is always indicative of a bug within the plugin.
Expected Behavior
Terraform plan outputs the following message:
`Error: Plugin did not respond
with module.cross_region_hub_connector["use2"].data.utils_component_config.config[0],
on .terraform/modules/cross_region_hub_connector/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
1: data "utils_component_config" "config" {
The plugin encountered an error, and failed to respond to the
plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
details.
Error: Plugin did not respond
with module.iam_roles.module.account_map.data.utils_component_config.config[0],
on .terraform/modules/iam_roles.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
1: data "utils_component_config" "config" {
The plugin encountered an error, and failed to respond to the
plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
details.
Error: Plugin did not respond
with module.tgw_hub.data.utils_component_config.config[0],
on .terraform/modules/tgw_hub/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
1: data "utils_component_config" "config" {
The plugin encountered an error, and failed to respond to the
plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
details.
Error: Plugin did not respond
with module.tgw_hub_role.module.account_map.data.utils_component_config.config[0],
on .terraform/modules/tgw_hub_role.account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
1: data "utils_component_config" "config" {
The plugin encountered an error, and failed to respond to the
plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more
details.
Stack trace from the terraform-provider-utils plugin:
panic: assignment to entry in nil map
goroutine 150 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0xc000078090
github.com/cloudposse/[email protected]/internal/exec/utils.go:438 +0x109d
github.com/cloudposse/atmos/pkg/component.ProcessComponentInStack({0xc0028dfb20
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:33 +0x1e7
github.com/cloudposse/atmos/pkg/component.ProcessComponentFromContext({0xc0028dfb20
github.com/cloudposse/[email protected]/pkg/component/component_processor.go:80 +0x3b4
github.com/cloudposse/terraform-provider-utils/internal/provider.dataSourceComponentConfigRead({0x3b9a028
github.com/cloudposse/terraform-provider-utils/internal/provider/data_source_component_config.go:121 +0x3fb
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc0009e2540, {0x3b9a028, 0xc0027a0e10}, 0xc00184ec00, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:823 +0x119
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc0009e2540, {0x3b9a028, 0xc0027a0e10}, 0xc00184eb00, {0x0, 0x0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:1043 +0x13a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc000a63710, {0x3b9a028?, 0xc0027a0d50?}, 0xc0027a0cf0)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1436 +0x6aa
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc000997860, {0x3b9a028?, 0xc0027a0270?}, 0xc000b86280)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:688 +0x26d
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x3449080
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:665 +0x1a6
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000926800, {0x3b9a028, 0xc0027a01e0}, {0x3ba7740, 0xc000685d40}, 0xc002843d40, 0xc000a6b1d0, 0x598b680, 0x0)
google.golang.org/[email protected]/server.go:1394 +0xe2b
google.golang.org/grpc.(*Server).handleStream(0xc000926800, {0x3ba7740, 0xc000685d40}, 0xc002843d40)
google.golang.org/[email protected]/server.go:1805 +0xe8b
google.golang.org/grpc.(*Server).serveStreams.func2.1()
google.golang.org/[email protected]/server.go:1029 +0x7f
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 16
google.golang.org/[email protected]/server.go:1040 +0x125
Error: The terraform-provider-utils plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin’s maintainers so that it
can be fixed. The output above should help diagnose the issue.
exit status 1`
Steps to Reproduce
I’m using the Cloud Posse ecosystem, with as much boilerplate setup as possible; the method to recreate this issue is simply calling atmos terraform plan tgw/spoke --stack core-use2-network
…
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) do we need a task to track this?
@Andriy Knysh (Cloud Posse) could this be related to that other report we had this week on the provider? …where you said to pin to 1.26.0?
i don’t know yet what the issue is with the 1.27.0 utils
provider, but one more person reported an issue. I’ll check it and fix. For now, please pin the provider to 1.26.0
oh, I see, this error
panic: assignment to entry in nil map
is fixed in a new PR and will be released as a new Atmos version today
we’ll then update the utils
provider
btw @Tyler Rankin, the error above happens if you are using a non-existent component in Atmos commands like atmos terraform plan. You should take a look at this in your stack configs. The new Atmos version will display a correct error message instead of panicking, but the issue with invalid component names will not be fixed (it's a configuration issue, if any)
but let's not jump to any conclusions; we'll release a new Atmos version and utils provider version today, and you can test again
Thanks for the additional details @Andriy Knysh (Cloud Posse). I was beginning to recognize a similar pattern. We have stacks that may or may not make use of certain components, and as such we have utilized var.ignore_errors
in a few places since we only care about a specific component if it is actually deployed.
Looking forward to running tests on the updated versions
@Andriy Knysh (Cloud Posse) I built a local utils provider using the new atmos 1.130.0 and was able to see the panic has been removed when calling utils_component_config. Is terraform-provider-utils scheduled for a release today?
2024-12-13
Has anyone run into an issue where the component name is interpreted by atmos/terraform as a command line flag?
Error parsing command-line flags: flag provided but not defined: -rds-key
I am experimenting with conditionally setting the name_pattern via name_template
and wondering if my config adjustments are causing the error.
[I am using atmos v1.123.0.]
More context on the specific command I am running,
# run from within a Geodesic container using atmos 1.123.0
$ atmos terraform plan rds-key --stack ue2-stage
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to OpenTofu
- Reusing previous version of cloudposse/utils from the dependency lock file
- Reusing previous version of datadog/datadog from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/external from the dependency lock file
- Using previously-installed datadog/datadog v3.49.0
- Using previously-installed hashicorp/aws v3.76.1
- Using previously-installed hashicorp/local v2.5.2
- Using previously-installed hashicorp/external v2.3.4
- Using previously-installed cloudposse/utils v1.7.1
OpenTofu has been successfully initialized!
Usage: tofu [global options] workspace select NAME
Select a different OpenTofu workspace.
Options:
-or-create=false Create the OpenTofu workspace if it doesn't exist.
Error parsing command-line flags: flag provided but not defined: -rds-key
Usage: tofu [global options] workspace new [OPTIONS] NAME
Create a new OpenTofu workspace.
Options:
-lock=false Don't hold a state lock during the operation. This is
dangerous if others might concurrently run commands
against the same workspace.
-lock-timeout=0s Duration to retry a state lock.
-state=path Copy an existing state file into the new workspace.
Error parsing command-line flags: flag provided but not defined: -rds-key
exit status 1
In atmos.yaml, I am trying to conditionally use environment in the name pattern via name_template for a few new resources I am adding in a different region.
# atmos.yaml
stacks:
base_path: "stacks"
included_paths:
- "org/**/*"
name_pattern: "{stage}" # old format for v1 infra
# name_pattern: "{environment}-{stage}" # new format for v2 infra, but I don't want to rename everything
name_template: "{{- if .vars.environment }}{{.vars.environment}}-{{end}}{{.vars.stage}}"
excluded_paths:
- "**/_*.yaml" # Do not consider any file beginning with "_" as a stack file
I'll bet @Andriy Knysh (Cloud Posse) knows what's going on here.
Looks like the name pattern is working when finding the stack, but then the workspace name creation isn't working?
@Andriy Knysh (Cloud Posse) bumping this up
@Andriy Knysh (Cloud Posse) do you have any insight on this? If not, I’m planning to hack a bit on it to see if I can resolve the component name being parsed as a cli option/flag
@Weston Platter what CLI command did you run?
(also note, if both name_template and name_pattern are defined in atmos.yaml, name_template takes precedence and overrides name_pattern; both are not supported at the same time, because it's not possible to know which one to use)
i’ll test what you defined here
name_template: "{{- if .vars.environment }}{{.vars.environment}}-{{end}}{{.vars.stage}}"
here’s the specific cli command I ran,
atmos terraform plan rds-key --stack ue2-stage
where rds-key was the Atmos component name
Thanks for taking a look!
Why might I get an error that states no argument or block type is named role_arn for a role_arn under terraform > backend > s3 in a backend.tf.json file?
It seems to me like that field is used in documentation examples.
Looks like it fails both when it's set to null and when it's set to an ARN
what terraform version are you using?
seems 1.10.2 doesn’t work and 1.9.2 does. I would like to upgrade though, any suggestions on what config to change to enable that?
Terraform deprecated the top-level role_arn argument in the S3 backend some time ago in favor of the assume_role block, and in 1.10 they removed it completely
the new backend config should look like this in Atmos
terraform:
  backend:
    s3:
      bucket: xxxxxxxx-tfstate
      dynamodb_table: xxxxxxx-tfstate-lock
      assume_role:
        role_arn: xxxxxxxxxx
      encrypt: true
      key: terraform.tfstate
      acl: bucket-owner-full-control
      region: us-east-2
(the assume_role block also works in older TF versions like 1.9)
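For reference, the backend.tf.json that Atmos generates from that config would look roughly like this (a sketch; the redacted values are carried over from above):
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "xxxxxxxx-tfstate",
        "dynamodb_table": "xxxxxxx-tfstate-lock",
        "assume_role": {
          "role_arn": "xxxxxxxxxx"
        },
        "encrypt": true,
        "key": "terraform.tfstate",
        "acl": "bucket-owner-full-control",
        "region": "us-east-2"
      }
    }
  }
}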
I'm currently PoC'ing atmos and I'm running into something that I can't quite square; hoping someone can shed some light. I'm using a stack name template {{.vars.environment}}-{{.vars.short_region}}, where environment is defined in a _defaults.yaml for the respective environments (dev, prod, etc.) and short_region is defined as a regional default (e.g. use1).
This works just fine; however, when running terraform it complains about an extra undeclared variable short_region. I searched and found this thread where the recommendation was to not use global variables. I'm not sure how to square that with the need to declare these variables in a relatively "global" sense. Am I missing something obvious?
what's the best way to share vars between some components but not all of them? if I place them in the stack YAML vars section, I get warnings from the components that are not using them:
Warning: Value for undeclared variable
Yes, so if you add it to defaults, it gets inherited by everything
I would first master inheritance, before using templating.
If you pass a variable to Terraform that it is not expecting, it will give you that message.
if you define short_region in the variables.tf of the component, that warning will go away, for example:
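(a minimal sketch; the description text and default are illustrative)
variable "short_region" {
  type        = string
  description = "Short region code used only by the stack name template (e.g. use1)"
  default     = null
}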
whats the best way to share vars between some components but not all of them?
https://atmos.tools/core-concepts/stacks/inheritance/#multiple-inheritance
Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.
Now with multiple inheritance, the assumption is that you use the same variable names, which is a convention that we follow. This reduces the cognitive load on developers.
Now, if you don't have that option, then templating is more appropriate. That said, I think the underlying problem is the lack of consistency that comes from components that define multiple variables that fulfill the same purpose, but with different names.
Appreciate the insights; my intent here is to maintain existing naming conventions and reduce the cognitive load that would come from having to redefine (from an org perspective) what environment means to us. This would translate to what atmos considers stage, and we use region where it uses environment. From my reading of the docs, templating seems to be the only way to work around this difference, and since I want to maintain this naming convention, that feels like a framework decision that I have to square away at a baseline.
I understand that it's an opinionated choice that atmos makes; just hoping there's some means of working with it that doesn't require defining variables in my terraform components that won't be used (and then having to deal with the linting issues that would bring).
Historically, we only supported name_pattern. Today we support any convention. But you need to use name_template instead
Also, make sure you saw https://sweetops.slack.com/archives/C031919U8A0/p1734178991069519
Did you know we have a context provider for terraform? This is a magical way to enforce naming conventions like with our null label module. Since it is a provider, however, context is automatically available to all child modules of the root module. This is the best way to implement naming conventions, tag conventions and then validate that it is meeting your requirements. Stop using variables, and start using context.
https://github.com/cloudposse/atmos/tree/main/examples/demo-context
Oh, awesome, this looks like what i was missing! I’ll kick the tires on this today. Thanks!
Great! Let me know how it goes.
:rocket: Enhancements
atmos terraform show command should not delete Terraform planfiles @samtholiya (#855)
what
• atmos terraform show command should not delete Terraform planfiles
why
• We should not delete the terraform plan file when the user executes the atmos terraform show command
2024-12-14
Did you know we have a context provider for terraform? This is a magical way to enforce naming conventions like with our null label module. Since it is a provider, however, context is automatically available to all child modules of the root module. This is the best way to implement naming conventions, tag conventions and then validate that it is meeting your requirements. Stop using variables, and start using context.
https://github.com/cloudposse/atmos/tree/main/examples/demo-context
what do you mean by?
Since it is a provider, however, context is automatically available to all child modules of the root module.
you mean inside the same component? because you will still have to pass in namespace, tenant, stage, etc. to the provider on each component where you use this, instead of passing them just as variables, right?
this is great, but I prefer the null-label module; maybe I'm missing something
With the provider, you define it once; you don't have to keep passing it between modules
And it works really well with atmos inheritance
It also means that company A can have one convention, and company B can have an entirely different convention. All without changing a single line of terraform code when using Atmos.
For example, let’s say you are using the AWS provider, you define the configuration once per component, and all child modules just work.
With context, it’s the same. You define the provider configuration for context once, and everything just works. Plus the convention can be anything you want it to be, not strict like with null label.
More about provider generation here: https://atmos.tools/core-concepts/components/terraform/providers
Configure and override Terraform Providers.
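For illustration, provider generation lets you declare the provider once in a stack manifest; a minimal sketch, where the context provider's attribute names are hypothetical (see the linked docs and the demo-context example for the real schema):
terraform:
  providers:
    context:
      # hypothetical attributes; consult the provider docs for the real schema
      values:
        namespace: acme
        tenant: core
        stage: prod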
aaah that is great, okay, and this can be set up at the org level once, right?
now I see it! okay, so if I set the naming convention order and whatnot at the org level, then if I change this for a new org or need to refactor a ton of them, it can be done from here, right? I like it
because you will still have to pass in namespace, tenant, stage, etc. to the provider on each component where you use this, instead of passing them just as variables, right
Yes, but here is the difference.
With null label, you need to pass it explicitly to all child modules and nested modules. With variables, they are unchangeable. Always the same. When you run terraform-docs, the documented variables are all about naming things. The convention is immutable, so everyone has to use the same naming convention.
With the context provider, you set it once in the root, and the context is available to all child modules. The convention can be anything you want it to be, but still validated and enforced. And when you run terraform-docs, the docs aren't polluted by naming variables. You also don't have to copy a file around like "context.tf"; instead you leverage provider generation in atmos.
aaah that is great, okay, and this can be set up at the org level once, right?
Exactly! Then inherit that convention in all stacks.
But then let’s say you are multi cloud and use different naming conventions for GCP projects, Azure subscriptions and AWS OUs/accounts. No problem. Just change the provider configuration in atmos and it passes down all the way through your modules.
We would be using this in all of our modules today, but we have so many to update, and it would be a breaking change. So we continue using null label. However, if you develop your own modules and components, this is great. Especially in the enterprise setting.
okay, I'm convinced, this is great!! yeah, copying the context.tf file around and the polluted variables can be a pain
Add pagination and searching to the atmos docs command @RoseSecurity (#848)
what
• Add pagination and searching to the atmos docs command
• Update website documentation to incorporate new CLI settings
why
• Enhance the user experience by making it possible to paginate and search component documentation
references
Thanks @Michael !
For sure!
Exclude abstract and disabled components from showing in the Atmos TUI @samtholiya (#851)
what
• Exclude abstract and disabled components from showing in the Atmos TUI
why
• Abstract and disabled components are not deployable
description
Suppose we have the following Atmos stack manifest:
# yaml-language-server: $schema=https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
components:
terraform:
station:
metadata:
component: weather
vars:
location: Los Angeles
lang: en
format: ''
options: '0'
units: m
station_disabled:
metadata:
component: weather
enabled: false
vars:
location: Los Angeles
lang: en
format: ''
options: '0'
units: m
station_abstract:
metadata:
component: weather
type: abstract
vars:
location: Los Angeles
lang: en
format: ''
options: '0'
units: m
We would get the following in the Atmos TUI:
Now we get only the station component (the abstract and disabled components are excluded):
Add support for the atmos terraform version command @samtholiya (#846)
what
• Add support for the atmos terraform version command
Add support for the atmos helmfile version command @samtholiya (#854)
what
• Add support for the atmos helmfile version command
Append new lines to the end of the generated JSON files @joshAtRula (#849)
what
• Append new lines to the end of the generated JSON files
why
• Better support for linting and pre-commit standards
• It's common for linters and pre-commit hooks to expect a newline at the end of files. In the case of the pretty-printed JSON (used for the creation of the backend.tf.json files), these objects are now properly indented, but were still lacking the newline at the end of the file
I think this was maybe posted in the wrong place
2024-12-15
Implement Atmos version check configuration and functionality @Listener430 (#844)
what
• Implement Atmos version check configuration and functionality
• Add the --check flag to the atmos version command to force it to check for the latest Atmos release (regardless of the enabled flag and the cache config) and print the upgrade message
• Add the version config section in atmos.yaml
• Update docs
why
• Remove the unnecessary upgrade message for some Atmos commands
• Introduce a config mechanism to turn on/off the update notifications and their frequency
• The --check flag for the atmos version command forces it to check for the latest Atmos release (regardless of the enabled flag and the cache config) and print the upgrade message
atmos version --check
before, for non --help commands, the Atmos upgrade box was displayed:
now:
Added the version config section in atmos.yaml:
version:
  check:
    enabled: true
    timeout: 1000 # ms
    frequency: 1h
The .atmos/cache.yaml file is generated with a timestamp, and the upgrade box is not displayed until the elapsed time exceeds the interval specified in the config
2024-12-16
Hi all, I found that Atmos uses the mergo library to provide merge capabilities for structures, but it also looks like mergo is not actively maintained anymore. The author, it seems, is not accepting any new PRs and is telling people to wait for mergo/v2, but nothing has moved forward with v2 for more than a year. What do you think about cloudposse forking mergo to maintain it? One of the problems is that mergo is missing some nice features, like removal of duplicate items from a slice on append, and deep copy for slices also doesn't work perfectly. Most of these things could be solved by a few PRs, but as per the above, they probably won't go anywhere.
what’s the issue with the provider and what did you change in your fork?
(PRs are always welcome if you have changes to add)
we often need, during a merge of structures, to ensure that slices (lists) don't have dups in them. mergo's AppendSlice does its job on merge, but it doesn't have a de-dup operation, so the idea was to add a de-dup option to that functionality. For Atmos, I had the idea to extend its MergeWithOptions to support all options from mergo, and by this also expand the provider to allow setting a combination of options if necessary, while preserving its current functionality if nothing is set (backward compatibility, I mean).
so the changes are small, but the main problem is in mergo, which is literally not maintained anymore, or at least it looks like it is not.
but for Atmos, I'm not sure if MergeWithOptions is used anywhere outside of Atmos and the terraform providers.
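For illustration, a minimal Go sketch of the de-dup-on-append gap being described, using mergo's WithOverride and WithAppendSlice options (the Config struct and values are hypothetical):
package main

import (
	"fmt"

	"dario.cat/mergo" // canonical import path for mergo v1.x
)

// Config is a hypothetical structure standing in for a merged stack config.
type Config struct {
	Tags []string
}

// dedup removes duplicate items from a slice while preserving order.
func dedup(in []string) []string {
	seen := make(map[string]struct{}, len(in))
	out := make([]string, 0, len(in))
	for _, v := range in {
		if _, ok := seen[v]; !ok {
			seen[v] = struct{}{}
			out = append(out, v)
		}
	}
	return out
}

func main() {
	dst := Config{Tags: []string{"a", "b"}}
	src := Config{Tags: []string{"b", "c"}}

	// WithAppendSlice concatenates slices on merge, leaving duplicates behind.
	if err := mergo.Merge(&dst, src, mergo.WithOverride, mergo.WithAppendSlice); err != nil {
		panic(err)
	}
	fmt.Println(dst.Tags) // [a b b c]

	// the de-dup step mergo doesn't offer out of the box
	dst.Tags = dedup(dst.Tags)
	fmt.Println(dst.Tags) // [a b c]
}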
the atmos code is used only in atmos and the provider
that makes things easier for that part of the PRs. but there's still the issue with mergo (https://github.com/darccio/mergo)
what do you think about the Cloud Posse team forking it under the cloudposse GitHub org? is that possible?
thank you
we need to discuss that internally
cc @Erik Osterman (Cloud Posse)
sure
Generally, we just don’t have the financial resources to maintain forks without more substantial GitHub sponsors
yea, I can understand that.
Hi all :wave:
In my previous job, we were heavily using cloudposse modules in our terragrunt setup. The examples/complete folder helped a lot when understanding how a root module worked. Now that you have migrated to atmos, I am a little bit lost about how to use your modules in a non-atmos, pure terraform environment. For instance, I am trying to deploy the karpenter controller to an eks cluster (cloudposse/eks-cluster/aws), and copying the *.tf files in cloudposse-terraform-components/aws-eks-karpenter-controller is not working, as there are some dependencies on account-map and iam-roles (I think).
Am I missing anything? Any tips on deploying karpenter using cloudposse modules?
(Note: as this is not directly related to atmos, but somehow it is, I was not sure if this channel was the correct place to ask this)
Thanks!
this is not Atmos-specific. You can use the components w/o using Atmos, just provide the Terraform variables
But you are correct, the components themselves have dependencies on other components, e.g. account-map (which provides access to the Terraform roles for the aws provider to assume)
you can always customize the providers.tf file: remove the account-map/iam-roles module, and provide your own IAM role for the aws provider to assume
note that if the role you are using to execute Terraform commands already has the required permissions for the aws provider (and you are not using a separate IAM role for Terraform), then in providers.tf you just need to have this
provider "aws" {
region = var.region
}
so to use the Terraform components in a pure terraform environment and remove dependencies on account-map, just update the providers.tf files
thanks a lot! I am used to not changing anything in cloudposse root modules; that's probably what I need to get used to.
Also, since the eks-cluster component uses vpc_component_name instead of a VPC ID or subnet_ids, is it true to say I cannot use the aws-eks-cluster component with a VPC and subnets created with
source = "cloudposse/vpc/aws" and source = "cloudposse/dynamic-subnets/aws"?
Thanks again!
Andriy, if the components are using remote state, I believe they’re reliant on the utils provider and Atmos stack config.
@deniz gökçin So if you’re trying to use our components outside of an Atmos ecosystem, it might not work as well.
our plan in 2025 is to eliminate dependencies on remote state in native terraform and instead support an interface to pass those in natively to all components.
This is a large endeavor and we can’t comment on when it’ll be done.
It's important to recognize that our components are our opinionated implementation of our modules. We have purposely made our modules less opinionated so they work for everyone. Our components are where we get to do things our way, to ensure that they are highly reusable across our customer engagements.
If just using vanilla terraform without any kind of framework, things are incredibly painful.
if the components are using remote state, I believe they’re reliant on the utils provider and Atmos stack config.
yes, that’s correct
not only providers.tf needs to be updated, but also remote-state.tf (by not using our remote-state module, but a TF data source)
using our TF modules (not components) is the way to go in this case; they don't use any remote state, and you just provide the variables (which you can get from other module outputs, or from remote state using a TF data source)
thanks for the comments, super helpful. I guess I will have to do some surgery to make it all work!
Hi all! I am making a POC with atmos to use it together with Atlantis and I have some doubts that I am not able to solve with the documentation.
According to the documentation here, it looks like the atmos+atlantis integration requires you to "pre-render" the atlantis projects, backends and variables, which is not a big problem for me; but the problem is that there are plenty of features that are not supported by such an integration, like Terraform provider overrides or the dependencies between components.
The first thing I would like to know is whether I am right with what I'm saying or I understood something wrong… and if there is a way to use all the features of Atmos with atlantis… I guess that this would require atlantis to launch atmos instead of terraform to generate the plan. am I right? is this feasible? has anyone tried it?
thanks in advance!
Cloud Posse is primarily using GitHub Actions. @jose.amengual has been super helpful addressing integration questions.
There are some changes we should make to support atlantis dependencies
thanks a lot Erik for answering. And what about the other features like the provider overrides? do you think it would be possible to have them with atlantis?
The override is a deep merge that atmos does before Atlantis sees the files
So it should work
Remember that you can have a custom image with atmos installed and instead of running terraform, you can have a custom workflow that runs atmos instead
yep, that's what I'm wondering if it will work…
because anyway I'll need to pre-generate the atlantis repo config in advance, before atlantis does anything
and then run atmos from atlantis to perform the plan step, right?
because there isn't any command for the provider override, and from my tests generate backends|varfiles doesn't override the providers
Yes, and apply too
gotcha
in that way, the dependencies can also be managed by atmos directly and not by atlantis?
Maybe…. I have not tried depends_on that way
hm, ok, thanks a lot for the clarification; I'll see if I can give it another try
You will need to create a script to plan all the projects that have dependencies
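For reference, a minimal sketch of the custom-workflow idea mentioned above, as an Atlantis server-side repos.yaml (this assumes a custom Atlantis image with atmos installed; mapping Atlantis's PROJECT_NAME/WORKSPACE variables to an Atmos component and stack is an illustrative convention, not a built-in behavior):
# repos.yaml (Atlantis server-side config)
repos:
  - id: /.*/
    workflow: atmos
workflows:
  atmos:
    plan:
      steps:
        # PROJECT_NAME, WORKSPACE and PLANFILE are provided by Atlantis
        - run: atmos terraform plan "$PROJECT_NAME" --stack "$WORKSPACE" -out "$PLANFILE"
    apply:
      steps:
        - run: atmos terraform apply "$PROJECT_NAME" --stack "$WORKSPACE"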
Replace path.Join with filepath.Join @samtholiya (#856)
what
• Replaced all instances of path.Join with filepath.Join in the codebase
• Updated imports to use the path/filepath package instead of path
why
• Cross-platform compatibility: path.Join does not respect platform-specific path separators, causing potential issues on non-Unix systems (e.g. Windows)
• Correct usage: filepath.Join is designed for working with filesystem paths, ensuring semantic correctness
• Ensures the codebase adheres to Go best practices for handling file paths.
References
• Go Documentation on filepath.Join
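For a quick illustration of the difference these notes describe:
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join is for slash-separated paths (such as URLs); it always
	// joins with forward slashes regardless of the operating system.
	fmt.Println(path.Join("stacks", "orgs", "acme")) // stacks/orgs/acme everywhere

	// filepath.Join uses the OS-specific separator:
	// stacks/orgs/acme on Linux/macOS, stacks\orgs\acme on Windows.
	fmt.Println(filepath.Join("stacks", "orgs", "acme"))
}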
Mergify @osterman (#862)
what
• Add mergify
• Add tests
why
• Mergify rules have some nice things like letting us know when PRs are stale or conflicted
2024-12-17
Hi there! I’m trying to do a small PoC with Atmos+Github Actions where 1 component (cluster) depends on another (network). I’ve several questions:
1- I see no way to guarantee order other than explicitly writing a workflow or playing with the atmos describe affected --include-dependents output.
2- When I try to use the atmos.Component function in a component, I get an error in the atmos-settings step, which I guess tries to resolve the output but doesn't yet have either tofu installed or credentials set up. Looks like a chicken-and-egg problem to me, unless I'm missing something very obvious?
thanks!
@Igor Rodionov @Matt Calhoun
Hi there! I'm trying to do a small PoC with Atmos+GitHub Actions where 1 component (cluster) depends on another (network).
So one interesting thing to keep in mind with Atmos, is atmos provides a schema to express all these relationships
Then depending on how you use atmos, these may or may not be supported
So the depends_on relationships are currently supported with Spacelift, and with Atmos Pro, which works entirely with GitHub Actions.
However, just using the GHA without Atmos Pro won't enable dependency-ordered plans/applies.
When I try to use the atmos.Component function in a component, I get an error in the atmos-settings step, which I guess tries to resolve the output but doesn't yet have either tofu installed or credentials set up. Looks like a chicken-and-egg problem to me, unless I'm missing something very obvious?
Yes, while that might be technically possible, it sounds like a catch-22.
Can you explain why you’re trying to do that?
…maybe there’s another way
Also, as I tell others, please try to use inheritance as much as possible, and templating as little as possible.
Well, I guess I just followed the examples on how to read another component's output from here https://atmos.tools/core-concepts/share-data/#using-template-functions
Share data between loosely-coupled components in Atmos
Aha, yes, that’s definitely supported
I interpreted “depends on another” component differently, as in ordered dependencies between components
Yes, there will be a chicken and the egg problem with what you describe
Improve !terraform.output Atmos YAML function. Implement static remote state backend for !terraform.output and atmos.Component functions @aknysh (#863)
what
• Improve the !terraform.output Atmos YAML function
• Implement static remote state backend for the !terraform.output and atmos.Component functions
• Add more advanced usage examples of the !template Atmos YAML function
• Fix the error messages when the user specifies an invalid component name in atmos terraform plan/apply
• Update docs
• https://atmos.tools/core-concepts/stacks/yaml-functions/terraform.output/
• https://atmos.tools/core-concepts/stacks/yaml-functions/template/
• https://atmos.tools/core-concepts/stacks/templates/functions/atmos.Component/
why
Improve !terraform.output Atmos YAML function
The !terraform.output Atmos YAML function can now be called with either two or three parameters:
# Get the output of the component in the current stack
!terraform.output <component> <output>
# Get the output of the component in the provided stack
!terraform.output <component> <stack> <output>
Examples:
components:
  terraform:
    my_lambda_component:
      vars:
        vpc_config:
          # Output of type string
          security_group_id: !terraform.output security-group/lambda id
          security_group_id2: !terraform.output security-group/lambda2 {{ .stack }} id
          security_group_id3: !terraform.output security-group/lambda3 {{ .atmos_stack }} id
          # Output of type list
          subnet_ids: !terraform.output vpc private_subnet_ids
          # Output of type map
          config_map: !terraform.output config {{ .stack }} config_map
NOTE: Using the .stack or .atmos_stack template identifiers to specify the stack is the same as calling the !terraform.output function with two parameters (without specifying the current stack), just expressed with Go templates.
If you need to get an output of a component in the current stack, using the !terraform.output function with two parameters is preferred because it has a simpler syntax and executes faster.
Implement static remote state backend for !terraform.output and atmos.Component functions
Atmos supports brownfield configuration by using the remote state of type static.
For example:
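(The example referenced here did not survive the export; below is a minimal sketch with a hypothetical output name, matching the test/test2 components described next:)
components:
  terraform:
    test:
      # brownfield component managed outside Atmos; its "outputs" are static values
      remote_state_backend_type: static
      remote_state_backend:
        static:
          security_group_id: "sg-0123456789abcdef0" # hypothetical output
    test2:
      vars:
        # resolves to the static value above instead of running `terraform output`
        security_group_id: !terraform.output test {{ .stack }} security_group_id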
When the functions are executed, Atmos detects that the test component has the static remote state configured, and instead of executing terraform output, it just returns the static values from the remote_state_backend.static section.
Executing the command atmos describe component test2 -s <stack> produces the following result:
Add more advanced usage examples of the !template Atmos YAML function
The !template Atmos YAML function can be used to make your stack configuration DRY and reusable.
For example, suppose we need to restrict the Security Group ingresses on all components provisioned in the infrastructure (e.g. EKS cluster, RDS Aurora cluster, MemoryDB cluster, Istio Ingress Gateway) to a specific list of IP CIDR blocks.
We can define the list of allowed CIDR blocks in the global settings section (used by all components in all stacks) in the allowed_ingress_cidrs variable:
settings:
  allowed_ingress_cidrs:
    - "10.20.0.0/20" # VPN 1
    - "10.30.0.0/20" # VPN 2
We can then use the !template function with the following template in all components that need their Security Group to be restricted:
# EKS cluster: allow ingress only from the allowed CIDR blocks
allowed_cidr_blocks: !template '{{ toJson .settings.allowed_ingress_cidrs }}'
# RDS cluster: allow ingress only from the allowed CIDR blocks
cidr_blocks: !template '{{ toJson .settings.allowed_ingress_cidrs }}'
# Istio Ingress Gateway: allow ingress only from the allowed CIDR blocks
security_group_ingress_cidrs: !template '{{ toJson .settings.allowed_ingress_cidrs }}'
The !template function and the '{{ toJson .settings.allowed_ingress_cidrs }}' expression allow you to use the global allowed_ingress_cidrs variable and the same template even if the components have different variable names for the allowed CIDR blocks (which would be difficult to implement using Atmos inheritance or other Atmos design patterns).
NOTE:
To append additional CIDRs to the template itself, use the list and Sprig concat functions:
allowed_cidr_blocks: !template '{{ toJson (concat .settings.allowed_ingress_cidrs (list "172.20.0.0/16")) }}'
@Andriy Knysh (Cloud Posse) This looks like it should address our issues from office hours + a few threads here around the static backend “mocking”, no? I’m excited to test it out! Thanks for this release, we’ll report back how it goes!
yes, it should address the static backend usage in the template and YAML functions. Before, the static backend was supported only by the remote-state TF module
TUI package manager for atmos vendor pull @haitham911 (#768)
what
• TUI package manager for atmos vendor pull
• Show the download progress for each vendored package
• Display the versions of the vendored packages
• Add --everything flag for non-TTY mode
why
• Improve user experience
2024-12-18
Has anyone here ever used Atmos custom commands to extend functionality to support deploying Ansible playbooks? Recently had a use case for this but was curious if anyone else has been running a similar setup
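For context, a custom command wrapping ansible-playbook might be sketched in atmos.yaml like this (the command name, argument, and inventory path are hypothetical; see the Atmos custom commands docs for the exact schema):
commands:
  - name: ansible
    description: "Run an Ansible playbook (hypothetical example)"
    arguments:
      - name: playbook
        description: "Path to the playbook to run"
    steps:
      # steps are shell commands; arguments are available via Go templates
      - ansible-playbook {{ .Arguments.playbook }} -i inventories/example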
I am trying to run atmos describe affected using a self-hosted GitHub Enterprise. I'm getting the error: repository host is not supported.
Is there a workaround? Any plans on supporting this?
VCS?
as long as whatever runs the pipeline has access to the repo, it should work
this is basically doing a git clone of main and the current branch and comparing the difference
Haven't tried running from a pipeline yet. Running locally and I'm getting that error.
So the workaround is to just git pull the repo manually and then compare to a local dir?
no, that is roughly what the command is doing under the hood.
In your local environment, you need to make sure your gitconfig is correct and has credentials to connect and clone the repo
well, I can clone, so I must have it right
if you clone using an alias, that could be an issue
I am, I have multiple repos. So you think it'll work in the pipeline because I only have 1?
change your gitconfig, leave one ( the one you are testing) and try it
it should work
I use insteadOf to avoid these issues
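(For reference, the kind of insteadOf rewrite being described looks like this in ~/.gitconfig; the host and token are placeholders:)
[url "https://x-access-token:<TOKEN>@ghe.example.com/"]
    insteadOf = "git@ghe.example.com:"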
Actually, in current versions of atmos, re-cloning is no longer required
It uses the current checkout
I'm on 1.110.0. You're thinking it'll work in the latest?
It’s supported on that version
But we also support the other way too
So the way it’s being invoked is using the old method
This command produces a list of the affected Atmos components and stacks given two Git commits.
2024-12-19
Hello team, I am currently trying to use atmos on Windows (against my free will) on a project, but I have the following path error and I cannot pinpoint the issue:
failed to find a match for the import 'C:\Users\andrea.zoli\projects\dbridge\infrastructure\stacks\orgs\dbridge\**\*.yaml' ('.' + 'C:\Users\andrea.zoli\projects\dbridge\infrastructure\stacks\orgs\dbridge\**\*.yaml')
The trace logs seem good; the stack files are at that path. The same error occurs in PowerShell, cmd, and mingw64 (Git Bash for Windows). There is no issue under Linux running the same commands.
What am I missing?
@Andriy Knysh (Cloud Posse)
Ok, seems to be a binary problem with v1.130.0; just tried with an older v1.110.0 executable and it works!
@athlonz thanks for reporting this, we’ll have to run this on Windows to check
Ok, well that makes sense then
What
• Replaced all instances of path.Join with filepath.Join in the codebase.
• Updated imports to use the path/filepath package instead of path.
Why
• Cross-platform compatibility: path.Join does not respect platform-specific path separators, causing potential issues on non-Unix systems (e.g., Windows).
• Correct usage: filepath.Join is designed for working with filesystem paths, ensuring semantic correctness.
• Ensures the codebase adheres to Go best practices for handling file paths.
References
• Closes: #832
• Go Documentation on filepath.Join
• issue: https://linear.app/cloudposse/issue/DEV-2817/replace-pathjoin-with-filepathjoin-in-atmos-code
Summary by CodeRabbit
• New Features
• Enhanced file path handling across various components for improved cross-platform compatibility.
• Bug Fixes
• Improved error reporting for file path issues in multiple functions.
• Documentation
• Updated comments for better clarity on path handling logic in several functions.
• Tests
• Adjusted test cases to utilize filepath.Join for file path constructions, ensuring compatibility with different operating systems.
Ironically, we made this change predominantly to enhance support for Windows.
But it looks like it inadvertently must've broken something.
Ironically, we made this change predominantly to enhance support for Windows
exactly
I just popped in to discuss the same thing, same exact issue for me (including versions); glad to see it's already rolling.
I’m gonna try and get the tests to run on windows in our GitHub Actions so we have an extra layer of defense against this, and then once we have those tests running, get one of our engineers to fix the underlying problem.
We haven't had a chance to look at the specific details yet, but if anyone knows more precisely what about our path joining is causing it to break, I would be much obliged
I have tests implemented now for linux (always had this), windows and macos.
what
• We broke atmos in #856 while attempting to improve Windows support
why
• We don’t run any tests on windows
references
• Relates to #856 (broke atmos on windows)
@Andriy Knysh (Cloud Posse) I suggest we merge this once the tests complete. Windows is confirmed failing.
Then we can have @Shubham Tholiya look into why this broke windows, since we can reproduce it.
Sure, looking into this as a priority. Pausing the terraform help refactor for now. That task will be delayed, but given our priority for Windows support, this is acceptable cc: @Erik Osterman (Cloud Posse)
Please try this release:
Revert "Replace path.Join with filepath.Join (#856)" @osterman (#887). This reverts commit 2051592.
what
• Manual revert of #856
why
• It broke too many things that were hard to debug, and we lacked tests to catch the situation.
• We’ve added windows tests in, and should revisit this.
• #877
• #875
Trigger workflows from GitHub actions bot @osterman (#890)
what
• Call workflow dispatch to build previews and run tests
why
• Automated PRs from github-actions[bot] will not trigger workflows (by design) unless a PAT is used
• This approach doesn't require introducing a PAT and leverages mergify instead
Fix Mocks for Windows @osterman (#877)
what
• Run commands depending on the flavor of OS
• Add e2e smoke tests to verify commands work
why
• Windows behaves differently from Linux/macOS
• #875 introduced windows testing
• Catch certain cases where we have no unit tests for CLI behaviors (tests for the condition @mcalhoun encountered with an infinite loop in atmos version)
Add Windows & macOS Acceptance Tests @osterman (#875)
what
• We broke atmos in #856 while attempting to improve Windows support
why
• We don't currently run any tests on Windows and macOS
• Slight differences between OSes can cause big problems and are hard to catch
• Most of our developer team is on macOS, but more and more Windows users are using atmos
• To maintain stability, it's critical we test all mainstream platforms
references
• Relates to #856 (broke atmos on windows)
2024-12-20
Hey, everyone. I'm trying to PoC GitHub Actions with Atmos. I have a very simple job set up, taken straight from the example in the action's repository. The job technically succeeds, but nothing happens. I've created everything needed in AWS and I've added the integration settings to my atmos.yaml. But there's no output that shows, no plan file in the S3 bucket, and no job summary, despite the fact that the job succeeds. I'll include more details in a thread here. But I'm hoping someone can point out the obvious thing I'm missing.
As I said, the job is very simple. I added a few things trying to troubleshoot. I can confirm that there is a stack called “test-gha” and a component called “test”. All that component does is create a random string (it was originally doing something more complicated, but I wanted to eliminate variables). I can also confirm that when I run a plan against it locally, it shows there are changes (however, again, I don’t even see a “no changes” message).
name: 👽 Hello World
on:
pull_request:
branches: ["main"]
types: [opened, synchronize, reopened, closed, labeled, unlabeled]
workflow_dispatch:
permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout
jobs:
atmos-plan:
name: Plan Hello-World
runs-on: ubuntu-latest
steps:
- name: Plan Atmos Component
id: tfplan
uses: cloudposse/github-action-atmos-terraform-plan@v4
with:
component: "test"
stack: "test-gha"
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: 1.99.0
pr-comment: true
- name: Echo Info
run: |
echo "${{ steps.tfplan.outputs.plan_file }}"
echo "${{ steps.tfplan.outputs.plan_json }}"
- name: Write to workflow job summary
run: |
echo "${{ steps.tfplan.outputs.summary }}" >> $GITHUB_STEP_SUMMARY
I can't help but wonder if this is some indication of my problem, but I'm currently at a loss. I see this right after it appears to successfully step through aws-configure and assume the correct role. This is basically the last thing it does before it proceeds to run clean-up procedures.
Run if [[ "false" == "true" ]]; then
if [[ "false" == "true" ]]; then
STEP_SUMMARY_FILE=""
if [[ "" == "true" ]]; then
rm -f ${STEP_SUMMARY_FILE}
fi
else
STEP_SUMMARY_FILE=""
fi
if [ -f ${STEP_SUMMARY_FILE} ]; then
echo "${STEP_SUMMARY_FILE} found"
STEP_SUMMARY=$(cat ${STEP_SUMMARY_FILE} | jq -Rs .)
echo "result=${STEP_SUMMARY}" >> $GITHUB_OUTPUT
if [[ "false" == "false" ]]; then
echo "Drift detection mode disabled"
cat $STEP_SUMMARY_FILE >> $GITHUB_STEP_SUMMARY
fi
else
echo "${STEP_SUMMARY_FILE} not found"
echo "result=\"\"" >> $GITHUB_OUTPUT
fi
Did you set the log level to trace, or debug=true on the actions?
That could give some insight
Good question.
I set debug: true in the action and then set ACTIONS_STEP_DEBUG to true. I want to say this is the most relevant portion. I can confirm that the component path components/terraform/test is correct. The repeated mentions of null make me think I've somehow misconfigured something. I can't figure out what, yet.
##[debug]......=> '{"component-path":"components/terraform/test","base-path":".","command":"terraform","opentofu-version":"1.7.3","terraform-version":"1.10.2","enable-infracost":false,"terraform-plan-role":"[redacted-role-arn]","aws-region":"us-west-2","terraform-state-role":"[redacted-role-arn]","terraform-state-table":"[redacted-dynamodb-table]","terraform-state-bucket":"[redacted-bucket-name]","plan-repository-type":"s3","metadata-repository-type":"dynamo"}'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'enabled'
##[debug]..=> null
##[debug]=> null
##[debug]Expanded: (true && null)
##[debug]Result: null
##[debug]Evaluating: steps.summary.outputs.result
##[debug]Evaluating Index:
##[debug]..Evaluating Index:
##[debug]....Evaluating Index:
##[debug]......Evaluating steps:
##[debug]......=> Object
##[debug]......Evaluating String:
##[debug]......=> 'summary'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'outputs'
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'result'
##[debug]=> '""'
##[debug]Result: '""'
##[debug]Evaluating: steps.vars.outputs.plan_file
##[debug]Evaluating Index:
##[debug]..Evaluating Index:
##[debug]....Evaluating Index:
##[debug]......Evaluating steps:
##[debug]......=> Object
##[debug]......Evaluating String:
##[debug]......=> 'vars'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'outputs'
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'plan_file'
##[debug]=> null
##[debug]Result: null
##[debug]Evaluating: format('{0}.json', steps.vars.outputs.plan_file)
##[debug]Evaluating format:
##[debug]..Evaluating String:
##[debug]..=> '{0}.json'
##[debug]..Evaluating Index:
##[debug]....Evaluating Index:
##[debug]......Evaluating Index:
##[debug]........Evaluating steps:
##[debug]........=> Object
##[debug]........Evaluating String:
##[debug]........=> 'vars'
##[debug]......=> Object
##[debug]......Evaluating String:
##[debug]......=> 'outputs'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'plan_file'
##[debug]..=> null
##[debug]=> '.json'
##[debug]Result: '.json'
##[debug]Finishing: Plan Atmos Component
Overview of current repo (just ran tree .)
.
├── README.md
├── atmos.yaml
├── components
│ └── terraform
│ └── test
│ ├── backend.tf.json
│ ├── main.tf
│ ├── test-gha-repository.planfile
│ └── test-gha-repository.terraform.tfvars.json
├── stacks
│   ├── _defaults.yaml
│   └── deploy
│       └── test-gha
│           └── stack.yaml
└── vendor.yaml
Stack description
test-gha:
components:
terraform:
test:
atmos_component: test
atmos_manifest: deploy/test-gha/stack
atmos_stack: test-gha
atmos_stack_file: deploy/test-gha/stack
backend:
bucket: [redacted]
dynamodb_table: [redacted]
encrypt: "true"
key: terraform.tfstate
region: us-west-2
workspace_key_prefix: test
backend_type: s3
command: terraform
component: test
env: {}
inheritance: []
metadata:
component: test
overrides: {}
providers: {}
remote_state_backend:
bucket: [redacted]
dynamodb_table: [redacted]
encrypt: "true"
key: terraform.tfstate
region: us-west-2
workspace_key_prefix: test
remote_state_backend_type: s3
settings:
integrations:
github:
gitops:
artifact-storage:
bucket: [redacted]
metadata-repository-type: dynamo
plan-repository-type: s3
region: us-west-2
role: [redacted]
table: [redacted]
infracost-enabled: false
matrix:
group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
sort-by: .stack_slug
opentofu-version: 1.7.3
role:
apply: [redacted]
plan: [redacted]
terraform-version: 1.10.2
stack: test-gha
vars:
name: test-gha
region: us-west-2
workspace: test-gha
Locally: atmos terraform plan test --stack test-gha
Executing command:
atmos terraform plan test --stack test-gha
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Using previously-installed hashicorp/random v3.6.3
Terraform has been successfully initialized!
Workspace "test-gha" doesn't exist.
You can create this workspace with the "new" subcommand
or include the "-or-create" flag with the "select" subcommand.
Created and switched to workspace "test-gha"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# random_string.random will be created
+ resource "random_string" "random" {
+ id = (known after apply)
+ length = 16
+ lower = true
+ min_lower = 0
+ min_numeric = 0
+ min_special = 0
+ min_upper = 0
+ number = true
+ numeric = true
+ override_special = "/@£$"
+ result = (known after apply)
+ special = true
+ upper = true
}
Plan: 1 to add, 0 to change, 0 to destroy.
Sorry for the deluge of info. Just trying to share as much as I can. I’m pretty confident I’m missing some small detail that I haven’t noticed despite how long I’ve looked it over.
Set pr-comment to true to see if the summary has anything inside
You should see a post in your pr with changes or no changes etc
I have it set to true, but no comment is created.
is this correct? "base-path":".",
are your files on the root of the repo?
stacks
, components
are in the root?
you can have atmos-config-path: ./rootfs/usr/local/etc/atmos/
there or you can also have it in the root of the repo or anywhere
but the base_path must be correct for that to work
if you are setting ATMOS_BASE_PATH for your local runs, you need to use the same directory for the action
I set ATMOS_BASE_PATH
as an ENV in my actions
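For reference, a minimal sketch of that env in the workflow (the value mirrors the base-path of "." seen in the debug output above; hypothetical, adjust to your repo layout):
# Workflow excerpt: keep the base path identical to local runs
env:
  ATMOS_BASE_PATH: .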
ahhhh I might have found your problem
it looks like you are missing an integration setting
it should look like this:
settings:
github:
actions_enabled: true
integrations:
github:
gitops:
if you do not have actions_enabled: true
the action will do nothing
Ah, I think that worked! I added the block to the stack. I knew it had to be something simple. For the sake of transparency, I also needed to add this to my GH Workflow.
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
I’m curious. Do you know if that block is included in the documentation anywhere? I’m assuming I simply missed it.
And thank you! I’ve been truly stumped about what I was missing.
I believe that block is included in the action doc
If not PRs are welcome
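For anyone hitting the same issue later, a sketch of the resolved component settings (merging the missing block into the test component from the stack shown earlier; the gitops settings themselves are unchanged):
components:
  terraform:
    test:
      settings:
        github:
          actions_enabled: true   # the missing piece; without it the action does nothing
        integrations:
          github:
            gitops:
              # ... existing gitops settings as shown in the stack description above ...
Plus GITHUB_TOKEN in the workflow env so the action can post the PR comment.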
2024-12-21
2024-12-22
2024-12-23
looks like something changed in 1.130.0 with atmos vendor; now it can’t parse the vendor file:
Run cd atmos/
Vendoring from 'vendor.yaml'
unsupported URI scheme: git::https
Error: Process completed with exit code 1.
1.129.0
works @Andriy Knysh (Cloud Posse)
@haitham911eg please take a look
fix bug atmos vendor pull URI cannot contain path traversal sequences and git schema
references
• closes #888
• DEV-2892
• DEV-2891
Summary by CodeRabbit
• Security Changes
• Modified URI validation logic to be less restrictive
• Removed several previous URI validation checks
• Simplified validation process for URI processing
• Bug Fixes
• Improved error handling in package installation feedback, providing detailed error messages to users during failures.
@Andriy Knysh (Cloud Posse)
and 1.123.1
has a bug too, where the apply will fail with Error: invalid character 'â' looking for beginning of value
we updated to 1.129.0 and the error is gone
Can you share more details
Oh, if the error is gone, then we are good
We rolled out a whole new vendor implementation
That was an apply using a new version of atmos
Using the actions, not cli
2024-12-24
@Erik Osterman (Cloud Posse) I’m using the new YAML templating functions !terraform.output
and am hitting some errors with the Terraform Cloud backend. I am getting different results when running locally vs pushing changes to the cloud.
Atmos version 1.130.0
The plan in TFC is a string literal.
Here is my stack definition:
vars:
enabled: true
name: vpc
transit_gateway_id: '!terraform.output tgw transit_gateway_id'
network_firewall_tgw_route_table_id: '!terraform.output tgw network_firewall_tgw_rt_id'
network_ingress_egress_tgw_route_table_id: '!terraform.output tgw network_ingress_egress_tgw_rt_id'
And when I run locally it is correctly a value.
When I run in cloud, it is "tgw transit_gateway_id"
This shouldn’t work even locally
Because the explicit tag is quoted
When you put quotes around the contents then it becomes a literal string
@Erik Osterman (Cloud Posse) Okay, I’ve tried this all 3 ways. For additional context, when I say locally
I mean I am running atmos terraform apply ...
on my CLI with TFC as the backend. It works. It’s when I switch to CI/CD with GitHub Actions on a runner that things get weird.
No quotes, single quote '
, and double quotes "
.
Double quotes give a string literal in both environments.
No quotes and single quotes work locally, but when I execute the command in my GHA runners it returns a string literal of the output & stack name, excluding !terraform.output
The reason I want to switch to !terraform.output
versus atmos.Component
is because I have been finding that in GHA/TFC the function intermittently times out (might be something else; this is the only reason I can come up with) and returns a null value instead of the output. Nothing I tried was able to get the component to properly return the value in GHA/TFC. It’s very strange because it fails intermittently. First execution it passes, but subsequent runs can’t find the values.
Similar story - laptop executions never fail. It’s in my GHA that it can’t find values.
Could it be from this timeout field? I have this set in atmos.yaml
?
templates:
settings:
enabled: true
gomplate:
enabled: true
timeout: 10
@Andriy Knysh (Cloud Posse) You are the author of this function - any insights?
@Andrew Chemis let’s review all of this step by step:
timeout here is only relevant if you are using gomplate
datasources (which looks like you are not)
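For contrast, a sketch of what a gomplate datasource would look like, where that timeout does apply (the ipify URL is only an illustration):
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      timeout: 10   # applies to datasource fetches like the one below
      datasources:
        ip:
          url: "https://api.ipify.org?format=json"
# referenced in templates as {{ (datasource "ip") }}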
as Erik mentioned, this code should look like this (no quotes needed)
vars:
enabled: true
name: vpc
transit_gateway_id: !terraform.output tgw transit_gateway_id
network_firewall_tgw_route_table_id: !terraform.output tgw network_firewall_tgw_rt_id
network_ingress_egress_tgw_route_table_id: !terraform.output tgw network_ingress_egress_tgw_rt_id
The reason I want to switch to !terraform.output
versus atmos.Component
is because I have been finding that in GHA/TFC the function intermittently times out (might be something else; this is the only reason I can come up with)
both !terraform.output
and atmos.Component
execute the same code to run terraform output
(using a lib from Hashi). The only diff is that atmos.Component
uses the Go templating engine, while !terraform.output
uses internal parsing of YAML explicit tags
If everything is configured correctly, both functions should work and produce the same results
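To illustrate, the two equivalent forms side by side (a sketch; the template form assumes Go templating is enabled in atmos.yaml, as in the config above):
vars:
  # YAML explicit tag, parsed natively by Atmos (no quotes)
  transit_gateway_id: !terraform.output tgw transit_gateway_id
  # Go template equivalent via atmos.Component
  # transit_gateway_id: '{{ (atmos.Component "tgw" .stack).outputs.transit_gateway_id }}'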
but all the above does not answer your main question: why it’s working locally and not on CI/CD
It’s very strange because it fails intermittently. First execution it passes, but subsequent runs can’t find the values.
When I run in cloud, it is
"tgw transit_gateway_id"
when I execute the command in my GHA runners it returns a string literal of the output & stack name, excluding
!terraform.output
this looks to me like your CI/CD uses an older version of Atmos that does not support the Atmos YAML functions
the older versions will just ignore all the custom explicit tags (e.g. !terraform.output
) and just return the node value (e.g. tgw transit_gateway_id
from
transit_gateway_id: !terraform.output tgw transit_gateway_id
can you please make sure your GH action is running the latest version of Atmos? Let me know, and we will continue to investigate
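One way to make the CI version explicit (a sketch, assuming the cloudposse/github-action-setup-atmos action is in use; pin whatever version you have verified locally):
- uses: cloudposse/github-action-setup-atmos@v2
  with:
    atmos-version: 1.130.0   # must be recent enough to support Atmos YAML functions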
:thinking_face: Okay really appreciate the walkthrough!
My image should have 1.127.0
but let me actually validate that. Perhaps something in the CI/CD process is overwriting what’s installed on the image
something is indeed installing a very old version (1.85.0). Okay, I need to figure that out; I’m assuming that’ll fix it
Thank you!! Apologies for wasting your time on something like that
Is using v1.85.0 a potential reason why atmos.Component
intermittently pulls null values?
glad you found it
1.85 is an old version, could be an issue with atmos.Component
as well
please install and test the latest version and let me know how it goes
2024-12-25
2024-12-26
Revert “Replace path.Join with filepath.Join (#856)” @osterman (#887)
This reverts commit 2051592.
what
• Manual revert of #856
why
• It broke too many things that were hard to debug, and we lacked tests to catch the situation.
• We’ve added Windows tests, and should revisit this.
• #877
• #875
Trigger workflows from GitHub actions bot @osterman (#890)
what
• Call workflow dispatch to build previews and run tests
why
• Automated PRs from github-actions[bot]
will not trigger workflows (by design) unless a PAT is used
• This approach doesn’t require introducing a PAT and leverages mergify instead
Fix Mocks for Windows @osterman (#877)
what
• Run commands depending on the flavor of OS
• Add e2e smoke tests to verify commands work
why
• Windows behaves differently from Linux/macOS
• #875 introduced windows testing
• Catch certain cases where we have no unit tests for CLI behaviors (tests for the condition @mcalhoun encountered with an infinite loop in atmos version
)
Add Windows & macOS Acceptance Tests @osterman (#875)
what
• We broke atmos in #856 while attempting to improve Windows support
why
• We don’t currently run any tests on Windows and macOS
• Slight differences between OSes can cause big problems and are hard to catch
• Most of our developer team is on macOS, but more and more Windows users are using atmos
• To maintain stability, it’s critical we test all mainstream platforms
references
• Relates to #856 (broke atmos on windows)
Add support for atmos terraform providers
commands @Listener430 @aknysh (#866)
what
• Add support for atmos terraform providers
commands
• Fix panic executing atmos terraform providers lock <component> --stack <stack>
command when a non-existent component or stack was specified
• Update atmos validate component <component> --stack <stack>
command
why
• Support the following commands:
• atmos terraform providers lock <component> --stack <stack>
• atmos terraform providers schema <component> --stack <stack> -- -json
• atmos terraform providers mirror <component> --stack <stack> -- <directory>
atmos terraform providers lock vpc-flow-logs-bucket -s plat-ue2-staging
atmos terraform providers mirror vpc-flow-logs-bucket -s plat-ue2-staging -- ./tmp
• Do not show help when executing atmos validate component <component> --stack <stack>
command
atmos validate component vpc-flow-logs-bucket -s plat-ue2-staging
references
• https://developer.hashicorp.com/terraform/cli/commands/providers
• https://developer.hashicorp.com/terraform/cli/commands/providers/lock
• https://developer.hashicorp.com/terraform/cli/commands/providers/mirror
• https://developer.hashicorp.com/terraform/cli/commands/providers/schema
2024-12-28
2024-12-29
Replace path
with filepath
@haitham911 @aknysh (#900)
what
Replace path
with filepath
why
• Different operating systems use different path separators (/
on Unix-based systems like Linux and macOS, and \
on Windows). The filepath
package automatically handles these differences, making code platform-independent. It abstracts the underlying file system separator, so we don’t have to worry about whether we’re running on Windows or a Unix-based system
2024-12-31
Hi, I am new to atmos. Our team is trying it out so I apologize in advance if I mix up any terms. My colleague has a stack and catalog setup which creates resources in different environments based on various components.
I see in one stack as an example we have:
components:
terraform:
key-vault:
settings:
depends_on:
1:
component: "platform-elements"
and this ensures that platform-elements has run for each environment before we create a keyvault.
Now I want to add a global resource related to DNS. This only needs to be created one time because it is global, rather than once per environment. So I thought I would create it from my lowest environment, sandbox, by including it here like so:
components:
terraform:
key-vault:
settings:
depends_on:
1:
file: "stacks/orgs/frontend/azure/eastus/sandbox/jenkins-private-dns-zone-virtual-network-links.yaml"
2:
component: "platform-elements"
But it doesn’t seem to do what I expected. How is the file dependency intended to operate? Is there any difference between that and simply doing component: private-dns-zone-virtual-network-links
My file above includes a catalog file:
components:
terraform:
private-dns-zone-virtual-network-links:
metadata:
component: private-dns-zone-virtual-network-links
vars:
[-snip-]
To be clear, the stacks/orgs/frontend/azure/eastus/sandbox/jenkins-private-dns-zone-virtual-network-links.yaml
file is a stack which references a private-dns-zone-virtual-network-links
catalog via include
and that catalog itself references the component of the same name. The component creates a link between a virtual network and a private DNS zone in Azure, and the stack makes it create one of these resources for Jenkins so that other subsequent stacks in atmos can be run properly from Jenkins after this stack completes.
What’s perplexing me is that I’ve added it to all of the stacks in the lowest environment as the first dependency, yet the stack is being run last instead of first.
(Everyone is out of the office for the next few days)
Have you run the commands to describe your components and stacks?
That’ll show you what the final configuration looks like
Regarding dependencies, it’s important to realize that Atmos is, first and foremost, a way of defining relationships; those relationships can then be implemented anywhere they are supported. Right now, dependencies are not supported on the CLI, although we have short-term plans to implement that.
So the depends_on relationships are currently supported with Spacelift, and with Atmos Pro, which works entirely with GitHub Actions.
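For what it’s worth, a component-based form of the dependency from the question would look like this (a sketch using the names above; these relationships are only enforced where supported, e.g. Spacelift or Atmos Pro):
components:
  terraform:
    key-vault:
      settings:
        depends_on:
          1:
            component: "private-dns-zone-virtual-network-links"
          2:
            component: "platform-elements"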
Thanks. Further digging in my repo reveals the source of my confusion is internal code.