#atmos (2025-01)

2025-01-02

Michael avatar
Michael

Is there any documentation for writing Atmos integrations or is this not a recommended practice? https://atmos.tools/terms/integration/

Integration | atmos

An Integration is a mechanism of working with other tools and APIs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we would like to have more, haven’t standardized the interface

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could you elaborate on what you might want to integrate?

Michael avatar
Michael

I’m considering integrating different tooling that would benefit from Atmos’ YAML deep merging. Right now, I’d like to add a new component type to atmos.yaml. For example:

components:
  ansible:
    base_path: "components/ansible"

I’d like to use the atmos describe stacks and components functionality to create my own version of tfvars.json, using a custom wrapper script harnessed by a custom Atmos command. I ran into a schema failure because Atmos couldn’t validate the structure, so I added new Ansible component definitions to stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json and created a catalog default for the component. Everything validates now, but Atmos can’t find the Ansible components. So I’m looking into whether there are any better ways to handle this edge case.
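As an aside, the wrapper part of this is expressible with custom commands, which Atmos does support today; the command name, flag, and script path below are all illustrative, not from the thread:

```yaml
# atmos.yaml (sketch): a custom command harnessing `atmos describe component`
# via a hypothetical wrapper script
commands:
  - name: ansible-vars
    description: "Render deep-merged stack config into an Ansible-friendly vars file"
    arguments:
      - name: component
        description: "Name of the component"
    flags:
      - name: stack
        shorthand: s
        description: "Name of the stack"
        required: true
    steps:
      # the wrapper runs `atmos describe component <component> -s <stack>`
      # and converts the result into its own tfvars.json-style file
      - ./scripts/ansible-wrapper.sh {{ .Arguments.component }} {{ .Flags.stack }}
```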

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah yes, I have been thinking about this some, and about how we could add support for custom component types. It’s interesting that Atmos is not loading those other custom components. I don’t think we have tested that, nor the implications for schemas. To support multiple custom types, I think we would want to extend support for multiple JSON schemas that get deep-merged.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we would want to register custom types in atmos.yaml, and provide some common conventions like base_path, command, and so forth. This is also where we could define the schema validation to use for the component type.
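A sketch of that idea (purely hypothetical — custom component types are not supported in Atmos today, and every key below is invented to illustrate the shape Erik describes):

```yaml
# Hypothetical sketch only: registering a custom component type in atmos.yaml
# with the common conventions mentioned above (base_path, command, schema)
components:
  ansible:
    base_path: "components/ansible"
    command: "ansible-playbook"
    # schema validation to use for this component type
    schema: "stacks/schemas/ansible/ansible-component.json"
```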

cricketsc avatar
cricketsc

Are there any best practices that users have figured out, or that the authors recommend, for using the atmos.Component template function across different (AWS) accounts? It seems a bit tricky with role assumption. For example, if a user or an automation runner assumes role A in account A, but a component has an atmos.Component template function that references state in stack B, then it seems stack B’s state bucket would need to allow cross-account access from role A? Is there another way? It seems like you could end up with a lot of access configuration following this strategy.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It doesn’t work well, unless you have a role that can access all backends

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we are implementing a better long term solution

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#874 add hooks and store write functionality

what

• Add the concept of hooks that can run before/after other atmos commands
• Add a hook type that can write output values to stores (AWS SSM Parameter Store, Artifactory)

why

In order to share data between components and to decouple the Terraform permissions context from the shared values permissions context.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using this, you can store outputs outside of the terraform state, and read from the store
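Based on the PR description (the feature was still unreleased at this point in the conversation, so key names may differ in the final release), the configuration could look roughly like:

```yaml
# Sketch based on PR #874; syntax was in flight, all names illustrative

# atmos.yaml: declare a store
stores:
  prod/ssm:
    type: aws-ssm-parameter-store
    options:
      region: us-east-2

# stack manifest: a hook writes selected outputs to the store after apply
components:
  terraform:
    vpc:
      hooks:
        store-outputs:
          events:
            - after-terraform-apply
          command: store
          name: prod/ssm
          outputs:
            vpc_id: .vpc_id
```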

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This simplifies the permissions model

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ETA for this is end of next week

cricketsc avatar
cricketsc

@Erik Osterman (Cloud Posse) neat, thanks for the info!

2025-01-03

github3 avatar
github3
07:28:06 PM

:rocket: Enhancements

Support default values for arguments in Atmos custom commands @Listener430 (#905)

what

• Support default values for arguments in Atmos custom commands in atmos.yaml

why

• Allow specifying default values so users are not required to provide the values when invoking the custom commands
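A minimal sketch of the feature (the command and argument names are illustrative, not from the PR):

```yaml
# atmos.yaml sketch: a custom-command argument with a default value
commands:
  - name: greet
    description: "Print a greeting"
    arguments:
      - name: name
        description: "Who to greet"
        required: false
        default: world   # used when the caller omits the argument
    steps:
      - echo "Hello, {{ .Arguments.name }}!"
```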

before

a custom cli-command is defined (screenshot: custom_cli_command)

after

(screenshot)

github3 avatar
github3
04:12:30 AM

Implement Custom Markdown Styling for Workflow Commands @Cerebrovinny (#853)

What

• Added custom markdown styling support for workflow command outputs
• Implemented a configurable markdown styling system through atmos.yaml
• Added fallback to built-in default styles when no custom styles are defined
• Added new workflow error templates in markdown format
• Improved code readability and maintainability in markdown rendering logic

Why

• Enhances user experience
• Allows users to define their own color schemes and styling preferences
• Improves error message readability with consistent formatting and styling
• Makes the CLI more accessible by supporting both default and custom color schemes
• Follows modern CLI design patterns with rich text formatting

Technical Details

• Added markdown settings section in atmos.yaml for custom styling configuration
• Implemented style inheritance system (custom styles override built-in defaults)
• Added support for styling:
  • Document text and background colors
  • Headings (H1-H6) with custom colors and formatting
  • Code blocks with syntax highlighting
  • Links with custom colors and underlining
  • Blockquotes and emphasis styles
• Enhanced error handling with structured markdown templates
• Added proper fallback mechanisms for style configuration
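A rough sketch of what custom styling in atmos.yaml might look like (key names are illustrative; consult the PR for the exact schema):

```yaml
# Sketch only: markdown styling configuration, key names illustrative
settings:
  markdown:
    document:
      color: "#ffffff"
      background_color: "#000000"
    heading:
      color: "#00ff87"
      bold: true
    code_block:
      theme: "dracula"
    link:
      color: "#00afff"
      underline: true
```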

References

• Implements styling using glamour for terminal markdown rendering
• Follows ANSI terminal styling standards
• https://github.com/charmbracelet/bubbletea
• https://github.com/charmbracelet
• https://github.com/charmbracelet/glow
• https://github.com/charmbracelet/glamour/blob/master/styles/gallery/README.md

Testing

The implementation has been tested with:

• Custom styling configurations in atmos.yaml (screenshot)
• Default styling fallback when no custom styles are defined (screenshot)


2025-01-05

kofi avatar

Hello! I am new to atmos and I am trying to implement it for our new infrastructure (I hope I am asking in the right place; sorry if not). I am currently blocked trying to get atmos working with the http backend (interfacing with GitLab). Unfortunately, the http backend doesn’t support workspaces (terraform doc), and therefore atmos crashes when trying to select the workspace. In your documentation (link), the http backend is listed as supported. Is there a way to get it working? Maybe I can use environment variables to force a workspace, like here? Am I on the right path, or am I missing something? Thanks in advance for your help!

State: Workspaces | Terraform | HashiCorp Developerattachment image

Workspaces allow the use of multiple states with a single configuration directory.

Terraform Backends | atmos

Configure Terraform Backends.

#653 Do not create workspace if overridden by environment TF_WORKSPACE

Describe the Bug

In our deployment pipeline, we create the TF workspace via the HTTP API, as we need configuration not possible from backend.tf, e.g. working-directory, global-remote-state, etc.

Once the workspace is created/checked, it sets the TF_WORKSPACE variable.

In setting up Atmos for the first time, I found what I think is incorrect behaviour.

Terraform has been successfully initialized!

Command info:
  Terraform binary: terraform
  Terraform command: plan
  Arguments and flags: []
  Component: ecr/redacted-app
  Terraform component: ecr
  Stack: redacted-dev
  Working dir: components/terraform/ecr

Executing command:
/usr/bin/terraform workspace select long_complex_workspace_name_redacted
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.

Executing command:
/usr/bin/terraform workspace new long_complex_workspace_name_redacted
Workspace "new long_complex_workspace_name_redacted" already exists
exit status 1

Expected Behavior

If the workspace is overridden by TF_WORKSPACE, then atmos should accept that it is managed elsewhere and not try to create it again.

Steps to Reproduce

Run atmos with TF_WORKSPACE environment variable set.

Screenshots

No response

Environment

• OS: Windows 11
• Atmos: 1.83.1
• Terraform: 1.8.2

Additional Context

I think we could change the error handling in ~L394

atmos/internal/exec/terraform.go

Lines 382 to 409 in8060adb

err = ExecuteShellCommand(
	cliConfig,
	info.Command,
	[]string{"workspace", "select", info.TerraformWorkspace},
	componentPath,
	info.ComponentEnvList,
	info.DryRun,
	workspaceSelectRedirectStdErr,
)
if err != nil {
	var osErr *osexec.ExitError
	ok := errors.As(err, &osErr)
	if !ok || osErr.ExitCode() != 1 {
		// err is not an ExitError, or the exit code is not 1 (the one we expect from `workspace select`)
		return err
	}
	err = ExecuteShellCommand(
		cliConfig,
		info.Command,
		[]string{"workspace", "new", info.TerraformWorkspace},
		componentPath,
		info.ComponentEnvList,
		info.DryRun,
		info.RedirectStdErr,
	)
	if err != nil {
		return err
	}
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this came up recently in separate thread for a different purpose

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Opentofu is considering deprecating workspaces

https://github.com/opentofu/opentofu/issues/2160

Is it possible to use atmos without workspaces, using unique state keys per stack instead of unique workspaces per stack?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We need to add a feature flag to toggle usage of workspaces. Relatively easy fix for us to make.

kofi avatar

It would be great!

kofi avatar

Should I open an issue on github to track the issue?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sure, let’s do that, so we can notify you when it’s done

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we can get to it in the next 2 weeks or so.

kofi avatar

Sounds good to me, I will create the issue tomorrow then

kofi avatar
#915 HTTP backend not supported

Describe the Bug

atmos should support the http backend as described here.
Unfortunately, the http backend is not in the list of backends that support workspaces, and internally atmos uses workspaces.

This is limiting for us, as we would like to use atmos with GitLab, which only supports the http backend. This backend, coupled with GitLab (I don’t know the other ones that well), expects different state addresses to provide isolation instead of workspaces. Maybe we could use templated variables from atmos to adjust this backend address?
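A sketch of that templating idea (assuming Atmos template processing applies to the backend section; the GitLab host, project ID, and variable names are placeholders):

```yaml
# Sketch: one state address per stack instead of workspaces
terraform:
  backend_type: http
  backend:
    http:
      # a unique address per stack provides the isolation workspaces would
      address: "https://gitlab.example.com/api/v4/projects/1234/terraform/state/{{ .vars.tenant }}-{{ .vars.stage }}"
      lock_method: POST
      unlock_method: DELETE
```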

There is an open issue for workspaces support on GitLab here.

Moreover OpenTofu is considering deprecating workspaces opentofu/opentofu#2160

Expected Behavior

Provide a way to set up the http backend with atmos.

Steps to Reproduce

Set up a classic http backend, then run an atmos terraform command. It should fail with workspaces not supported.

Screenshots

No response

Environment

No response

Additional Context

On Slack a feature flag to toggle usage of workspaces has been suggested to solve this issue.

github3 avatar
github3
09:02:06 PM

Enhancements

Support for a circuit breaker to avoid recursive calls to Atmos custom commands (infinite loop) @Listener430 (#906)

what

• Support for a circuit breaker to avoid recursive calls to Atmos custom commands

why

• Avoid infinite loops

test

Add-on to atmos.yaml (screenshot: atmos_yaml_addon)

The circuit breaker in use (screenshot: atmos_loop_termination)

2025-01-06

Georgi Angelov avatar
Georgi Angelov

Hey guys, is there a way to control bucket ACL through the cloudfront-s3-cdn module? I saw that the module in question depends on s3-log-storage which depends on the module s3-buckets which are all managed by CloudPosse. I saw that there is an input grants which I believe controls this. I’m talking about the settings here.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note this is a refarch topic, not atmos)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

So, I’d just like to clarify something. Since the cloudfront cdn module can have multiple buckets involved, you are seeking to update the grants on the origin bucket? Or on the logs bucket? or on both?

Georgi Angelov avatar
Georgi Angelov

On the logs bucket.

Georgi Angelov avatar
Georgi Angelov

To be honest, I realized that for the logs bucket I only need the external account associated with the canonical ID for log delivery in AWS. But still wanted to know if it’s possible to have the current AWS account have permissions added as well

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

there doesn’t appear to be a way to update the grants for the logs bucket in the module. It’s set to only have one: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/main/main.tf#L435

  grants = [
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

to be clear here, you can always try adding more grants outside the module and see if terraform removes them. That is, you might find that additional ACL rules will not be scrubbed when terraform plans for drift/change. I only say that because it seems there’s a dynamic block on the ACL rules which might indicate additional rules will be ignored: https://github.com/cloudposse/terraform-aws-s3-bucket/blob/main/main.tf#L180

    for_each = try(length(local.acl_grants), 0) == 0 || try(length(var.acl), 0) > 0 ? [] : [1]
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

(note, the acl_grants local is formed on the var grants)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

if it plays nice, you could just add more grants after running the module to establish the base ACL

2025-01-07

github3 avatar
github3
04:56:36 PM

Merge Atmos specific and terraform/helmfile help documentation @samtholiya (#857)

what

• Merge Atmos specific and terraform/helmfile help documentation

why

• Show the native terraform/helmfile commands and Atmos-specific terraform/helmfile commands

image

image

Separate Test Fixtures from Examples @milldr (#918)

what

• Move examples/tests to tests/fixtures

why

• Examples are intended to be just that, examples. We should not be including test usages with the examples

github3 avatar
github3
06:49:56 PM

Implement atmos about and atmos support commands @Cerebrovinny (#909)

what

• Implement atmos about and atmos support commands

why

• Help users to find information about Atmos and Cloud Posse support

Support

Screenshot 2025-01-04 at 17 42 38

About

Screenshot 2025-01-04 at 17 42 45

kofi avatar

Hey! I would like to be able to pass all extra args of a custom CLI command through to the underlying command without defining them in the flags section. Is it possible? My use case would be to give ansible-playbook some extra flags without defining all of them in the custom Atmos CLI command. Do you think it makes sense?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh right, I recall we had discussed adding support for -- to custom commands.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This would be a good escape hatch.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you familiar with the -- convention in CLI commands?

kofi avatar

I am not sure, for me, -- is for long arguments?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, so when it’s not followed by any argument, it signifies the end of arguments. That means everything after it is to be interpreted as literal strings and not as flags.
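A generic illustration of the convention with grep (nothing Atmos-specific): without `--`, an argument beginning with a dash is parsed as flags; after `--`, it is taken literally.

```shell
# create a file whose only line starts with a dash
printf -- '-vvv\n' > /tmp/dashes.txt

# `grep -vvv /tmp/dashes.txt` would parse -vvv as flags;
# after --, -vvv is taken as the literal search pattern
grep -- -vvv /tmp/dashes.txt   # prints: -vvv
```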

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve created a task so that we can implement this. I don’t think it’s a lot of work and we had meant to do that before. I think we might get to it in the next few weeks.

kofi avatar

Hummmm

kofi avatar

My use case is to have a custom command atmos ansible run that is calling ansible-playbook

kofi avatar

I would like to be able to add for example the argument -vvvvvvv to be passed to ansible-playbook without defining it in the custom CLI

kofi avatar

Or for example -l flags to be sent too without specifying arguments

kofi avatar

Because at the moment, every extra argument given to the custom CLI command that is not specified ends up in an error and cannot be further processed

kofi avatar

Does it make more sense?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so if we implement this, you could do:

atmos ansible run -- -vvvvvvv
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And then there would be an ENV that contains -vvvvvvv

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So you could pass that literal string to ansible or parse it yourself

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, it would contain all characters after the double hyphen

kofi avatar

It would be fine!

kofi avatar

Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great! for now, you can define an env like ANSIBLE_ARGS and use that until we support it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
ANSIBLE_ARGS="-vvvvvvv" atmos ansible run
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then use ANSIBLE_ARGS in your custom command
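The workaround could be wired up like this (the command structure, playbook name, and env var usage are illustrative):

```yaml
# atmos.yaml sketch: forward extra flags via an env var until `--` is supported
commands:
  - name: ansible
    commands:
      - name: run
        description: "Run ansible-playbook; pass extra flags via ANSIBLE_ARGS"
        steps:
          # invoked as: ANSIBLE_ARGS="-vvvvvvv" atmos ansible run
          - ansible-playbook $ANSIBLE_ARGS site.yml
```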

kofi avatar

Indeed, that should work

kofi avatar

Thanks for the workaround!

github3 avatar
github3
01:43:39 AM

TF_CLI_ARGS_* Handling @milldr (#898)

what

• Added logic and warning messages when the user specifies any TF_CLI_* environment variable, since this may conflict with Atmos behavior
• Append any TF_CLI env vars to the determined TF_CLI env var for terraform shell

why

• When executing terraform shell, we should append the generated var file to the specified env var (if set) rather than overwriting it
• When executing Terraform, var files are already appended (merged). We should add a warning nonetheless

atmos terraform shell

2025-01-02 18 37 16

atmos terraform plan

2025-01-02 18 36 15

Michael avatar
Michael

Y’all’s development pace is wild! Keep up the great work and great features and fixes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This was one of @Dan Miller (Cloud Posse)’s first contributions to the atmos core! Nice work

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I’ve got 3 more queued

2025-01-08

github3 avatar
github3
09:39:54 PM

Add --query (shorthand -q) flag to all atmos describe commands @aknysh (#920)

what

• Added --query (shorthand -q) flag to all atmos describe <subcommand> commands
• Updated CLI command documentation to reflect new querying functionality
• Added examples for new query-based commands

why

• Query (and filter) the results of atmos describe <subcommand> commands using yq expressions
• Before, an external yq or jq binary was required to query/filter the results of atmos describe commands. With the yq functionality now embedded in Atmos, installing the yq binary is not required

examples

atmos describe component vpc -s plat-ue2-prod --query .vars
atmos describe component vpc -s plat-ue2-prod -q .vars.tags
atmos describe component vpc -s plat-ue2-prod -q .settings
atmos describe component vpc -s plat-ue2-prod --query .metadata.inherits

atmos describe stacks -q keys # this produces the same result as the native atmos list stacks command

atmos describe config -q .stacks.base_path

atmos describe workflows -q '.0keys'

atmos describe affected -q <yq-expression>

atmos describe dependents -q <yq-expression>

references

https://mikefarah.gitbook.io/yq
https://github.com/mikefarah/yq
https://mikefarah.gitbook.io/yq/recipes
https://mikefarah.gitbook.io/yq/operators/pipe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Should we move the release notifications to a new channel? We will probably have 20 or more releases this month.

Michael avatar
Michael

Were your thoughts to move it to atmos-dev?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No, it was too noisy in there. Maybe atmos-releases or atmos-notifications

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We had something similar in atmos-dev before we turned it off)

github3 avatar
github3
12:55:55 AM

Update the descriptions of Atmos commands @samtholiya (#845)

what

• Update the descriptions of Atmos commands

why

• Improve clarity and specificity regarding Atmos commands functionality

image

github3 avatar
github3
03:38:54 AM

Introduce a centralized theming system for consistent UI styling @Cerebrovinny (#913)

what

• Introduced a centralized theming system for consistent UI styling
• Moved terminal colors to global constants
• Added theme-based color and style configurations

why

• Centralize color and style management across the application
• Make it DRY and consistent

2025-01-09

John Seekins avatar
John Seekins

Having some state problems with atmos that I’m struggling to resolve…

$ atmos terraform plan msk -s staging-data-msk                  
exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".


$ atmos terraform plan msk -s staging-data-msk -- -reconfigure
exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".

Any tips on how to actually fix this problem?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your atmos.yaml, do you have this

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, you can always execute atmos terraform shell msk -s staging-data-msk and then inside the shell use the native terraform commands

John Seekins avatar
John Seekins

Ooo…okay.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform shell | atmos

This command starts a new SHELL configured with the environment for an Atmos component in a stack, to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.

John Seekins avatar
John Seekins

Shell doesn’t work either:

 atmos terraform shell msk -s staging-data-msk         
exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you have init_run_reconfigure: true?

John Seekins avatar
John Seekins

Yep:

components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try

atmos terraform init msk -s staging-data-msk -- -reconfigure
John Seekins avatar
John Seekins

That is the first thing I tried. You can see it in the initial code I pasted.

John Seekins avatar
John Seekins

Ah. Crumb. That says plan. One sec.

John Seekins avatar
John Seekins
$ atmos terraform init msk -s staging-data-msk -- -reconfigure
exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you using atmos.Component or !terraform.output functions in your stacks?

John Seekins avatar
John Seekins

Yep.

John Seekins avatar
John Seekins

In this one, specifically !terraform.output.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, I prob know what the issue is: the !terraform.output function does not take into account init_run_reconfigure: true (it calls a HashiCorp lib to execute terraform init). We’ll test it and release a fix in a day or two

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for now, you can do the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the issue with the backend is not with this component (msk -s staging-data-msk), but with the component in the stack referenced by the !terraform.output func

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can do

cd <path to that component>
terraform init -reconfigure
John Seekins avatar
John Seekins

Makes sense. Thanks for the help!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if it fixes the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriela Campana (Cloud Posse) please create a task for Atmos:

In atmos.Component and !terraform.output functions, detect the init_run_reconfigure: true setting in atmos yaml and execute terraform init -reconfigure

John Seekins avatar
John Seekins

Turns out this was a bug with a separate module not providing expected outputs. Thanks for helping me debug, though!

John Seekins avatar
John Seekins

(I found this by switching to atmos.Component values, which at least gave a slightly more detailed error:

$ atmos terraform init msk -s staging-data-msk                       
template: all-atmos-sections:205:35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
John Seekins avatar
John Seekins

But I’d say that surfacing the error more clearly would be nice. (I know that surfacing errors through the stack is a pain)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Turns out this was a bug with a separate module not providing expected outputs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the component in the !terraform.output func did not have the expected outputs?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we def need to improve error handling here)

John Seekins avatar
John Seekins

Yes. The !terraform.output/atmos.Component was missing outputs. Adding those missing outputs (fortunately we control that module) fixed the error.

Dan Hansen avatar
Dan Hansen

Is it common practice to use Terraform child modules inside components? The docs encourage component scope not to be too small, so I was wondering where sensible defaults for more resource-level configurations across stacks should live. We’ve typically used child modules for this purpose, but I’m unsure if the intention behind some of the atmos abstractions is to flatten the root graph?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You should not use YAML as a replacement for HCL

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Component Best Practices | atmos

Learn the opinionated “Best Practices” for using Components with Atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See the warning there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do you mean writing a component that uses multiple child modules?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or using “child modules” as components?

Dan Hansen avatar
Dan Hansen

The former, writing a component that uses multiple child modules! I saw those notes in the docs, but was a little confused! It seems like its a balancing act.

I was planning to organize each of our different product surfaces into their own components so they can be managed/deployed independently.

I’m not sure there’d be a case in which we’d have a component that was general enough to re-use, but certainly I could foresee HCL modules that we’d reuse (e.g. database, internal api servers, storage buckets).

I noticed that a lot of the CloudPosse components are for specific pieces of infrastructure, for example aws-vpc or aws-aurora-postgres. Something like a Postgres instance that is only used by an API server feels like it’d be too small to warrant splitting into its own component, so we should just let Terraform handle the dependencies rather than splitting it into two lifecycles.

Dan Hansen avatar
Dan Hansen

Thanks for your response, it clears a lot up!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But the lifecycle of the API and the database are almost entirely disjoint

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s why we argue they should be separate

Dan Hansen avatar
Dan Hansen

True! Perhaps the stacks should be organized by product. Do end-users typically just utilize the output of atmos describe affected, as opposed to using workflows to apply/plan each component in a stack?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Describe affected was implemented mostly for our GitHub actions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also plan to improve the command line ability to do the same, but it’s not available today

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a task assigned to implement:

atmos terraform apply --affected

This would under the hood run describe affected and iterate over each component in dependency order and apply it.
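As a stop-gap, something similar can be approximated today with a custom command in atmos.yaml. This is a naive sketch, not the planned feature: it assumes jq is installed, that the JSON output of atmos describe affected contains component and stack fields, and it does not handle dependency ordering.

```yaml
# Hypothetical custom command in atmos.yaml
commands:
  - name: apply-affected
    description: "Apply all affected components (naive sketch; no dependency ordering)"
    steps:
      - |
        atmos describe affected --format json |
          jq -r '.[] | "\(.component) \(.stack)"' |
          while read -r component stack; do
            atmos terraform apply "$component" -s "$stack" -auto-approve
          done
```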

github3 avatar
github3
09:23:22 PM

Support Relative Path Imports @milldr (#891)

what

• Add support for relative paths in imports, when the path starts with ./ or ../
• Support all path types:

import:
  - ./relative_path
  - ../relative_path

or

import:
  - path: ./relative_path
  - path: ../relative_path

why

• Allow less path duplication and support simpler code

Test Cases as a Directory @milldr (#922)

what

• Added support for test cases as a directory rather than a single file

why

• We are adding many more tests and as such the single file is becoming unwieldy

RB avatar

If we want to create new subnets on an existing vpc, is there an upstream component that can be used?

I looked at this component https://github.com/cloudposse-terraform-components/aws-vpc and couldn’t find a way to add more subnets such as a subnet specifically for databases, for example.

cloudposse-terraform-components/aws-vpc

This component is responsible for provisioning a VPC and corresponding Subnets

RB avatar

I also searched for a component with subnet in its name and no luck


RB avatar

I was imagining a component wrapper for this module https://github.com/cloudposse/terraform-aws-dynamic-subnets

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC

RB avatar

or would you folks be open to setting up a for_each for the cloudposse/dynamic-subnets/aws module directly in the vpc component ?

https://github.com/cloudposse-terraform-components/aws-vpc/blob/385e26c4f8f7137efcf9dd1d46d05e166e9854ae/src/main.tf#L140-L142

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.2"
RB avatar

What do you folks think?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You should/could be able to use this directly: https://github.com/cloudposse/terraform-aws-dynamic-subnets

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

since atmos supports provider & backend generation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Component Manifest | atmos

Use Component manifests to make copies of 3rd-party components in your own repo.

RB avatar

oh very cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and you can use vendor manifests the same way. doesn’t have to be a component manifest.
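For illustration, a vendor manifest entry for pulling that module might look like this (a sketch; the target path and pinned version are hypothetical):

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
spec:
  sources:
    - component: "dynamic-subnets"
      source: "github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref={{.Version}}"
      version: "2.4.2"
      targets:
        - "components/terraform/dynamic-subnets"
```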

RB avatar

If I vendored the module as a component, I’d still need to hardcode the VPC unless I either

• modified the vendored module as a component to read from the remote state

• Or modified the dynamic-subnets module to allow for a data source to use tags to collect the vpc id, then vendor it in as a component, and then there is no hard coding

RB avatar

Here’s a draft PR that would make it easier to vendor dynamic-subnets as a component without hard coding

https://github.com/cloudposse/terraform-aws-dynamic-subnets/pull/219

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or you can use !terraform.output to retrieve the vpc id

RB avatar

Holy moly, that’s great

RB avatar

So something like this?

    vpc/example/dynamic-subnets/db:
      metadata:
        component: dynamic-subnets
      vars:
        vpc_id: !terraform.output vpc/example vpc_id
1

2025-01-10

github3 avatar
github3
04:24:28 PM

Embed JSON Schema for validation of Atmos manifests inside Atmos binary @aknysh (#925)

what

• Embed the JSON Schema for validation of Atmos manifests inside Atmos binary
• Update docs

why

• Embedding the JSON Schema inside the Atmos binary allows keeping the Atmos code and the schema in sync, and does not force users to specify JSON Schema in atmos.yaml and monitor it when it needs to be updated

description

Atmos uses the Atmos Manifest JSON Schema to validate Atmos manifests, and has a default (embedded) JSON Schema.

If you don’t configure the path to a JSON Schema in atmos.yaml and don’t provide it on the command line using the --schemas-atmos-manifest flag, the default (embedded) JSON Schema will be used when executing the command atmos validate stacks.

To override the default behavior, configure JSON Schema in atmos.yaml:

• Add the Atmos Manifest JSON Schema to your repository, for example in stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
• Configure the following section in atmos.yaml:

# Validation schemas (for validating atmos stacks and components)
schemas:
  # JSON Schema to validate Atmos manifests
  atmos:
    # Can also be set using 'ATMOS_SCHEMAS_ATMOS_MANIFEST' ENV var, or '--schemas-atmos-manifest' command-line argument
    # Supports both absolute and relative paths (relative to the base_path setting in atmos.yaml)
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
    # Also supports URLs
    # manifest: "https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"

• Instead of configuring the schemas.atmos.manifest section in atmos.yaml, you can provide the path to
the Atmos Manifest JSON Schema file by using the ENV variable ATMOS_SCHEMAS_ATMOS_MANIFEST or the --schemas-atmos-manifest command line flag:

ATMOS_SCHEMAS_ATMOS_MANIFEST=stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json atmos validate stacks
atmos validate stacks --schemas-atmos-manifest stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json
atmos validate stacks --schemas-atmos-manifest https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json

Summary by CodeRabbit

Configuration Updates
• Enhanced schema configuration flexibility in atmos.yaml
• Added support for remote and embedded JSON schema locations

Documentation Improvements
• Updated CLI command documentation for stack validation
• Added new sections explaining validation processes and schema management
• Clarified usage of URLs for schema manifests in documentation

Testing
• Added new test for JSON schema validation

RB avatar

I saw a peer hard code directly in the vpc component: one block per unique subnet (e.g., mqtt and rds subnets) directly in the component instead of making it input-driven. I skimmed through the component best practices and didn’t see anything regarding this. I’d assume that we want to move configuration into YAML and keep less in Terraform if we can help it, and if it’s in Terraform it should be multi-region and multi-account. I suppose if we’re used to Atmos this may be obvious, but it may not be obvious to newcomers.

RB avatar

Any specific line items in the docs we can share to prevent this ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Were they using the overrides pattern?

Igor M avatar

Do you mean using _override.tf?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Component Best Practices | atmos

Learn the opinionated “Best Practices” for using Components with Atmos

RB avatar

No they were updating the component directly after downstreaming

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Long and short, hard coding subnets entirely defeats the purpose of reusable components. I don’t think we explicitly point out not to do it.

RB avatar

Ah ok

RB avatar

If I were to contribute it, which page would it be best to call that out on ?

Component best practices?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, please do!

github3 avatar
github3
06:21:38 PM

Support Component Lock with metadata.locked @milldr (#908)

what

• Added support for metadata.locked with components
• Separate atmos CLI tests into many files: tests/test_cases.yaml -> tests/test-cases/*.yaml

why

• The metadata.locked parameter prevents changes to a component while still allowing read operations. When a component is locked, operations that would modify infrastructure (like terraform apply) are blocked, while read-only operations (like terraform plan) remain available. By default, components are unlocked. Setting metadata.locked to true prevents any change operations.

Lock a production database component to prevent accidental changes

components:
  terraform:
    rds:
      metadata:
        locked: true
      vars:
        name: production-database


Kyle Decot avatar
Kyle Decot

:wave: Hello! I’m just getting started w/ Atmos and I’m running into an issue when attempting to use atmos.Component in combination w/ a key that contains a -. When doing (atmos.Component "vpc" .stack).outputs.foo-bar I’m getting bad character U+002D '-'. I then attempted to use index (atmos.Component "vpc" .stack) "outputs" "foo-bar" however that gives index error calling index: index of nil pointer. Any suggestions?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share the yaml instead?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, recommend using !terraform.output instead of {{ atmos.Component }}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

atmos.Component literally injects text into the document, not an object.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if your text is not well formatted for YAML you’ll get these kinds of errors probably.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using !terraform.output injects a well structured object automatically, avoiding the risks

Kyle Decot avatar
Kyle Decot

Sure, let me get the YAML. Just a moment. The value I’m trying to get is a string, not an object.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok

Kyle Decot avatar
Kyle Decot

Attempt #1 was:

components:
  terraform:
    application:
      metadata:
        component: organization/application
      vars:
        subnet: '{{ (atmos.Component "vpc" .stack).outputs.public.subnet-id }}'
Kyle Decot avatar
Kyle Decot

Then I tried:

components:
  terraform:
    application:
      metadata:
        component: organization/application
      vars:
        subnet: '{{ index (atmos.Component "vpc" .stack) "outputs" "public" "subnet-id" }}'
Kyle Decot avatar
Kyle Decot

I attempted to use !terraform.output as you suggested however the function isn’t variadic so attempting to do !terraform.output vpc public subnet-id gives invalid number of arguments in the Atmos YAML function: !terraform.output vpc public subnet-id

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kyle Decot the correct expression would be (if you have the variable public, which is a map with the key subnet-id):

{{ (atmos.Component "vpc" .stack).outputs.public.subnet-id }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

however …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
In Go templates, using dashes in keys will cause issues because the template syntax does not natively support accessing keys with dashes directly.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) what about the !template.output syntax

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use index function

{{index .person "last-name"}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for !terraform.output, this is not correct syntax

!terraform.output vpc public subnet-id 
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the function accepts a component and output name (2 params), or a component, stack, and output name (3 params)
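For example (the component, stack, and output names here are illustrative):

```yaml
vars:
  # 2 params: component + output (resolved in the current stack)
  vpc_id: !terraform.output vpc vpc_id
  # 3 params: component + stack + output
  # vpc_id: !terraform.output vpc plat-ue2-prod vpc_id
```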

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this

'{{ index ((atmos.Component "vpc" .stack).outputs.public) "subnet-id" }}'
1
Kyle Decot avatar
Kyle Decot

Yes! Using a combination of index and atmos.Component seems to do the trick.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) is the limitation with !terraform.output that you cannot retrieve an attribute from the output?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) bumping this up

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are finishing up some changes to improve the !terraform.output function, will be in the next release

David Elston avatar
David Elston

Hi everyone, enjoying using Atmos :heart: I just had a quick clarification question regarding setting the remote_state_backend configuration, reading the backend configuration docs it says

When working with Terraform backends and writing/updating the state, the terraform-backend-read-write role will be used. But when reading the remote state of components, the terraform-backend-read-only role will be used.

Could someone clarify, this refers to using the remote_state terraform module only and not say if I ran

atmos terraform output my_component -s my_stack

or if I referenced an output in a stack via a yaml function such as

!terraform.output my_component my_stack my_output_value

This is the behavior I’m seeing, just wanting to know if I’m not doing something wrong

State Backend Configuration | atmos

Atmos supports configuring Terraform Backends to define where Terraform stores its state, and Remote State to get the outputs of a Terraform component, provisioned in the same or a different Atmos stack, and use the outputs as inputs to another Atmos component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
When working with Terraform backends and writing/updating the state, the terraform-backend-read-write role will be used. But when reading the remote state of components, the terraform-backend-read-only role will be used
State Backend Configuration | atmos

Atmos supports configuring Terraform Backends to define where Terraform stores its state, and Remote State to get the outputs of a Terraform component, provisioned in the same or a different Atmos stack, and use the outputs as inputs to another Atmos component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, it applies only when using the remote-state module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you specify the backend role in backend and remote state backend role in remote_state_backend
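For example (abbreviated; bucket, region, and other settings are omitted, and the role names are illustrative):

```yaml
terraform:
  backend_type: s3
  backend:
    s3:
      role_arn: "arn:aws:iam::xxxxxxxx:role/terraform-backend-read-write"
  remote_state_backend:
    s3:
      role_arn: "arn:aws:iam::xxxxxxxx:role/terraform-backend-read-only"
```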

David Elston avatar
David Elston

right, gotcha - It’s what I thought but just wanted to check

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when executing !terraform.output, it just executes terraform output using the already assumed role or the one configured in the TF module. So this function does not know anything about roles; it depends on the role config in the corresponding TF component (or, if not configured in the TF component, the caller’s role will be used by TF)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@David Elston a correction: the !terraform.output YAML function (and atmos.Component template function) does generate the backend file before executing terraform output

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the configured backend will be used

terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      role_arn: "arn:aws:iam::xxxxxxxx:role/terraform-backend-read-write"

2025-01-11

github3 avatar
github3
01:38:43 AM

Configure and use GitHub auth token when executing atmos vendor pull commands @Listener430 @aknysh (#912)

what

• Use GitHub token when executing atmos vendor pull commands
• Introduce environment variables GITHUB_TOKEN and ATMOS_GITHUB_TOKEN to specify a GitHub Bearer token for authentication in GitHub API requests
• Add custom GitHub URL detection and transformation for improved package downloading

why

• When pulling from private GitHub repos, GitHub has a rate limit for anonymous requests (which made atmos vendor pull fail when vendoring from private GitHub repositories)

description

In the case when either the ATMOS_GITHUB_TOKEN or GITHUB_TOKEN variable is configured with a GitHub Bearer Token, vendor files are downloaded with go-getter using the token. The token is put into the URL, so requests to the GitHub API are not subject to the anonymous-user rate limits.
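For example (the token value is a placeholder):

```
# Authenticate vendor pulls against a private GitHub repo
export ATMOS_GITHUB_TOKEN="ghp_xxxxxxxxxxxx"   # or GITHUB_TOKEN
atmos vendor pull
```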

test

[github.com/analitikasi/Coonector.git](http://github.com/analitikasi/Coonector.git) is a private repo

Token_private_repo

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re investigating why a binary was not created with this release. Stay tuned.


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This release will be skipped as there was a problem in main at this commit. Please use 1.146.1 instead.

2025-01-13

github3 avatar
github3
01:58:23 PM

:rocket: Enhancements

Bump github.com/aws/aws-sdk-go-v2/config from 1.28.9 to 1.28.10 @dependabot (#932)

Bumps [github.com/aws/aws-sdk-go-v2/config](http://github.com/aws/aws-sdk-go-v2/config) from 1.28.9 to 1.28.10.

Commits
• 7a7d202 Release 2025-01-10
• fdaa7e2 Regenerated Clients
• 52aba54 Update endpoints model
• 7263a7a Update API model
• See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don’t alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.




Dan Hansen avatar
Dan Hansen

I’m having some trouble figuring out why vendoring isn’t working. I can’t get any related logs other than Failed to vendor 1 components.

Dan Hansen avatar
Dan Hansen

I’ve tried setting the various log level parameters (via atmos.yaml, environment variable, and --logs-level).

Dan Hansen avatar
Dan Hansen

I’m attempting to vendor from a private OCI registry, so my guess would be that go-containerregistry is having trouble getting credentials

Dan Hansen avatar
Dan Hansen

Dry run shows:

✓ cloud-sql-instance (latest)
                                                    
  Done! Dry run completed. No components vendored.  
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am a bit confused. With a dry run nothing should be vendored.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What happens without dry run?

Dan Hansen avatar
Dan Hansen

This is all the information I get with atmos vendor pull

$ atmos vendor pull
x cloud-sql-instance (latest)
                                                         
  Vendored 0 components. Failed to vendor 1 components.
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, ok - that’s helpful

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is this a public oci image?

Dan Hansen avatar
Dan Hansen

It’s private. My brief look at go-containerregistry made it seem like my Docker-configured credHelpers should be used to authenticate

Dan Hansen avatar
Dan Hansen
apiVersion: atmos/v1
kind: AtmosVendorConfig
spec:
  sources:
    - component: "cloud-sql-instance"
      source: "oci://us-central1-docker.pkg.dev/org/repo/default:{{.Version}}"
      version: latest
      targets:
        - components/terraform/vendor/cloud-sql-instance
      included_paths:
        - '**'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My guess is it’s Auth related and I don’t know if we’ve tested on private repo

Dan Hansen avatar
Dan Hansen

That’d be my guess, too. Should these installPkgMsgs end up in the logs? https://github.com/cloudposse/atmos/blob/main/internal/exec/vendor_model.go#L289

			if err := processOciImage(atmosConfig, p.uri, tempDir); err != nil {
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you have a chance to take a look and fix it, we’ll gladly merge the PR :-)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Otherwise I will add a task to fix it, but not sure when we can get to it

Dan Hansen avatar
Dan Hansen

The error message is only printed if we’re moving on to install another component. In my case I only have one vendored component, so the completion message is printed instead

Dan Hansen avatar
Dan Hansen

I’m also not sure I understand why all the logging is guarded by !isTTY checks. I would think we’d want logs even if we’re displaying the terminal ui?

Dan Hansen avatar
Dan Hansen
#936 [Vendor] Print error message when an error occurs during installation of the last vendored component

what

Adds an error message to the output of atmos vendor pull for the last component in a vendor manifest

references

closes #935

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm, yes, that is probably because it messes up the UI, but that just means we need to find a better way to handle it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe we should force a log file when the UI is used, and direct the user to the log file in the event of errors

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or in the UI we should print it below the component before proceeding to the next component

Petr Dondukov avatar
Petr Dondukov

Is it possible to generate required_providers block via atmos?

Petr Dondukov avatar
Petr Dondukov
{
  "terraform": {
    "required_providers": {
      "test": {
        "source": "test"
      }
    }
  }
}

As it is done now for the backend block

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not at this time. Only providers and backends. Can you describe your use case?

Petr Dondukov avatar
Petr Dondukov

Is it possible to generate code from atmos itself? We need an analog of terragrunt’s generate function to override the configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Petr Dondukov this is definitely a common request, especially from users migrating from other toolchains. I wouldn’t rule it out, and we’ve definitely loosened our stance on this (we used to be starkly against it), but we have added some conventional ways for providers and backends. I think if we were to implement generation, we would not want to take the same approach as for backends and providers. We do not want to replace HCL with YAML.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could you help share some of your use-cases?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, we have over 165 components that we’ve written and the only code generation we rely on is provider generation and backend generation. Everything else we’ve accomplished with atmos, without code generation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Cloud Posse Terraform Components

Collection of Cloud Posse Terraform Components used with the Cloud Posse Reference Architecture

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, if we showed you how you could use terragrunt with atmos, would that be a good stop-gap?

Petr Dondukov avatar
Petr Dondukov

Yes. We’re using terragrunt right now, our infrastructure is complex, and we’d like to simplify it. I am currently looking for a way to redefine provider versions (or define them from scratch), as we have historically had a parent and child module approach and have had to completely remove the provider block from hundreds of our terraform modules. We generate the provider block via terragrunt and that’s how we have the infrastructure built; not the best solution, of course. Also, even without terragrunt we heavily use child modules in other parent modules, which is why this is necessary for us.

Also, I found that we need to define in terraform modules the configuration for name_template for atmos to work, or I’m passing this through the cloudposse/context provider for the whole stack, like this: name_template: "{{.providers.context.values.tenant}}-{{.providers.context.values.environment}}-{{.providers.context.values.resource.name}}-{{.providers.context.values.region}}-{{.providers.context.values.resource.number}}". But without specifying the source = "cloudposse/context" parameter in the required_providers block, terraform gives an error that it can’t find the context provider in the registry. An option would be to describe this block in each terraform module, but we want to retain versatility.

Petr Dondukov avatar
Petr Dondukov

In general, the first one is not very critical. Now I’m more interested in how to throw a source for the cloudposse/context provider without fixing the terraform modules themselves.

Petr Dondukov avatar
Petr Dondukov

@Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Provider is definitely in the registry. Have you seen this demo? https://github.com/cloudposse/atmos/tree/main/examples/demo-context

Petr Dondukov avatar
Petr Dondukov

Yes, I understand. However, in order to use the context provider we need to declare it in the terraform module, to pass the source = "cloudposse/context" parameter. We don’t want to add this to the terraform module itself, so as not to lose versatility, but to make do with the atmos tool.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


We don’t want to add this to the terraform module itself, so as not to lose versatility, but to make do with the atmos tool.
Just curious, is this because this context provider is perceived as an atmos feature?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If the resources in the module do not use the resources provided by the context provider, then there’s no reason to use the context provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The context provider is a modern alternative to the terraform-null-label module (or any naming module, as others exist now)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: name_template this is how the “slug” or “id” is generated for atmos components, so you can refer to it with a simple, programmatically consistent name.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this setting:

name_template: "{{.providers.context.values.tenant}}-{{.providers.context.values.environment}}-{{.providers.context.values.resource.name}}-{{.providers.context.values.region}}-{{.providers.context.values.resource.number}}"

you can configure that based on any vars instead.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So let’s say all of your root modules defined “name”, “location”, and “account”; then you can define a name template like this:

name_template: "{{.vars.account}}-{{.vars.location}}-{{.vars.name}}-{{.component_name}}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This assumes every root module has an HCL variable defined for account, location, and name.
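
As a sketch, the pieces fit together like this (the component name and values below are hypothetical; the template itself goes under the stacks section of atmos.yaml):

```yaml
# atmos.yaml (sketch): name_template lives under the stacks section
stacks:
  name_template: "{{.vars.account}}-{{.vars.location}}-{{.vars.name}}-{{.component_name}}"

# A stack manifest (hypothetical component "myapp") would then provide the vars:
# components:
#   terraform:
#     myapp:
#       vars:
#         account: acme
#         location: use1
#         name: myapp
```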

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure if I’m reading between the lines correctly :smiley:. It sounds like maybe the concern was needing to use .providers.context, and that’s not needed.

Petr Dondukov avatar
Petr Dondukov

Unfortunately, we don’t use these variables. And if we set them at the stack level, terraform raises an error about unused variables.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, so I think we’re getting somewhere

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So basically, you need to be able to pull these from somewhere else, that’s neither providers nor vars.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think there’s a solution for this. The settings field of a component is a free-form map.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You should be able to use .settings instead of .providers

name_template: "{{.settings.context.account}}-{{.settings.context.location}}-{{.settings.context.name}}-{{.component_name}}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can you confirm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and by the way, .account and .location and .name were totally arbitrary. You can establish whatever convention you like.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, the settings section is a free-form map, so it can be used to “store” data/metadata and then use it in other configs. Since settings is a free-form map, you can use any subsection under it (e.g. settings.context)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and in the name_template you can use any Atmos section, not only vars or providers

In the Go template tokens, you can use any Atmos section (e.g. vars, providers, settings)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so this is perfectly fine

name_template: "{{.settings.context.account}}-{{.settings.context.location}}-{{.settings.context.name}}"

settings:
  context:
    account: xxxx
    location: yyyy
    name: zzzz
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it will not define any new vars and will not send them to the TF components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Petr Dondukov ^

let us know if we can help with this or you need more details

Petr Dondukov avatar
Petr Dondukov

Thanx a lot! That is exactly what I need!

2025-01-14

Dave avatar

Hey! I am pretty new to atmos, so sorry for the beginner questions. Anyhow, I am a little confused about how templating works for the yaml files. First of all, I do not seem to be able to have them evaluated. I have a pretty simple stack config like this:

import:
  - deploy/dev/_defaults.yaml
  - catalog/tfstate-backend.yaml
  
vars:
  tags:
    Stack: tfstate-backend
    test: !template '{{ .stack }}'
    test2: !exec echo value2

I would expect to have the test key in the tags variable templated (somehow, I am not exactly sure what I would see there), but it does not get evaluated. On the other hand, the !exec function is evaluated nicely. This is the output of “atmos describe stacks --components tfstate-backend”:

dev:
    components:
        terraform:
            tfstate-backend:
                ...
                stack: dev
                vars:
                    enable_server_side_encryption: true
                    enabled: true
                    force_destroy: false
                    name: tfstate
                    prevent_unencrypted_uploads: true
                    region: us-east-1
                    stage: dev
                    tags:
                        Managed-By: Terraform
                        Stack: tfstate-backend
                        test: '{{ .stack }}'
                        test2: |
                            value2
                workspace: dev

So can you help me figure out how I can get these templates evaluated? A complete example would also be nice, where I could see something like this work; unfortunately I did not find anything similar in the provided examples.

Also, what I would actually like to figure out is how I could reference variables defined at any level (global, or any other level) and use them to construct values to pass to the component (e.g. define some prefix at an upper level and a list of items at some lower level, and pass the combined values to the component) in the stack configuration.

I was previously using terragrunt a lot, and there it is very straightforward to do this, because you can just reference any object and use the same functions as I would use in terraform (and some more). Can you help me with some example of how atmos is designed to handle such a use case? I found this in the docs, which seems similar to what I am looking for: https://atmos.tools/core-concepts/stacks/yaml-functions/template#advanced-examples But this example is not very complete, and with not being able to evaluate the templates I am just stuck on how to move forward.

It would also be nice to have a reference for what exactly the context of this template evaluation is (e.g. that was the only place I found in the documentation where .settings is referenced in a template, so I wonder what else is available).

Thanks for the attention and for reading all of this ^^ :)

!template | atmos

Handle outputs containing maps or lists returned from the atmos.Component template function

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like templating is disabled in your atmos.yaml. It’s disabled by default
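
For reference, templating is enabled in atmos.yaml like this (a minimal sketch; see the Atmos template settings docs for the full set of options):

```yaml
templates:
  settings:
    enabled: true
```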

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I will get back to this later. In general, I recommend refactoring questions into individual messages as they are easier to answer individually with threads for each topic.

Dave avatar

thanks Erik, that is right, I had a typo. I had this in my config:

tempates:
  settings:
    enabled: true
1
Dave avatar

Also, one more thing I find a little strange. Previously, when I was working with terraform and terragrunt, one of the things we thought important was being able to explicitly define the list of variables that are passed to a specific terraform root module (called a component in Atmos’s case). The reason is that I want to see in the configuration which components depend on which value, so I can be sure about the blast radius of a change: when I change a variable (or delete it, for that matter), I know which components are referencing it (and finding them is really just a cmd+shift+f in my editor).

As I see it, with Atmos doing deep merging of variables it was not designed this way: any common vars coming from a _defaults file or a mixin are passed to every single component, so it is hard to see the exact dependencies (not to mention issues with two components that expect some input in different ways under the same variable name). I could probably devise a setup using templates to handle this, but I would feel like I am not using Atmos the way it was intended. So I would love some clarification on this, or to hear some opinions on how you think this should be handled. thanks again :)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Look into multiple inheritance and abstract components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For detecting changes, we have atmos describe affected, which compiles the fully deep-merged configurations and then compares them between two git refs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Right now, this returns a large amount of json data and is predominantly concerned with supporting our GitHub actions. We intend to present a much simpler command with human friendly output :-)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I expect that we’ll get to it in Q1

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can see why explicitly referencing variables the way you do in Terragrunt is convenient for the use case you described. I think it would be helpful if we wrote up some docs specifically for users coming from Terragrunt, because atmos is quite a different way of approaching the problem. Terramate, for example, is very similar to Terragrunt and is a simpler migration for Terragrunt users. Anyways, I will have to pause here and follow up throughout the day.

Dave avatar

Thanks Erik again for the answer. However, as far as I understand (but I might be wrong here), the diff of the deep-merged config does not help. My concern is that we pass all the vars that are merged in any way (through inheritance, imports, etc.) to the component, but some of them might not actually be used by the component; they are there for others only. Maybe in such a situation the imports and inheritance were not used properly, but you cannot guarantee that everyone will do such a good job, so in my opinion explicit passing of inputs to the root modules is always better (not instead of the inheritance and imports, but together with them).

So as an example, I have vars a, b, and c defined somehow. component1 might be using a and b, component2 might be using a and c, and there might be a third one not using any of these at all, but as I understand it all of these vars will be passed to all 3 components in the stack. So how will I know (other than running a plan) which component will be affected if I change var c? I understand that in many cases this is not going to be a problem, but once you have a complex system with lots of components (20+ is a lot already, I think), variables, and environments, this can make the difference between having a big ball of mud and a system you can actually understand.

Anyway it might be just some additional convention I might use, I would just like to see the options and how exactly this was intended to be used :)

Hans D avatar

running 80+ components, and a large number of atmos stacks:

• most variables are nicely scoped per component, so unless you put stuff in _defaults, it should not be a real issue

• we are keeping the atmos describe stacks output (postprocessed by yq) in our repo, as well as the tfvars. That way we have full visibility into change impact. (We also use the files for various other reasons, incl. more advanced OPA policies.)
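
The blast-radius question above can be approximated by post-processing the `atmos describe stacks` output. A minimal sketch (the helper and sample data are hypothetical; real output has many more fields, and the `--format json` flag is assumed):

```python
import json

# Hypothetical helper: given the parsed JSON from `atmos describe stacks --format json`,
# list the (stack, component) pairs whose vars include a given variable name.
def components_using_var(stacks: dict, var_name: str) -> list:
    hits = []
    for stack_name, stack in stacks.items():
        for comp_name, comp in stack.get("components", {}).get("terraform", {}).items():
            if var_name in comp.get("vars", {}):
                hits.append((stack_name, comp_name))
    return hits

# Sample shaped like the `describe stacks` output shown earlier in this thread
sample = json.loads("""
{
  "dev": {
    "components": {
      "terraform": {
        "vpc": {"vars": {"a": 1, "b": 2}},
        "vpc-flow-logs-bucket": {"vars": {"a": 1, "c": 3}}
      }
    }
  }
}
""")
print(components_using_var(sample, "c"))  # → [('dev', 'vpc-flow-logs-bucket')]
```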

Dave avatar

Thanks Hans for the answer. I think that might be the key: to have the variables scoped to the component. Thanks for pointing it out. I will still need to figure out exactly how to organize things to my liking, but at least I think I am starting to understand the idea.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, what @Hans D points out is the right convention. I guess we haven’t really documented that anywhere. Relatedly, I think it would be neat if atmos had an optional mode to validate that variables passed to terraform are accepted by the component. This would make things less error-prone.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) would that be complicated? I know you already load the variables of the HCL component somewhere in atmos.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dave in Atmos, all variables are scoped to a particular component (inheritance or imports should not matter) - at least this is how it should be configured. If you are using global vars (e.g. scoped to the entire stack), then yes, all of them will be sent to all TF components used in the stack - but that’s not how it should be configured: no globals, except the context variables like stage, environment, etc., which are used in all of Cloud Posse’s components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can share your config, we can take a look

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) not scoped to a particular component if global vars are used; those are then inherited by all components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


So as an example, I have var a, b and c defined somehow. component1 might be using a and b, component 2 might be using a, c and there might be a 3rd one not using any of these at all, but as I understand all of these vars will be passed to all 3 components in the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I might not be understanding the problem completely, but it sounds like those vars are “global”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the question probably is: how do we prevent declaring vars in Atmos manifests for Atmos components that are not used by the corresponding TF components. We can add this to Atmos (it already has all the information about the configured vars in the Atmos component and all the vars used by the TF component/module)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are going to add this validation to the atmos validate stacks and atmos validate component commands: if a variable is configured in Atmos stack manifests but is not used by the TF component(s), show an error. This will be in one of the new releases

@Dave ^

1
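The validation described above could be sketched roughly like this (a naive regex approach for illustration only; a real implementation would use a proper HCL parser, and all names below are hypothetical):

```python
import re

# Naive sketch: extract `variable "name"` declarations from a root module's HCL source
def declared_variables(tf_source: str) -> set:
    return set(re.findall(r'variable\s+"([^"]+)"', tf_source))

# Hypothetical root module source and stack-manifest vars
tf_source = '''
variable "account" {}
variable "location" {}
'''
configured_in_stack = {"account", "location", "name"}

# Vars configured in the stack manifest but not declared by the component
unused = configured_in_stack - declared_variables(tf_source)
print(sorted(unused))  # → ['name']
```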
Dave avatar

that sounds pretty nice, and thanks a lot for explaining all this

RB avatar

Hi all. I saw some questions related to remote state. Are there docs on the decision to use remote state vs terraform data sources? I skimmed this document and the data sources that it references are related to stack yaml instead of terraform.

https://atmos.tools/core-concepts/share-data/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think @Matt Calhoun is working on some updates to the docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Everything related to sharing data, including remote state will be consolidated in the share data section

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We should probably have a high-level overview since there are different methods with different trade-offs

1
Dave avatar

next thing I am struggling with: I am trying to use the vpc and the vpc-flow-log-bucket components, and somehow I cannot make it work. I was following the example (in a simplified way, I have a pretty simple structure), but I get the following error:

╷
│ Error: stack name pattern '{stage}' includes '{stage}', but stage is not provided
│ 
│   with module.vpc_flow_logs_bucket[0].data.utils_component_config.config[0],
│   on .terraform/modules/vpc_flow_logs_bucket/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│ 
╵

Unfortunately I can’t seem to figure out what is wrong, or what I should be checking… So if anyone could give a hint that would be great :)

Dave avatar

so this is happening when the vpc-flow-log-bucket is already deployed, and I am trying to create a plan for the vpc component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Just to confirm, are you trying the advanced quick start?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Using Remote State | atmos

Terraform natively supports the concept of remote state and there’s a very easy way to access the outputs of one Terraform component in another component. We simplify this using the remote-state module, which is stack-aware and can be used to access the remote state of a component in the same or a different Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the vpc-flow-log-bucket is already deployed, vpc uses the remote-state to get outputs from the vpc-flow-log-bucket component, which uses the utils provider.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and since terraform executes the provider code from the component/module directory, the provider can’t find atmos.yaml there - it should be in one of the known locations (or provided via the ENV vars)
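
For example, you can point the provider at your repo via the Atmos environment variables before running terraform (the paths below are placeholders; adjust to your repo layout):

```shell
# Directory containing atmos.yaml (placeholder path)
export ATMOS_CLI_CONFIG_PATH=/path/to/repo
# Repo root containing the stacks/ and components/ directories (placeholder path)
export ATMOS_BASE_PATH=/path/to/repo
```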

Dave avatar

Yes, I am trying to follow the advanced quick start, I just have it a little simplified, because I only have the stage in the name pattern (since I have a single level hierarchy).

Dave avatar

unfortunately I did not find anything new in that guide, I think I followed everything described over there, but I still get this error

Dave avatar

I already had the env vars set (without them the error was completely different)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using absolute or relative paths?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(Also, this particular rough edge will be addressed in a forthcoming update to atmos; a PR is in progress)

cricketsc avatar
cricketsc

I’m not 100% sure if this is conceptually a good question, but is there a notion of reserved/official/already used by Atmos keys in a component’s settings? Is there a systematic way to find out what these keys are? For example, It looks to me like github and integrations might be examples of such keys?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fair question. It would be in the atmos schema.json

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We should have this list though. Let me see what we can do.

cricketsc avatar
cricketsc

Thanks for the info @Erik Osterman (Cloud Posse)!

cricketsc avatar
cricketsc

The use case is that I want to add some info to stacks but I want to be sure doing so won’t impact Atmos. Is there a designated path for such custom info?

1