#atmos (2024-10)
2024-10-01

hey, I’m debugging some templates and I noticed that atmos is not updating the result correctly. Does it cache the template output? If so, is there a way to clear it?

@Andriy Knysh (Cloud Posse)

@Miguel Zablah Atmos caches the results of the atmos.Component
functions only for the same component and stack, and only for one command execution. If you run atmos terraform ...
again, it will not use the cached results anymore
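For context, the function being discussed is used from stack manifests like this; a minimal sketch with made-up component and variable names:

```yaml
# Hypothetical stack manifest: identical (component, stack) calls within
# one command execution reuse the cached atmos.Component result.
components:
  terraform:
    app:
      vars:
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
```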

Thanks! It looks like I had an error in another file; that’s why I thought it was maybe the cache. Thanks!

I’m having an issue. I’m using atmos to auto-generate the backend, and I’m getting Error: Backend configuration changed. I’m not quite sure why, as I don’t see Atmos generating the backend files

hmm, atmos terraform generate backend is not generating backend files. Any tips?

run atmos validate stacks to see if you have a wrong YAML

all successful

and make sure the base_path is correct, and the atmos CLI can find the atmos.yaml
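For reference, these are the path settings being referred to; a minimal atmos.yaml sketch with assumed values:

```yaml
# atmos.yaml (values are assumptions for illustration)
base_path: "."          # repo root, relative to where atmos runs
components:
  terraform:
    base_path: "components/terraform"
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
```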

for backends to be auto-generated, you need to configure the following:

- In atmos.yaml:

components:
  terraform:
    # Can also be set using the `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var
    # or the `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true

• Configure the backend in YAML https://atmos.tools/quick-start/advanced/configure-terraform-backend/#configure-terraform-s3-backend-with-atmos

you need to have this config:

terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"

in the defaults for the Org (e.g. in stacks/orgs/acme/_defaults.yaml) if you have one backend for the entire Org

What is strange is that it was working. This is the error:

atmos terraform generate backend wazuh -s dev --logs-level debug

template: all-atmos-sections35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exit status 1

Error: Backend configuration changed
A change in the backend configuration has been detected, which may require migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init -migrate-state". If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".

try to remove the .terraform folder and run it again

Same error with the .terraform folder removed

Ahh ran with log_level trace, and found the issue

what was the issue?

The component had references to vars in other components

removed .terraform in those other components

TY for trace

I should make a script to just clean-slate all those .terraform directories

Ya, we should have that as a built-in command.

@Dennis DeMarco for now, instead of a script add a custom command to atmos

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
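For example, the clean-slate idea above could be wired up as a custom command; a hypothetical sketch in atmos.yaml (the command name and find invocation are assumptions, not a built-in):

```yaml
# atmos.yaml -- hypothetical custom command to clean-slate .terraform dirs
commands:
  - name: clean
    description: Remove all .terraform directories under the components folder
    steps:
      - find components/terraform -type d -name ".terraform" -prune -exec rm -rf {} +
```

Then `atmos clean` would show up under `atmos help` alongside the built-in commands.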
2024-10-02

Hi! I’m having an issue with this workflow:
cloudposse/github-action-atmos-affected-stacks
where it will do a plan for all stacks even when I have some disabled stacks in the CI/CD, and since one of the components is dependent on another it fails.
I get the same error when running this locally:
atmos describe affected --include-settings=false --verbose=true
is there a way to skip a stack or mark it as ignore?
this is the error:
template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference
it’s complaining about this concat I do:
'{{ concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets | default (list)) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets | default (list)) | toRawJson }}'
but this works when vpc is applied; since this stack is not being used at the moment it fails
any idea how to fix this?

@Igor Rodionov

@Andriy Knysh (Cloud Posse)

@Miguel Zablah by “disabled component” do you mean you set enabled: false for it?

Yes

i see the issue. We’ll review and update it in the next Atmos release

That will be awesome since this is a blocker for us now

@Andriy Knysh (Cloud Posse) any ETA on this?

Hi @Miguel Zablah We don’t have an ETA yet. But should be soon

we’ll try to do it in the next few days

thanks!!

@Andriy Knysh (Cloud Posse) Is this the fix? Release - v1.89.0. If so, I have tried running it but I still get the same error

what error are you getting?

must be something wrong with the Go template

@Andriy Knysh (Cloud Posse) the problem is that the vpc component is not applied yet

so there is no output:
(atmos.Component "vpc" .stack).outputs.vpc_private_subnets

I do not know how we should handle that

yes, but I was thinking that since it had:

terraform:
  settings:
    github:
      actions_enabled: false

it should get ignored, no?

Github actions would

but atmos still has to process the YAML properly

so that would not solve your problem

my issue is that github actions is not doing this, since it does a normal:
atmos describe affected --include-settings=false --verbose=true

atmos needs a valid configuration of all components on any call

have you tried that locally?

yes, and I get the same error because that stack is not applied

but that is intentional, since that stack is mostly for quick test/debug

atmos needs to parse all stack configurations as valid, then it operates on the stack you need

if there is any error or misconfiguration in the YAML, atmos will fail on any command

well, there is no error in the configuration; the issue is that the stack is not applied

but that’s the error

but I guess maybe I can mock the values with some Go template or something to ignore this

will that work?


@Miguel Zablah try this one
{{ concat (default (list) (atmos.Component "vpc" .stack).outputs.vpc_private_subnets) (default (list) (atmos.Component "vpc" .stack).outputs.vpc_public_subnets) | toRawJson }}

?

I get the same error:
template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference

ok

let’s try another one

like another version of this?

{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (get . "vpc_private_subnets") (get . "vpc_public_subnets") | toRawJson }}

atmos templates support Sprig and Gomplate functions

it’s not supposed to work on a component that is not applied yet, and Atmos doesn’t know if it was applied or not

we need to handle whatever “terraform output” returns for such a component in the template

same error:
template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (get . "vpc_private_subnets") (get . "vpc_public_subnets")>: error calling concat: Cannot concat type string as list

@Andriy Knysh (Cloud Posse) how about mocking or default values?

that’s different error

sec

aah that is right

{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (coalesce (get . "vpc_private_subnets") (list)) (coalesce (get . "vpc_public_subnets") (list)) | toRawJson }}

try this one ^

and we are back:
template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (coalesce (get . "vpc_private_subnets") (list)) (coalesce (get . "vpc_public_subnets") (list))>: error calling concat: runtime error: invalid memory address or nil pointer dereference

just to test

{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (coalesce (get . "vpc_private_subnets") (list "test1")) (coalesce (get . "vpc_public_subnets") (list "test2")) | toRawJson }}

I get a different error now:
template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (coalesce (get . "vpc_private_subnets") (list "test1")) (coalesce (get . "vpc_public_subnets") (list "test2"))>: error calling concat: Cannot concat type map as list

@Miguel Zablah is that not-provisioned component enabled or disabled? If it’s enabled, set enabled: false to disable it and try again

{{ concat (coalesce (get (default (dict) (atmos.Component "vpc" .stack).outputs) "vpc_private_subnets") (list "test1")) (coalesce (get (default (dict) (atmos.Component "vpc" .stack).outputs) "vpc_public_subnets") (list "test2")) | toRawJson }}

sorry had to run an errand

@Igor Rodionov I got this error:
template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference
so same error

@Andriy Knysh (Cloud Posse) where do I set this?

you add enabled: false in the YAML config for your component:

components:
  terraform:
    my-component:
      vars:
        enabled: false

but this will only work for components that support that, right? or will atmos actually ignore it?

yes, the enabled variable only exists in the components that support it (mostly all Cloud Posse components)

Atmos sends the vars section to terraform

@Andriy Knysh (Cloud Posse) I’m using the AWS VPC module, so I don’t think it will do much here. Also, this will only return empty outputs, right? So this might still not work?
2024-10-03

Good morning, I’m new here but this looks like a great community. I’ve been using various Cloud Posse modules for terraform for a while, but am now trying to set up a new AWS account from scratch to learn the patterns for the higher-level setup. I’ve run into a problem and am hoping for some help. I feel like it’s probably just a setting somewhere, but for the life of me I can’t find it.
So I have been working through the Cold Start and have gotten through the account setup successfully, but running the account-map commands is resulting in errors. I’ll walk through the steps I’ve tried in case my tweaks have confused the root issue… For reference, I am using all the latest versions of the various components mentioned and pulled them in again just before posting this.
- When I first ran the atmos terraform deploy account-map -s core-gbl-root command, I got an error that it was unable to find a stack file in the /stacks/orgs folder. That was fine as I wasn’t using that folder, but from the error message it was clear that it was using a default atmos.yaml (this one) that includes orgs/**/* in the include_paths, and not the one that I have been using on my machine. I’ve spent a long time trying to get it to use my local yaml and finally gave up and just added an empty file in the orgs folder to get past that error. Then I get to a new error…
- Now if I run the plan for account-map, I get what looks like a correct full plan and then a new error at the end:

Error: Could not find the component 'account' in the stack 'core-gbl-root'.
Check that all the context variables are correctly defined in the stack manifests.
Are the component and stack names correct? Did you forget an import?

  with module.accounts.data.utils_component_config.config[0],
  on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
   1: data "utils_component_config" "config" {

exit status 1
If I run atmos validate component account -s core-gbl-root I get successful validations, and the same with validating account-map.
I’ve tried deleting the .terraform folders from both the accounts and account-map components and re-run the applies, but get the same thing.
I’ve run with both Debug and Trace logs and am not seeing anything that points to where this error may be coming from.
I’ve been at this for hours yesterday and a few more hours this morning and decided it was time to seek some help.
Thanks for any advice!

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) @Dan Miller (Cloud Posse)

I think it could be related to the path of your atmos.yaml file, since you deal with remote-state and the terraform-provider-utils Terraform provider. Check this: https://atmos.tools/quick-start/simple/configure-cli/#config-file-location

I thought it might be that, but it is clearly registering my atmos.yaml file, as it is in the main directory of my repo and I’ve found atmos to respond to settings there (such as the include/exclude paths and log levels), but the other terraform providers aren’t picking it up. (EDIT! Just saw the bit at the bottom… exploring now)

@Drew Fulton Terraform executes the provider code from the component directory (e.g. components/terraform/my-component). We don’t want to put atmos.yaml into each component directory, so we put it into one of the known places (as described in the doc), or we can use ENV vars, so both the Atmos binary and the provider binary can find it

Got it! Is there a best practice for selecting where to put it to prevent duplication?

for Docker containers (geodesic is one of them), we put it in rootfs/usr/local/etc/atmos/atmos.yaml in the repo, and then in the Dockerfile we do:

COPY rootfs/ /

so inside the container it will be in /usr/local/etc/atmos/atmos.yaml, which is a known path for Atmos and the provider



hi all, I’m brand new to atmos, and I was quite mind-blown that I had not discovered this tool yet. While still figuring my way through the documentation, I have a question about components and stacks.
If I, as an ops engineer, were to create and standardise my own components in a repository to store all the standard “libraries” of components, is it advisable for the stacks and atmos.yaml to be in a separate repository?
Meaning, a developer would only need to declare the various components inside a stacks folder of their own project’s repository, i.e. only write YAML files and not need to deal with/write terraform code.
We would then, during execution, have a GitHub workflow that clones the core component repository into the project repository and completes the infra-related deployment. Is that something that is supported?

Yes, centrally defined components are best for larger organizations that might have multiple infrastructure repositories

However, it can also be burdensome when iterating quickly to have to push all changes via an upstream component git registry

yeah, thanks for the input. We’re not a large organization, so there will be some trade-offs we’ll have to consider if we try to limit the devs to only working with YAML. Still in the early stages of exploring the capabilities of atmos.
2024-10-04

atlantis integration question. I have this config in atmos.yaml:

project_templates:
  project-1:
    # generate a project entry for each component in every stack
    name: "{tenant}-{stage}-{environment}-{component}"
    workspace: "{workspace}"
    dir: "./components/terraform/{component}"
    terraform_version: v1.6.3
    delete_source_branch_on_merge: true
    plan_requirements: [undiverged]
    apply_requirements: [mergeable, undiverged]
    autoplan:
      enabled: true
      when_modified:
        - '**/*.tf'
        - "varfiles/{tenant}-{stage}-{environment}-{component}.tfvars.json"
        - "backends/{tenant}-{stage}-{environment}-{component}.tf"

the plan_requirements field doesn’t seem to have any effect on the generated atlantis.yaml

there are a lot of options recently added that atmos might not recognize

although:

plan_requirements: [undiverged]
apply_requirements: [mergeable, undiverged]

I think they can only be declared in the server-side repo config repo.yaml

mmm no, they can be declared in the atlantis.yaml

FYI, it is not recommended to allow users to override workflows, so it is much safer to configure workflows and repo options in the server-side config and leave the atlantis.yaml as simple as possible

Yeah, it makes sense to have these values in the server-side repo config… I was trying some test scenarios and encountered the issue. A related question: for the ‘when_modified’ field in atlantis.yaml, I was trying to find some documentation about how it determines the modified files. Is it just determined by whether the file is modified in the PR?

yes, files modified in the PR

it’s a regex used for autoplan

@jose.amengual another issue I am running into with atlantis. I have a question about the ‘when_modified’ field in the repo-level config. We are using dynamic repo config generation and, in the same way, generating the var files for the project. The var files are not committed to the git repo, and since they’re not committed I believe when_modified causes an issue by not planning the project. Removing the when_modified field from the config doesn’t seem to help because of the default values. Do we have a way to ignore this field and just plan the projects, regardless of which files in the project were modified?

2024-10-05

trying to create a super simple example of atmos + cloudposse modules to see if they will work for our needs. I’m using the s3-bucket component, but when I do a plan on it, it prints out the terraform plan but then shows:

Error: failed to find a match for the import '/opt/test/components/terraform/s3-bucket/stacks/orgs/**/*.yaml' ('/opt/test/components/terraform/s3-bucket/stacks/orgs' + '**/*.yaml')

I can’t make heads or tails of this error… there is no stacks/orgs under the s3-bucket module when I pulled it in with atmos vendor pull
Thanks in advance

please DM me your config, i’ll review it
2024-10-06

Continuing from my understanding of the design pattern, based on this screenshot, can I ask a few questions:
- The infrastructure repository holds all the atmos components, stacks, and modules?
- The application repository only needs to write the taskdef.json for deployment into ECS?
- If there is additional infrastructure the application needs (for example s3, dynamodb, etc.), the approach is to have the developer open a PR to the infrastructure repository first with the necessary “stack” information, prior to performing any application-type deployment?

Yes, they would open a PR and add the necessary components to the stack configuration.

…and prior to performing any application deployment that depends on it.

Cool, I think I’m beginning to have a clearer understanding of the Atmos mindset… thanks!!
2024-10-07
2024-10-08

Hey, I’m trying to execute a plan and I’m getting the following output:
% atmos terraform plan keycloak_sg -s deploy/dev/us-east-1
Variables for the component 'keycloak_sg' in the stack 'deploy/dev/us-east-1':
aws_account_profile: [redacted]
cloud_provider: aws
environment: dev
region: us-east-1
team: [redacted]
tfstate_bucket: [redacted]
vpc_cidr_blocks:
- 172.80.0.0/16
- 172.81.0.0/16
vpc_id: [redacted]
Writing the variables to file:
components/terraform/sg/-keycloak_sg.terraform.tfvars.json
Using ENV vars:
TF_IN_AUTOMATION=true
Executing command:
/opt/homebrew/bin/tofu init -reconfigure
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.70.0
OpenTofu has been successfully initialized!
Command info:
Terraform binary: tofu
Terraform command: plan
Arguments and flags: []
Component: keycloak_sg
Terraform component: sg
Stack: deploy/dev/us-east-1
Working dir: components/terraform/sg
Executing command:
/opt/homebrew/bin/tofu workspace select -keycloak_sg
Usage: tofu [global options] workspace select NAME
Select a different OpenTofu workspace.
Options:
-or-create=false Create the OpenTofu workspace if it doesn't exist.
-var 'foo=bar' Set a value for one of the input variables in the root
module of the configuration. Use this option more than
once to set more than one variable.
-var-file=filename Load variable values from the given file, in addition
to the default files terraform.tfvars and *.auto.tfvars.
Use this option more than once to include more than one
variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg
Executing command:
/opt/homebrew/bin/tofu workspace new -keycloak_sg
Usage: tofu [global options] workspace new [OPTIONS] NAME
Create a new OpenTofu workspace.
Options:
-lock=false Don't hold a state lock during the operation. This is
dangerous if others might concurrently run commands
against the same workspace.
-lock-timeout=0s Duration to retry a state lock.
-state=path Copy an existing state file into the new workspace.
-var 'foo=bar' Set a value for one of the input variables in the root
module of the configuration. Use this option more than
once to set more than one variable.
-var-file=filename Load variable values from the given file, in addition
to the default files terraform.tfvars and *.auto.tfvars.
Use this option more than once to include more than one
variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg
exit status 1
goroutine 1 [running]:
runtime/debug.Stack()
runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x105c70460, 0x14000b306e0})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x105c70460, 0x14000b306e0})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
github.com/cloudposse/atmos/cmd.init.func17(0x10750ef60, {0x14000853480, 0x4, 0x4})
github.com/cloudposse/atmos/cmd/terraform.go:33 +0x150
github.com/spf13/cobra.(*Command).execute(0x10750ef60, {0x14000853480, 0x4, 0x4})
github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x10750ec80)
github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
github.com/cloudposse/atmos/cmd/root.go:88 +0x214
main.main()
github.com/cloudposse/atmos/main.go:9 +0x1c
I’m not really sure what I can do since the error message suggests an underlying tofu/terraform error and not an atmos one. I bet my stack/component has something wrong but I’m not entirely sure why. The atmos.yaml is the same of the previous message I sent here yesterday. I’d appreciate any pointers

I just now tried something that seems to work. Since I’m using the “{stage}” name pattern inside atmos.yaml, I set “vars.stage: dev” inside my dev.yaml stack file and it did the trick. Is this a correct pattern? Thanks!
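The workaround above can be sketched as follows; this assumes the name pattern in atmos.yaml interpolates {stage}, so an unset stage would produce the malformed workspace name -keycloak_sg seen in the log (the file name comes from the message):

```yaml
# dev.yaml (stack manifest)
vars:
  stage: dev   # fills the {stage} token in the name pattern
```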

@Andriy Knysh (Cloud Posse)
2024-10-09

Hey, on a setup like this:

import:
  - catalog/keycloak/defaults

components:
  terraform:
    keycloak_route53_zones:
      vars:
        zones:
          "[redacted]":
            comment: "zone made for the keycloak sso"
    keycloak_acm:
      vars:
        domain_name: [redacted]
        zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id }}'

My keycloak_acm component is failing to actually get the output of the one above it. Am I doing this fundamentally wrong? The defaults.yaml being imported looks like this:

components:
  terraform:
    keycloak_route53_zones:
      backend:
        s3:
          workspace_key_prefix: keycloak-route53-zones
      metadata:
        component: route53-zones
    keycloak_acm:
      backend:
        s3:
          workspace_key_prefix: keycloak-acm
      metadata:
        component: acm
      depends_on:
        - keycloak_route53_zones

Can you confirm that you see a valid zone_id as a terraform output of keycloak_route53_zones

and please make sure the component is provisioned

atmos.Component calls terraform output, so it must be in the state already

atmos terraform output keycloak_route53_zones -s aws-dev-us-east-1

gives me:

zone_id = {
  "redacted" = "redacted"
}

which is redacted, but it has the zone subdomain as the key and the id as the value

aha, so it’s returning a map for the zone id

ah!

so I guess it should be
zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id.value }}'

value is not correct

you should get the value by the map key using Go templates

{{ index .YourMap "yourKey" }}

zone_id: '{{ index (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id "<key>" }}'

where the key is the "redacted" in your example


It didn’t seem to work, and for this case I just passed the zone id directly (the actual value from the output). The error was that terraform complained that it should be less than 32 characters, which means that terraform understood my Go template as an actual string and not a template. I ran it with atmos terraform apply component -s stack after a successful plan creation

I passed the value directly because I don’t think the zone id will change in the future, but for other components this might be an issue for me. Maybe I should run it differently? I set the value exactly as suggested by Andriy

please send me your yaml config, i’ll take a look

Sure!
2024-10-10

Hello, me again… are component settings arbitrary keys?

settings:
  pepe:
    does-not-like-sushi: true

can I do that?

yes

settings is a free-form map

it’s the same as vars, but free-form; it participates in all the inheritance, meaning you can define it globally, per stack, per base component, per component, and then everything gets deep-merged into the final map
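A sketch of that deep-merge behavior, with made-up file names and keys:

```yaml
# stacks/orgs/acme/_defaults.yaml (global)
settings:
  pepe:
    does-not-like-sushi: true

# stacks/dev/us-east-1.yaml (per component)
components:
  terraform:
    my-component:
      settings:
        pepe:
          favorite-taco: al-pastor

# Effective settings for my-component after deep-merging:
#   pepe:
#     does-not-like-sushi: true
#     favorite-taco: al-pastor
```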


Is there a way to inherit metadata for all components without having to create an abstract component? Something that all components should have

imagine if this was added to something like atmos.yaml and CODEOWNERS only allows very few people to modify it

like having Sec rules for all components

no, metadata is not inherited, it’s per component

ok

because in metadata you specify all the base components for your component


Hi :wave: calling for help with configuring the gcs backend. I’m bootstrapping a GCP organization. I have a module that created a seed project with initial bits, including a gcs bucket that I would like to use for storing tfstate files. I ran atmos configured with a local tf backend for the init.
Now I’d like to move my backend from local to the bucket and go from there. I’ve added the bucket configuration to _defaults.yaml for my org:

backend_type: gcs
backend:
  gcs:
    bucket: "bucket_name"

Unfortunately atmos says that this bucket doesn’t exist, even though I have copied a test file into the bucket:

Error: Error inspecting states in the "local" backend:
querying Cloud Storage failed: storage: bucket doesn't exist

Note that atmos never uses this backend. We generate a backend.tf.json file used by terraform

Could it be a GCP permissions issue?

I am an organisation admin, have full access to the bucket, and besides, I’ve tested access to the bucket itself with the cli.

Based on your screenshot the YAML is invalid.

the hint is it’s trying to use a “local” backend type.

Configure Terraform Backends.

terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "tf-state"

Note in your YAML above, the whitespace is off

Terraform is attempting to use this: https://developer.hashicorp.com/terraform/language/backend/local, which indicates that the backend type is not getting set


Thanks Erik. I’ve double-checked everything again.
I’ve tried deleting the backend.tf.json file, disabling auto_generate_backend_file, and after that added the backend configuration directly into the module: same result.
That got me thinking something was not right with auth into GCP. I re-logged with gcloud auth application-default login and now the state is migrated into the bucket. Honestly, no idea; I’ve logged in multiple times today already, even before posting here

maybe this is the kind of tax for changing the whole workflow to atmos

can I vendor all the components using vendor.yaml? or do I have to list every component that I want to vendor?

@jose.amengual from where do you want to vendor the components?

from my iac repo

to another repo ( using atmos)

let me check

I did this:
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  # Import other vendor manifests, if necessary
  imports: []
  sources:
    - source: "github.com/jamengual/pepe-iac.git/"
      # version: "main"
      targets:
        - "./"
      included_paths:
        - "**/components/**"
        - "**/*.md"
        - "**/stacks/sandbox/**"

I was able to vendor components just fine

but not sandbox stack files for some reason

you need to add another item for the stacks folder separately

that’s how the glob lib that we are using works

we don’t like it and will revisit it, but currently it’s what it is

you mean something like:
sources:
  - source: "github.com/jamengua/pepe-iac.git/"
    # version: "main"
    targets:
      - "./"
    included_paths:
      - "**/components/**"
      - "**/*.md"
  - source: "github.com/jamengual/pepe-iac.git/"
    # version: "main"
    targets:
      - "./stacks/sandbox/"
    included_paths:
      - "stacks/sandbox/*.yaml"

included_paths:
  - "**/components/**"
  - "**/*.md"
  - "**/stacks/**"
  - "**/stacks/sandbox/**"

try this ^

so this pulled all the stacks
- "**/stacks/**"

but it looks like I can’t pull a specific subfolder

hmm, so this "**/stacks/sandbox/**"
does not work?

no

i guess you can use
- "**/stacks/**"
- "**/stacks/sandbox/**"

and exclude the other stacks in excluded_paths:

I could do that….but it makes it not very DRY

can you run the pull command with verbose output?

atmos vendor pull --verbose

atmos vendor pull --verbose
Error: unknown flag: --verbose
Usage:
atmos vendor pull [flags]
Flags:
-c, --component string Only vendor the specified component: atmos vendor pull --component <component>
--dry-run atmos vendor pull --component <component> --dry-run
-h, --help help for pull
-s, --stack string Only vendor the specified stack: atmos vendor pull --stack <stack>
--tags string Only vendor the components that have the specified tags: atmos vendor pull --tags=dev,test
-t, --type string atmos vendor pull --component <component> --type=terraform|helmfile (default "terraform")
Global Flags:
--logs-file string The file to write Atmos logs to. Logs can be written to any file or any standard file descriptor, including '/dev/stdout', '/dev/stderr' and '/dev/null' (default "/dev/stdout")
--logs-level string Logs level. Supported log levels are Trace, Debug, Info, Warning, Off. If the log level is set to Off, Atmos will not log any messages (default "Info")
--redirect-stderr string File descriptor to redirect 'stderr' to. Errors can be redirected to any file or any standard file descriptor (including '/dev/null'): atmos <command> --redirect-stderr /dev/stdout
unknown flag: --verbose
goroutine 1 [running]:
runtime/debug.Stack()
runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x10524c0c0, 0x140003edf10})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x10524c0c0, 0x140003edf10})
github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
main.main()
github.com/cloudposse/atmos/main.go:11 +0x24

Atmos 1.88.1 on darwin/arm64

oh sorry, try
ATMOS_LOGS_LEVEL=Trace atmos vendor pull

@jose.amengual also check out this example for using YAML anchors in vendor.yaml to DRY it up
https://github.com/cloudposse/atmos/blob/main/examples/demo-component-versions/vendor.yaml#L10-L20
- &library
  source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
  version: "main"
  targets:
    - "components/terraform/{{ .Component }}/{{.Version}}"
  included_paths:
    - "**/*.tf"
    - "**/*.tfvars"
    - "**/*.md"
  tags:
    - demo
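With an anchor like that defined on the first source, later entries can merge it and override only what differs; a rough sketch following the linked example (the component names here are illustrative):

```yaml
sources:
  - &library
    component: "weather"
    source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
    version: "main"
    targets:
      - "components/terraform/{{ .Component }}/{{.Version}}"
  # merge the anchor and override only the component name
  - <<: *library
    component: "ipinfo"
```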

could you pass an ENV variable as a value inside of the yaml?

like
excluded_paths:
  - "**/production/**"
  - "${EXCLUDE_PATHS}"
  - "${EXCLUDE_PATH_2}"
  - "${EXCLUDE_PATH_3}"

currently not, env vars in vendor.yaml are not supported
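Not natively; one workaround is to render the file before vendoring. A minimal sketch, assuming envsubst from GNU gettext is available; vendor.yaml.tmpl and EXCLUDE_PATHS are illustrative names:

```shell
# Render environment variables into vendor.yaml before running `atmos vendor pull`.
export EXCLUDE_PATHS='**/staging/**'

# The single-quoted heredoc delimiter keeps ${EXCLUDE_PATHS} literal in the template
cat > vendor.yaml.tmpl <<'EOF'
excluded_paths:
  - "**/production/**"
  - "${EXCLUDE_PATHS}"
EOF

# envsubst replaces ${VAR} references with their environment values
envsubst < vendor.yaml.tmpl > vendor.yaml
cat vendor.yaml
```

After this step, atmos vendor pull would read the rendered vendor.yaml as usual.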

2024-10-11

what’s the best way to distinguish between custom components and vendored components from cloudposse?

I was thinking
Option 1) a unique namespace via a separate dir
e.g.
# cloudposse components
components/terraform
# internal components
components/terraform/internal
Option 2) a unique namespace via a prefix
# upstream component
components/terraform/ecr
# internal component
components/terraform/internal-ecr
Option 3) Unique key in component.yaml, and enforce this file in all components
Option 4) Vendor internal components from an internal repo
This way the source will contain the other org instead of cloudposse, so that can be used to distinguish

We’re actually looking into something similar to this right now related to our refarch stack configs

For stack configs we’ve settled on stacks/vendor/cloudposse

For components, I would maybe suggest
components/vendor/cloudposse

The alternative is a top-level folder like vendor/cloudposse

which could contain stacks and components.

That’s for stack configs; what about terraform components? Or do you think it would be for both stack configs and terraform components?

What our team does is this:
Vendor: components/terraform/vendor/{provider, e.g. cloudposse}
Internal: components/terraform/{cloudProvider, e.g. AWS}/{componentName, e.g. VPC}

That’s for stack configs, what about terraform components?
https://sweetops.slack.com/archives/C031919U8A0/p1728683594309549?thread_ts=1728675666.706109&cid=C031919U8A0
https://sweetops.slack.com/archives/C031919U8A0/p1728683609704999?thread_ts=1728675666.706109&cid=C031919U8A0
For components, I would maybe suggest
components/vendor/cloudposse
The alternative is a top-level folder like vendor/cloudposse

@burnzy is this working for you? https://github.com/cloudposse/terraform-yaml-stack-config/pull/95 @Jeremy G (Cloud Posse) is looking into something similar and we’ll likely get this merged. Sorry it fell through the cracks.
what
Simple change to add support for GCS backends
why
Allows GCP users (users with gcs backends) to make use of this remote-state module for sharing data between components.
references
• https://developer.hashicorp.com/terraform/language/settings/backends/gcs • https://atmos.tools/core-concepts/share-data/#using-remote-state

Interesting, I was trying last week to use remote-state between two components with a GCP backend and couldn’t make it work; it forced me to rewrite my module to switch from remote state to using outputs and atmos templating
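For reference, the outputs-plus-templating approach mentioned here uses the atmos.Component template function in stack config; a rough sketch (the component and output names are illustrative, and templating must be enabled for stack manifests in atmos.yaml):

```yaml
components:
  terraform:
    app:
      vars:
        # read an output of the `network` component in the same stack
        vpc_id: '{{ (atmos.Component "network" .stack).outputs.vpc_id }}'
```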

This should work now as we merged the PR

Version 1.8.0 should give you full support for using remote-state with any backend Terraform or OpenTofu supports.
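For anyone landing here, invoking the remote-state module typically looks something like this (module source per the terraform-yaml-stack-config registry name; the component name and module label are illustrative):

```hcl
module "vpc_remote_state" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.8.0"

  # Atmos component whose state should be read
  component = "vpc"

  context = module.this.context
}

# outputs of the `vpc` component are then available as:
# module.vpc_remote_state.outputs
```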

@Jeremy G (Cloud Posse) were you also able to resolve that deprecated warning? I think you maybe mentioned you had an idea

Yes, 1.8.0 should fix deprecated warnings from remote-state. Or, more accurately, if you are receiving deprecation warnings from remote-state, they can now be resolved by updating your backend/remote state backend configuration to match the version of Terraform or Tofu you are using. For example, change
terraform:
  backend:
    s3:
      bucket: my-tfstate-bucket
      dynamodb_table: my-tfstate-lock-table
      role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-role
  remote_state_backend:
    s3:
      role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-read-only-role
to
terraform:
  backend:
    s3:
      bucket: my-tfstate-bucket
      dynamodb_table: my-tfstate-lock-table
      assume_role:
        role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-role
  remote_state_backend:
    s3:
      assume_role:
        role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-read-only-role




@Gabriela Campana (Cloud Posse) please create a task to fix this in our infra-test and infra-live repos

@Jeremy G (Cloud Posse) can you please suggest the task title?

@Gabriela Campana (Cloud Posse) “Update backend configurations to avoid deprecation warnings”, and add a note in the task that says all remote-state modules must be updated to v1.8.0


is it possible to vendor pull from a different repo?

I have my vendor.yaml

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  imports: []
  sources:
    - source: "https://x-access-token:${{ secrets.TOKEN }}@github.com/PEPE/pepe-iac.git"
      # version: "main"
      targets:
        - "./"
      included_paths:
        - "**/components/**"
        - "**/*.md"
        - "**/stacks/**"
      excluded_paths:
        - "**/production/**"

I tried git://, then tried to use ssh git, and this is running from a reusable action

I tried this locally and it works but in my local I have a ssh-config

if I run git clone using that url it clones just fine

I’m hitting this error on the go-git library now
// https://github.com/go-git/go-git/blob/master/worktree.go
func (w *Worktree) getModuleStatus() (Status, error) {
// ...
if w.r.ModulesPath == "" {
return nil, ErrModuleNotInitialized
}
if !filepath.IsAbs(w.r.ModulesPath) {
return nil, errors.New("relative paths require a module with a pwd")
}
// ...
}
```go
package git

import (
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"runtime"
	"strings"

	"github.com/go-git/go-billy/v5"
	"github.com/go-git/go-billy/v5/util"
	"github.com/go-git/go-git/v5/config"
	"github.com/go-git/go-git/v5/plumbing"
	"github.com/go-git/go-git/v5/plumbing/filemode"
	"github.com/go-git/go-git/v5/plumbing/format/gitignore"
	"github.com/go-git/go-git/v5/plumbing/format/index"
	"github.com/go-git/go-git/v5/plumbing/object"
	"github.com/go-git/go-git/v5/plumbing/storer"
	"github.com/go-git/go-git/v5/utils/ioutil"
	"github.com/go-git/go-git/v5/utils/merkletrie"
	"github.com/go-git/go-git/v5/utils/sync"
)

var (
	ErrWorktreeNotClean                = errors.New("worktree is not clean")
	ErrSubmoduleNotFound               = errors.New("submodule not found")
	ErrUnstagedChanges                 = errors.New("worktree contains unstaged changes")
	ErrGitModulesSymlink               = errors.New(gitmodulesFile + " is a symlink")
	ErrNonFastForwardUpdate            = errors.New("non-fast-forward update")
	ErrRestoreWorktreeOnlyNotSupported = errors.New("worktree only is not supported")
)

// Worktree represents a git worktree.
type Worktree struct {
	// Filesystem underlying filesystem.
	Filesystem billy.Filesystem
	// External excludes not found in the repository .gitignore
	Excludes []gitignore.Pattern

	r *Repository
}

// Pull incorporates changes from a remote repository into the current branch.
// Pull only supports merges that can be resolved as a fast-forward.
func (w *Worktree) Pull(o *PullOptions) error {
	return w.PullContext(context.Background(), o)
}

// PullContext incorporates changes from a remote repository into the current
// branch. Returns nil if the operation is successful, NoErrAlreadyUpToDate if
// there are no changes to be fetched, or an error.
func (w *Worktree) PullContext(ctx context.Context, o *PullOptions) error {
	if err := o.Validate(); err != nil {
		return err
	}

	remote, err := w.r.Remote(o.RemoteName)
	if err != nil {
		return err
	}

	fetchHead, err := remote.fetch(ctx, &FetchOptions{
		RemoteName:      o.RemoteName,
		RemoteURL:       o.RemoteURL,
		Depth:           o.Depth,
		Auth:            o.Auth,
		Progress:        o.Progress,
		Force:           o.Force,
		InsecureSkipTLS: o.InsecureSkipTLS,
		CABundle:        o.CABundle,
		ProxyOptions:    o.ProxyOptions,
	})

	updated := true
	if err == NoErrAlreadyUpToDate {
		updated = false
	} else if err != nil {
		return err
	}

	ref, err := storer.ResolveReference(fetchHead, o.ReferenceName)
	if err != nil {
		return err
	}

	head, err := w.r.Head()
	if err == nil {
		// if we don't have a shallows list, just ignore it
		shallowList, _ := w.r.Storer.Shallow()
		var earliestShallow *plumbing.Hash
		if len(shallowList) > 0 {
			earliestShallow = &shallowList[0]
		}

		headAheadOfRef, err := isFastForward(w.r.Storer, ref.Hash(), head.Hash(), earliestShallow)
		if err != nil {
			return err
		}

		if !updated && headAheadOfRef {
			return NoErrAlreadyUpToDate
		}

		ff, err := isFastForward(w.r.Storer, head.Hash(), ref.Hash(), earliestShallow)
		if err != nil {
			return err
		}

		if !ff {
			return ErrNonFastForwardUpdate
		}
	}

	if err != nil && err != plumbing.ErrReferenceNotFound {
		return err
	}

	if err := w.updateHEAD(ref.Hash()); err != nil {
		return err
	}

	if err := w.Reset(&ResetOptions{
		Mode:   MergeReset,
		Commit: ref.Hash(),
	}); err != nil {
		return err
	}

	if o.RecurseSubmodules != NoRecurseSubmodules {
		return w.updateSubmodules(&SubmoduleUpdateOptions{
			RecurseSubmodules: o.RecurseSubmodules,
			Auth:              o.Auth,
		})
	}

	return nil
}

// … (Checkout, createBranch, setHEADToBranch, ResetSparsely, … — rest of the
// worktree.go paste truncated)
```

I do not know if this is because vendor sources over http are not compatible with git?

@Andriy Knysh (Cloud Posse) this is when I was trying to debug the pwd
error
2024-10-12

Improve error stack trace. Add --stack flag to atmos describe affected command. Improve atmos.Component template function @aknysh (#714)
what
• Improve error stack trace
• Add --stack
flag to atmos describe affected
command
• Improve atmos.Component
template function
why
• On any error in the CLI, print Go
stack trace only when Atmos log level is Trace
- improve user experience
• The --stack
flag in the atmos describe affected
command allows filtering the results by the specific stack only:
atmos describe affected --stack plat-ue2-prod
Affected components and stacks:
[
  {
    "component": "vpc",
    "component_type": "terraform",
    "component_path": "examples/quick-start-advanced/components/terraform/vpc",
    "stack": "plat-ue2-prod",
    "stack_slug": "plat-ue2-prod-vpc",
    "affected": "stack.vars"
  }
]
• In the atmos.Component
template function, don’t execute terraform output
on disabled and abstract components. The disabled components (when enabled: false
) don’t produce any terraform outputs. The abstract components are not meant to be provisioned (they are just blueprints for other components with default values), and they don’t have any outputs.
Summary by CodeRabbit
Release Notes
• New Features
• Added a --stack
flag to the atmos describe affected
command for filtering results by stack.
• Enhanced error handling across various commands to include configuration context in error logs.
• Documentation
• Updated documentation for the atmos describe affected
command to reflect the new --stack
flag.
• Revised “Atlantis Integration” documentation to highlight support for Terraform Pull Request Automation.
• Dependency Updates
• Upgraded several dependencies, including Atmos version from 1.88.0
to 1.89.0
and Terraform version from 1.9.5
to 1.9.7
.
Correct outdated ‘myapp’ references in simple tutorial @jasonwashburn (#707)
what
Corrects several (assuming) outdated references to a ‘myapp’ component rather than the correct ‘station’ component in the simple tutorial.
Also corrects the provided example repository hyperlink to refer to the correct weather example ‘quick-start-simple’ used in the tutorial rather than ‘demo-stacks’
why
Appears that the ‘myapp’ references were likely just missed during a refactor of the simple tutorial. Fixing them alleviates confusion/friction for new users following the tutorial. Attempting to use the examples/references as-is results in various errors as there is no ‘myapp’ component defined.
references
Also closes #664
Summary by CodeRabbit
• New Features
• Renamed the component from myapp
to station
in the configuration.
• Updated provisioning commands in documentation to reflect the new component name.
• Documentation
• Revised “Deploy Everything” document to replace myapp
with station
.
• Enhanced “Simple Atmos Tutorial” with updated example link and clarified instructional content.
Fix incorrect terraform flag in simple tutorial workflow example @jasonwashburn (#709)
what
Fixes inconsistencies in the simple-tutorial extra credit section on workflows that prevent successful execution when following along.
why
As written, the tutorial results in two errors, one due to an incorrect terraform flag, and one due to a mismatch between the defined workflow name, and the provided command in the tutorial to execute it.
references
Closes #708
Fix typos @NathanBaulch (#703)
Just thought I’d contribute some typo fixes that I stumbled on. Nothing controversial (hopefully).
Use the following command to get a quick summary of the specific corrections made:
git diff HEAD^! --word-diff-regex='\w+' -U0 \
  | grep -E '\[-.*-\]\{\+.*\+\}' \
  | sed -r 's/.*\[-(.*)-\]\{\+(.*)\+\}.*/\1 \2/' \
  | sort | uniq -c | sort -n
FWIW, the top typos are:
• usign
• accross
• overriden
• propogate
• verions
• combinatino
• compoenents
• conffig
• conventionss
• defind
Fix version command in simple tutorial @jasonwashburn (#705)
what
• Corrects incorrect atmos --version
command to atmos version
in simple tutorial docs.
why
• Documentation is incorrect.
references
closes #704
docs: add installation guides for asdf and Mise @mtweeman (#699)
what
Docs for installing Atmos via asdf or Mise
why
As of recent, Atmos can be installed by asdf and Mise. Installation guides are not yet included on the website. This PR aims to fill this gap.
references
Use Latest Atmos GitHub Workflows Examples with RemoteFile Component @milldr (#695)
what
• Created the RemoteFile component
• Replace all hard-coded files with RemoteFile call
why
• These workflows quickly get out of date. We already have these publicly available on cloudposse/docs
, so we should fetch the latest pattern instead
references
• SweetOps slack thread
Update Documentation and Comments for Atmos Setup Action @RoseSecurity (#692)
what
• Updates comment to reflect action defaults
• Fixes atmos-version
input
why
• Fixes input variables to match acceptable action variables
references


How long does it take to get the Linux x64 package?

@Andriy Knysh (Cloud Posse)

@jose.amengual please use this release


it’s already in the deb
and rpm
Linux packages
https://github.com/cloudposse/packages/actions/runs/11320822457

(release 1.89.0 was not published to the Linux packages b/c of some issues with the GitHub action)

awesome, thanks
2024-10-13

Always template vendor source and targets @mss (#712)
what
This change improves the templating within vendor manifests slightly: It officially adds support for the Component
field to both source
and targets
.
These features were already supported but mostly undocumented and hidden behind an implicit switch: The templating was only triggered if the Version
field was set. Which was also the only officially supported field.
In reality though all fields from the current source definition were available but in the state they were currently in, depending on the order of the templates.
With this change
• It is clearly documented which fields are supported in which YAML values.
• Only the two static fields are supported.
• The values are always templated.
Theoretically this could be a breaking change if somebody used no version
field but curly braces in their paths. Or relied on the half-populated source data structure to refer to unsupported fields. If xkcd 1172 applies it should be possible to amend this logic to add more officially supported fields.
why
I was looking for a way to restructure our vendoring like the examples in examples/demo-vendoring/vendor.yaml
to avoid copy and paste errors when we release new component versions.
I actually only found out about that demo when I was done writing this code since the templating was never triggered without a version
field and the documentation didn’t mention it.
references
• https://github.com/cloudposse/atmos/blob/v1.88.1/examples/demo-vendoring/vendor.yaml • https://atmos.tools/core-concepts/vendor/vendor-manifest/#vendoring-manifest
Summary by CodeRabbit
• New Features
• Enhanced vendoring configuration with support for dynamic component referencing in vendor.yaml
.
• Improved handling of source
and targets
attributes for better organization and flexibility.
• Documentation
• Updated documentation for vendoring configuration, including clearer instructions and examples for managing multiple vendor manifests.
• Added explanations for included_paths
and excluded_paths
attributes to improve understanding.
Fix a reference to an undefined output in GitHub Actions @suzuki-shunsuke (#718)
what
- Fix a reference to an undefined output in GitHub Actions.
The step config
is not found.
This bug was added in #612 .
- Use a version variable for easier updates.
env:
  TERRAFORM_VERSION: "1.9.7"
steps:
  - uses: hashicorp/setup-terraform@v3
    with:
      terraform_version: ${{ env.TERRAFORM_VERSION }}
- Stop installing terraform wrapper
By default hashicorp/setup-terraform
installs a wrapper of Terraform to output Terraform stdout and stderr as step’s outputs.
But we don’t need them, so we shouldn’t install the wrapper.
https://github.com/hashicorp/setup-terraform
- uses: hashicorp/setup-terraform@v3
  with:
    terraform_wrapper: false
why
references
Summary by CodeRabbit
• Chores
• Updated workflow configurations for improved maintainability.
• Introduced a new environment variable TERRAFORM_VERSION
for version management.
ci: install Terraform to fix CI failure that Terraform is not found @suzuki-shunsuke (#717)
what
Install Terraform using hashicorp/setup-terraform
action in CI.
why
CI failed because Terraform wasn’t found.
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449045566
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449046010
Run cd examples/demo-context
all stacks validated successfully
exec: "terraform": executable file not found in $PATH
This is because ubuntu-latest was updated to ubuntu-24.04 and Terraform was removed from it.
https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2404-Readme.md
On the other hand, Ubuntu 22.04 has Terraform.
https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md
references
Summary by CodeRabbit
• Chores
• Enhanced workflow for testing and linting by integrating Terraform setup in multiple job sections.
• Updated the lint job to dynamically retrieve the Terraform version for improved flexibility.

@Malte would you mind reviewing this? https://github.com/cloudposse/atmos/pull/723
It addresses templating in vendor files. Let me know if something else would be helpful to add.
what
• Document how to vendor from private GitHub repos • Document template syntax for vendoring
why
• The syntax is a bit elusive, and it’s a common requirement

A bit late but: LGTM! And I just facepalmed because in hindsight the missing git:: prefix is obvious

Just for completeness’ sake: We actually use private repos as well (BitBucket, but the idea is similar) and our Jenkins (don’t ask…) uses a small Git credential helper which pulls the token from the env and outputs the required format (cf. https://git-scm.com/docs/gitcredentials)
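A sketch of that kind of helper, written as a shell function for illustration; in practice it is a standalone script that git invokes with "get", "store" or "erase", and for "get" it must print key=value pairs on stdout. GIT_TOKEN is an assumed variable name, and "x-access-token" is the username GitHub expects for token-based HTTPS auth:

```shell
# Sketch of the "get" action of a git credential helper: emit credentials
# from the environment in the key=value format git expects.
credential_helper_get() {
  printf 'username=x-access-token\n'
  printf 'password=%s\n' "${GIT_TOKEN}"
}

GIT_TOKEN='example-token'
credential_helper_get
```

Wired up with something like git config credential.helper '/path/to/helper.sh'; note this helps plain git HTTPS clones, while go-git based tooling may not consult credential helpers at all.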

I got a brand new error with version 1.90.0 with my vendor file:
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
ls -l
shell: /usr/bin/bash -e {0}
env:
ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
template: source-0:1: function "secrets" not defined

I think it’s because of this:
sources:
  - source: "https://x-access-token:${{ secrets.TOKEN }}@github.com/pepe-org/pepe-iac.git"
    # version: "main"
    targets:
I can’t template inside the vendor file

I think it’s related to https://github.com/cloudposse/atmos/pull/712
what
This change improves the templating within vendor manifests slightly: It officially adds support for the Component field to both source and targets.
These features were already supported but mostly undocumented and hidden behind an implicit switch: The templating was only triggered if the Version field was set, which was also the only officially supported field.
In reality, though, all fields from the current source definition were available, but in whatever state they were currently in, depending on the order of the templates.
With this change
• It is clearly documented which fields are supported in which YAML values. • Only the two static fields are supported. • The values are always templated.
Theoretically this could be a breaking change if somebody used no version field but curly braces in their paths, or relied on the half-populated source data structure to refer to unsupported fields. If xkcd 1172 applies, it should be possible to amend this logic to add more officially supported fields.
why
I was looking for a way to restructure our vendoring like the examples in examples/demo-vendoring/vendor.yaml to avoid copy-and-paste errors when we release new component versions.
I actually only found out about that demo when I was done writing this code, since the templating was never triggered without a version field and the documentation didn't mention it.
references
• https://github.com/cloudposse/atmos/blob/v1.88.1/examples/demo-vendoring/vendor.yaml • https://atmos.tools/core-concepts/vendor/vendor-manifest/#vendoring-manifest
Summary by CodeRabbit
• New Features
• Enhanced vendoring configuration with support for dynamic component referencing in vendor.yaml.
• Improved handling of source and targets attributes for better organization and flexibility.
• Documentation
• Updated documentation for vendoring configuration, including clearer instructions and examples for managing multiple vendor manifests.
• Added explanations for included_paths and excluded_paths attributes to improve understanding.
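A hypothetical sketch of what this PR enables in a vendoring manifest (the component name, version, and paths are made up; the shape follows the demo-vendoring example linked in the references):

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendoring
spec:
  sources:
    # {{ .Component }} and {{ .Version }} are templated into both
    # source and targets; per the PR, templating is now always applied
    - component: "vpc"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/{{ .Component }}?ref={{ .Version }}"
      version: "1.398.0"
      targets:
        - "components/terraform/{{ .Component }}"
```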

Cc @Andriy Knysh (Cloud Posse)

@jose.amengual can you open an issue so I can tag the other author

@jose.amengual what is this token {{ secrets.TOKEN }}?

it's a GH Actions token, not an Atmos Go template token?

this is a known issue with using Atmos templates with other templates intended for external systems (e.g. GH actions, Datadog, etc.)

since that PR enabled processing templates in all cases (even if the version is not specified, as it was before), now Atmos processes the templates every time, and breaks on the templates for the external systems

we have a doc about that:


you have to do
{{`{{ ... }}`}}

there is no way around this
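For example, a sketch of that escaping in a vendor manifest (the repo name is made up): the {{`...`}} construct is a Go template action containing a raw string, so Atmos's template engine emits the inner text verbatim:

```yaml
sources:
  # Atmos renders {{`{{ secrets.TOKEN }}`}} to the literal {{ secrets.TOKEN }},
  # leaving it intact for the external system (e.g. GH Actions) to handle
  - source: "https://x-access-token:${{`{{ secrets.TOKEN }}`}}@github.com/example-org/example-repo.git"
```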

let me know if that syntax works for you

OHHHHHHHHH let me try that

ok, that seems to work but now I’m on this hell
Run #cd atmos/
#cd atmos/
# git clone git@github.com:pepe-org/pepe-iac-iac.git
# git clone ***github.com/pepe-org/pepe-iac-iac.git
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
ls -l
shell: /usr/bin/bash -e {0}
env:
ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
Pulling sources from 'https://x-access-token:${{ secrets.TOKEN }} @github.com/pepe-org/pepe-iac-iac.git' into '/home/runner/work/pepe-iac/pepe-iac/atmos/atmos'
relative paths require a module with a pwd
Error: Process completed with exit code 1.

if I switch to a git URL (with ssh) this problem goes away, and then I get permission denied because I don't have a key to clone that other repo in the org, which is expected

I was trying to use a PAT and change the url to https:// to pull the git repo

note: the git clone command with https in the script works fine, so I know my PAT works

you have a space after {{ secrets.TOKEN }} and before @

ohhh no, if that is the problem I quit

Run #cd atmos/
#cd atmos/
# git clone git@github.com:pepe-org/pepe-iac.git
# git clone ***github.com/pepe-org/pepe-iac.git
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
ls -l
shell: /usr/bin/bash -e {0}
env:
ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
Pulling sources from 'https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git' into '/home/runner/work/pepe-iac/pepe-iac/atmos/atmos'
relative paths require a module with a pwd
Error: Process completed with exit code 1.

uff, that could have been embarrassing

so I changed the token to
- source: "https://x-access-token:12345@github.com/pepe-org/pepe-iac.git"
and I got bad response code: 404, which makes me believe it is trying to clone with the token that I'm passing

@jose.amengual sorry, I don’t understand the whole use-case, so let’s review it step by step

before the Atmos 1.90.0 release, was this working?
https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

I tagged you on the other thread I have, if you prefer that one that is cleaner

and the template {{secrets.TOKEN}} was evaluated by the GH action?

I tagged you on the other thread I have, if you prefer that one that is cleaner

this is the same issue

ok

The goal: clone the pepe-iac repo, which is in the same org (pepe-org), from another repo (the app repo)

the rendering of the token was definitely a problem, and now that is resolved thanks to your suggestion

https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git -
was it working before?

no, it never did

never? even before Atmos release 1.90.0?

( using atmos)

correct, it did not work before because I was not using the {{`{{ … }}`}} format

i don’t understand

you mentioned it was working before the Atmos 1.90.0 release

I thought it did, because I was getting relative paths require a module with a pwd and I thought I had an ATMOS_BASE_PATH problem

the error threw me off in another direction, thinking the token was not an issue anymore

I broke something?

Then I upgraded to 1.90.0 and the error changed

let’s review this:

source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

in vendor.yaml

this is what I have in vendor.yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
name: iac-vendoring
description: Atmos vendoring manifest for Atmos-iac repo
spec:
imports: []
sources:
- source: "<https://x-access-token>:${{`{{secrets.TOKEN}}`}}@github.com/pepe-org/pepe-iac.git/"
#version: "main"
targets:
- "atmos"
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
- "**/qa/**"
- "**/development/**"
- "**/staging/**"
- "**/management/**"
- "**/venueseus/**"

how does your GH action work? It needs to evaluate {{secrets.TOKEN}} and replace it with the value before atmos vendor pull gets to it

let's focus on source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

ok

what I said before about {{`{{secrets.TOKEN}}`}} is not your use case

so
how does your GH action work? It needs to evaluate {{secrets.TOKEN}} and replace it with the value before `atmos vendor pull` gets to it

ohhh…

you should use
source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

and make sure the token is replaced in the GH action before atmos vendor pull is executed

that’s all that you need to check

this {{secrets.TOKEN}} is not an Atmos Go template; it's not related to Atmos at all

the GH action needs to evaluate it first

the token is available to the action as a secret

if I run on the action
git clone https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
before the atmos command, I can clone the repo (I'm just clarifying that the token works and is present)

I will change the vendor.yaml and try again

no no

the token ${{secrets.TOKEN}} will be replaced by GH Actions only if it's in a GH workflow

GH Actions doesn't know anything about that token in an external file like vendor.yaml

I see what you mean

Is the token also available as an environment variable? Then you should (now) be able to do something like source: https://x-access-token:{{env "SECRET_TOKEN"}}@github.com/pepe-org/pepe-iac.git (note that there is no $ in this case, i.e. that will be evaluated by Atmos)

yes correct, thanks @Malte

ok, so within the GH workflow, I need a way to use the value of ${{secrets.TOKEN}}

my gh action is like this :
- name: Vendoring stack and components
working-directory: ${{ github.workspace }}
env:
GITHUB_TOKEN: ${{ secrets.TOKEN }}
run: |
cd atmos/
# git clone git@github.com:pepe-org/pepe-iac.git
git clone https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
ls -l
ATMOS_LOGS_LEVEL=Trace atmos vendor pull

in your GH action workflow file, you create an env variable from the secret, and then use the env variable in the vendor.yaml file
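A minimal sketch of that pattern (the step and secret names are made up):

```yaml
# Workflow step: GH Actions evaluates ${{ secrets.TOKEN }} and exposes it
# to the atmos process as an environment variable
- name: Vendor
  env:
    GITHUB_TOKEN: ${{ secrets.TOKEN }}
  run: atmos vendor pull

# vendor.yaml: Atmos's Go templating reads the env var at run time
# sources:
#   - source: 'https://x-access-token:{{env "GITHUB_TOKEN"}}@github.com/example-org/example-repo.git'
```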

@Malte i might have jumped the gun. It might not be related to your PR, but thanks for jumping in!

(not related to @Malte PR at all)

@Erik Osterman (Cloud Posse) no worries, I assumed it was one of those cases where I broke some other weird legit use case of curly braces .-)


instead of the secrets.TOKEN, you mean?

instead of ${{secrets.TOKEN}}, yes (i.e. drop the $ as well)

source: 'https://x-access-token:{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'


testing

ohhhh single brackets…..

ok I did this:
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
name: iac-vendoring
description: Atmos vendoring manifest for Atmos-iac repo
spec:
imports: []
sources:
- source: 'https://x-access-token:{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'
#version: "main"
targets:
- "atmos"
included_paths:
- "**/components/**"


I got
Processing vendor config file 'vendor.yaml'
Pulling sources from '***github.com/pepe-org/pepe-iac.git' into 'atmos'
bad response code: 404

those stars, was the token replaced correctly?

GH Actions obscures tokens and such

I will add the token to the vendor file just to test it (without using the ENV variable)

I wonder if go-getter supports this at all…

ok this is what I got :
Run cd atmos/
cd atmos/
# git clone git@github.com:pepe-org/pepe-iac.git
git clone ***github.com/pepe-org/pepe-iac.git
ls -l
ATMOS_LOGS_LEVEL=Trace atmos vendor pull
shell: /usr/bin/bash -e {0}
env:
ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac-service/pepe-iac-service/atmos/config
ATMOS_BASE_PATH: /home/runner/work/pepe-iac-service/pepe-iac-service/atmos
GITHUB_TOKEN: ***
Cloning into 'pepe-iac'...
total 32
-rw-r--r-- 1 runner docker 15178 Oct 14 20:51 README.md
drwxr-xr-x 10 runner docker 4096 Oct 14 20:51 pepe-iac
drwxr-xr-x 2 runner docker 4096 Oct 14 20:51 config
drwxr-xr-x 4 runner docker 4096 Oct 14 20:51 stacks
-rw-r--r-- 1 runner docker 760 Oct 14 20:51 vendor.yaml
Processing vendor config file 'vendor.yaml'
Pulling sources from '***github.com/pepe-org/pepe-iac.git' into 'atmos'
bad response code: 404

new token, I changed the secret in the repo; the git clone command uses the same token as
git clone https://x-access-token:${{secrets.TOKEN}}@github.com/...

and you can see it can clone

the vendor file has the same token but clear text now

Just to be sure: It doesn’t work with a cleartext password either? I quickly checked the go-getter source and it should work. Weird.

it does not

is it possible that the @ is being changed to something else?

I dug around a bit. Can you try this, please?
source: 'https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'

If that works, I don't understand why your plain git clone works, unless maybe GH has some magic in their workflow engine to fix up this common error

Ah, wait. Why do you use secrets.TOKEN and not secrets.GITHUB_TOKEN? https://docs.github.com/en/actions/security-for-github-actions/security-guides/automatic-token-authentication

this is a reusable action, I’m passing the secret as an input

Ok, I must admit I have no idea why that isn't working. go-getter should call more or less exactly the git clone command you execute. The only difference is that it will do a
git clone -- https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
(note the double dash) but I doubt that this makes a difference…

I changed the git command to git clone -- just to try it, and it worked

so somehow with atmos this does not work

I even tried this:
git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"

git command works, atmos 404s

Is it possible to copy private GitHub repositories with parameters: token?

So just to reiterate, atmos uses go-getter, which is what terraform uses under the hood, so the syntax should be the same.

Was this tried as well?

Adding the git::

that worked!!!

so if you want to avoid having to template the secret in the vendor.yaml, you can do this:
git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"

then run atmos vendor pull

super

so we need to use git:: if we provide credentials

and actually, what you said above: if you execute the command git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/" first, then in vendor.yaml you don't need to specify any credentials at all
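Putting the two pieces together, a CI step could look roughly like this (an untested sketch; it assumes GITHUB_TOKEN is already exported to the step's environment):

```shell
# Rewrite every https://github.com/ URL to carry the token, so vendor.yaml
# can use plain git::https://github.com/... sources with no credentials in them
git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"

# Then vendoring needs no secret templated into the manifest:
# atmos vendor pull
```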

and this works too
source: 'git::https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'

but this does not
source: 'git::https://{{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git'

so the ENV is the way to go


yep, this git::https://{{secrets.TOKEN}}@github.com/xxxxxxxx/yyyyyyyyy.git does not work b/c the template is not an Atmos template, and the file is not a GH Actions manifest

@jose.amengual can you review https://github.com/cloudposse/atmos/pull/723
what
• Document how to vendor from private GitHub repos • Document template syntax for vendoring
why
• The syntax is a bit elusive, and it’s a common requirement


for those using GitHub and wanting to clone internal repos: you can do it without an org-level PAT by using GitHub App tokens that expire every hour, via actions/create-github-app-token
https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app

Thanks for sharing!

Also, here’s how we’re using that action with GHE environments
https://github.com/cloudposse/.github/blob/main/.github/workflows/shared-auto-release.yml
name: "Shared auto release"
on:
workflow_call:
inputs:
prerelease:
description: "Boolean indicating whether this release should be a prerelease"
required: false
default: false
type: string
publish:
description: "Whether to publish a new release immediately"
required: false
default: false
type: string
runs-on:
description: "Overrides job runs-on setting (json-encoded list)"
type: string
required: false
default: '["ubuntu-latest"]'
summary-enabled:
description: Enable github action summary.
required: false
default: true
type: boolean
outputs:
id:
description: The ID of the release that was created or updated.
value: ${{ jobs.release.outputs.id }}
name:
description: The name of the release
value: ${{ jobs.release.outputs.name }}
tag_name:
description: The name of the tag associated with the release.
value: ${{ jobs.release.outputs.tag_name }}
body:
description: The body of the drafted release.
value: ${{ jobs.release.outputs.body }}
html_url:
description: The URL users can navigate to in order to view the release
value: ${{ jobs.release.outputs.html_url }}
upload_url:
description: The URL for uploading assets to the release, which could be used by GitHub Actions for additional uses, for example the @actions/upload-release-asset GitHub Action.
value: ${{ jobs.release.outputs.upload_url }}
major_version:
description: The next major version number. For example, if the last tag or release was v1.2.3, the value would be v2.0.0.
value: ${{ jobs.release.outputs.major_version }}
minor_version:
description: The next minor version number. For example, if the last tag or release was v1.2.3, the value would be v1.3.0.
value: ${{ jobs.release.outputs.minor_version }}
patch_version:
description: The next patch version number. For example, if the last tag or release was v1.2.3, the value would be v1.2.4.
value: ${{ jobs.release.outputs.patch_version }}
resolved_version:
description: The next resolved version number, based on GitHub labels.
value: ${{ jobs.release.outputs.resolved_version }}
exists:
description: Tag exists so skip new release issue
value: ${{ jobs.release.outputs.exists }}
permissions: {}
jobs:
release:
runs-on: ${{ fromJSON(inputs.runs-on) }}
environment: release
outputs:
id: ${{ steps.drafter.outputs.id }}
name: ${{ steps.drafter.outputs.name }}
tag_name: ${{ steps.drafter.outputs.tag_name }}
body: ${{ steps.drafter.outputs.body }}
html_url: ${{ steps.drafter.outputs.html_url }}
upload_url: ${{ steps.drafter.outputs.upload_url }}
major_version: ${{ steps.drafter.outputs.major_version }}
minor_version: ${{ steps.drafter.outputs.minor_version }}
patch_version: ${{ steps.drafter.outputs.patch_version }}
resolved_version: ${{ steps.drafter.outputs.resolved_version }}
exists: ${{ steps.drafter.outputs.exists }}
steps:
- uses: actions/create-github-app-token@v1
id: github-app
with:
app-id: ${{ vars.BOT_GITHUB_APP_ID }}
private-key: ${{ secrets.BOT_GITHUB_APP_PRIVATE_KEY }}
- name: Context
id: context
uses: cloudposse/[email protected]
with:
query: .${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}
config: |-
true:
config: auto-release.yml
latest: true
false:
config: auto-release-hotfix.yml
latest: false
# Drafts your next Release notes as Pull Requests are merged into "main"
- uses: cloudposse/github-action-auto-release@v2
id: drafter
with:
token: ${{ steps.github-app.outputs.token }}
publish: ${{ inputs.publish }}
prerelease: ${{ inputs.prerelease }}
latest: ${{ steps.context.outputs.latest }}
summary-enabled: ${{ inputs.summary-enabled }}
config-name: ${{ steps.context.outputs.config }}

2024-10-14

Morning everyone. Is there a switch with atmos terraform plan to get it to -out to a readable file? I see it with straight terraform, but wasn’t sure how the command would look as part of an atmos command like terraform plan -s mystack-stack

you mean the show command?

yea but I'm unsure how to daisy-chain it in an atmos command against my s3 backend, a PEBKAC issue

my plan's longer than the vscode buffer

Can you share the command you are running? We use -out in all our github actions and it works fine

yea apologies atmos terraform plan ec2-instance -s platform-platformprod-instance

i didnt play with it much to see if I can tee out at the end

I was expecting to see -out in your command

yea im just going to mess around today and read the cli guide, wasnt sure if I can just add an -out at the end with atmos

this is nested in a Github Action that Erik mentioned, but here’s an example https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L218-L223
TLDR:
atmos terraform plan ${{ COMPONENT }} \
--stack ${{ STACK }} \
-out="${{ PLAN FILE }}" \
atmos terraform plan ${{ inputs.component }} \
--stack ${{ inputs.stack }} \
-out="${{ steps.vars.outputs.plan_file }}" \
-lock=false \
-input=false \
-no-color \

ooooh so thats how it gets the plan details into github? I honestly just took a different approach and increased my vscode buffers - i did get it to output without -out:\file\whatever, but it was in what looked like encrypted/plan format.

appreciate the multiple responses here

it was honestly something new to me, i hadnt run into a plan output that got cutoff in the console

storing plans in our github actions is entirely separate from running locally. In github actions, we store the plans with a combination of s3 and dynamodb, but locally we use native terraform - plans aren't stored locally unless you specify so

no no sorry im not using correct technical terms

i mean the console output of whats going to happen

thats all i was struggling with, half my planned changes were cut off in the console so i couldnt tell if some stuff was going to be destroyed or not

im interested in actions btw but I am on an internal GHES server that I would have to work through compliance reqs to expose.

that output file is a planfile for Terraform. This is a special Terraform file that you can then pass to Terraform again with terraform apply to apply a specific plan. But you don't have to create a plan file to apply terraform. You can always run terraform apply without a planfile, then manually accept the changes

yea as soon as i saw the garbled text im like oh thats not what im looking for

An internal GHES should be fine for the actions too! But if you want to validate a planfile you can then use terraform show to see what that planfile includes. For example
terraform plan -out=tfplan
terraform show -json tfplan

ohhhhh thank you

cool to know

and yea with GHES but its internal and not exposed yet

and you don't need atmos to use terraform show. You could generate the planfile with the atmos command, and then just use native TF with show


yup we use OIDC as well with our actions. Here’s a little bit about how that works to authenticate with AWS https://docs.cloudposse.com/layers/github-actions/github-oidc-with-aws/
This is a detailed guide on how to integrate GitHub OpenID Connect (OIDC) with AWS to facilitate secure and efficient authentication and authorization for GitHub Actions, without the need for permanent (static) AWS credentials, thereby enhancing security and simplifying access management. First we explain the concept of OIDC, illustrating its use with AWS, and then provide step-by-step instructions for setting up GitHub as an OIDC provider in AWS.
2024-10-15

Continuing my vendoring journey with Atmos, I would like to avoid having to do this:
included_paths:
- "**/components/**"
- "**/*.md"
- "**/stacks/**"
excluded_paths:
- "**/production/**"
- "**/qa/**"
- "**/development/**"
- "**/staging/**"
- "**/management/**"

the subdir regex does not work

I had to do that so I can vendor only the stacks/sandbox stack

the expression "**/stacks/sandbox/**" does not work

it would be nice if the ** could work at any level

@jose.amengual yes, in order to support **/stacks/sandbox/**, you have to also add **/stacks/** - this is a limitation of the Go lib that Atmos uses. But obviously you don't want to use that b/c you then have to exclude all other folders in excluded_paths. We'll have to review the lib

This is in our backlog - I think we can probably get to it pretty soon

feat: support for .yml and .yaml file extensions for component vendoring @RoseSecurity (#725)
what
• Support for .yml and .yaml when vendoring using component.yaml
why
• The tool is strict about needing component.yaml; the file ending for YAML files is a matter of preference, and both should be accepted.
testing
• make build
component.yml
❯ ./build/atmos vendor pull -c aurora-postgres-resources Pulling sources for the component ‘aurora-postgres-resources’ from ‘github.com/cloudposse/terraform-aws-components.git//modules/aurora-postgres-resources?ref=1.511.0’ into ‘/Users/infra/components/terraform/aurora-postgres-resources’
component.yaml
❯ ./build/atmos vendor pull -c aurora-postgres-resources Pulling sources for the component ‘aurora-postgres-resources’ from ‘github.com/cloudposse/terraform-aws-components.git//modules/aurora-postgres-resources?ref=1.511.0’ into ‘/Users/infra/components/terraform/aurora-postgres-resources’
Missing both
❯ ./build/atmos vendor pull -c aurora-postgres-resources component vendoring config file does not exist in the ‘/Users/infra/components/terraform/aurora-postgres-resources’ folder
references
• Closes the following issue
Summary by CodeRabbit
• New Features
• Enhanced file existence checks for component configuration, now supporting both .yaml and .yml file formats.
• Refactor
• Streamlined variable declarations for improved readability without changing logic.
Add the guide to install atmos using aqua @suzuki-shunsuke (#720)
what
Add the guide to install atmos using aqua.
why
aqua is a CLI Version Manager written in Go.
aqua supports various tools including atmos, Terraform, Helm, Helmfile.
Confirmation
I have launched the webserver on my laptop according to the guide.
references
Summary by CodeRabbit
• New Features
• Introduced a new installation method for Atmos using the aqua CLI version manager.
• Added a dedicated tab in the installation guide for aqua, including instructions for setup and usage.
• Documentation
• Updated the “Install Atmos” document to enhance user guidance on installation options.

Improve error handling @haitham911 (#726)
what
• Improve error handling; check log level Trace for detailed trace information
why
• Print detailed errors only when the log level is Trace
Document vendoring from private git repos @osterman (#723)
what
• Document how to vendor from private GitHub repos • Document template syntax for vendoring
why
• The syntax is a bit elusive, and it’s a common requirement
2024-10-16

When do you think we can get https://github.com/cloudposse/github-action-atmos-affected-stacks updated to use atmos 1.92.0? Are you guys ok if I push a PR?
A composite workflow that runs the atmos describe affected command

yes, please open a PR, thank you
A composite workflow that runs the atmos describe affected command

what
• Upgrade default version to 1.92.0 • Add new --stack option supported on >1.90.0
why
• To allow filtering changes by stacks
references

and if you have time : https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/62/ and https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/90 that will allow people to use Azure

@Igor Rodionov

I think Igor was waiting for a review

this ones Erik

Ah yea, this just seemed untenable

- name: Retrieve Plan (Azure)
  if: ${{ env.ACTIONS_ENABLED == 'true' &&
    steps.config.outputs.plan-repository-type != '' &&
    steps.config.outputs.plan-repository-type != 'null' &&
    steps.config.outputs.blob-account-name != '' &&
    steps.config.outputs.blob-account-name != 'null' &&
    steps.config.outputs.blob-container-name != '' &&
    steps.config.outputs.blob-container-name != 'null' &&
    steps.config.outputs.metadata-repository-type != '' &&
    steps.config.outputs.metadata-repository-type != 'null' &&
    steps.config.outputs.cosmos-container-name != '' &&
    steps.config.outputs.cosmos-container-name != 'null' &&
    steps.config.outputs.cosmos-database-name != '' &&
    steps.config.outputs.cosmos-database-name != 'null' &&
    steps.config.outputs.cosmos-endpoint != '' &&
    steps.config.outputs.cosmos-endpoint != 'null' }}
  uses: cloudposse/github-action-terraform-plan-storage@v1

We can’t keep that pattern. While it addresses the current problem it doesn’t scale code wise

if you can comment on the PR and add your thoughts, I could work on it if Igor does not have the time


Is there a way to always terraform apply all components of a stack?

like atmos terraform apply -all -s mystack

currently Atmos does not have that native command (we will add it).
You can do this using Atmos workflows (if you know all the components in the stacks) - this is static config

or you can create a custom Atmos command - this is dynamic. You can call atmos describe stacks --stack

or this :
atmos describe stacks -s sandbox --format=json|jq -r '.["sandbox"].components.terraform | keys[]'

and then use a shell script and jq to loop over all components
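A rough sketch of that loop (untested; assumes atmos and jq are on the PATH, and the stack name is an example):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Apply every Terraform component in a stack, one by one.
# The stack name and the -auto-approve flag are illustrative choices.
apply_all() {
  local stack="$1"
  atmos describe stacks -s "$stack" --format=json \
    | jq -r --arg s "$stack" '.[$s].components.terraform | keys[]' \
    | while read -r component; do
        atmos terraform apply "$component" -s "$stack" -auto-approve
      done
}

# Usage (not run here): apply_all sandbox
```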

yes, I will use that for now

@jose.amengual add that to a custom command

good idea

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
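A hedged sketch of such a custom command in atmos.yaml (the command name and the embedded shell are made up, following the custom-commands pattern):

```yaml
commands:
  - name: terraform-apply-all
    description: Apply all Terraform components in a stack
    arguments:
      - name: stack
        description: Name of the stack
    steps:
      # Hypothetical: list the components via describe stacks + jq, apply each
      - >
        atmos describe stacks -s {{ .Arguments.stack }} --format=json |
        jq -r '.["{{ .Arguments.stack }}"].components.terraform | keys[]' |
        xargs -I% atmos terraform apply % -s {{ .Arguments.stack }} -auto-approve
```

which would then be invoked as something like atmos terraform-apply-all sandbox.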
2024-10-17

I have an idea: the plan and apply actions support grabbing the config from the atmos.yaml .integrations.github.gitops.* section, and since .integrations is a free-form map, I was wondering if we could add an option to pass a scope to .integrations so that we can do something like:
github:
  sandbox:
    gitops:
      role:
        plan: sandboxrole
  gitops:
    role:
      plan: generalrole
by adding a new input to the action, something like this:
atmos-integrations-scope:
  description: The scope for the integrations config in the atmos.yaml
  required: false
  default: ".integrations.github.gitops"
so we can allow a more flexible and backwards-compatible solution for people who need integrations per stack

@Andriy Knysh (Cloud Posse)

So I don't know that we would add something like atmos-integrations-scope, and we should probably move role assumption like this out of the action itself, as there are just too many ways it can work.

However, using the free-form map idea should still work for you, @jose.amengual

Then you can use this action to retrieve the roles https://github.com/cloudposse/github-action-atmos-get-setting
A GitHub Action to extract settings from atmos metadata

from the integrations section

the role, was just an example, for me the problem is: I have a bucket per account where I want to store the plans with the storage action

so I can’t use one github integration setting for all accounts

Integrations can be extended in stacks


…following inheritance

Pretty sure at least..


but the github actions do not do that, they look at the atmos.config.integrations

I think we just need to change this action slightly.


So today, it does this, like. you said:
- name: config
  shell: bash
  id: config
  run: |-
    echo "opentofu-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["opentofu-version"]')" >> $GITHUB_OUTPUT
    echo "terraform-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["terraform-version"]')" >> $GITHUB_OUTPUT
    echo "enable-infracost=$(atmos describe config -f json | jq -r '.integrations.github.gitops["infracost-enabled"]')" >> $GITHUB_OUTPUT
    echo "aws-region=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].region')" >> $GITHUB_OUTPUT
    echo "terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT
    echo "terraform-state-table=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].table')" >> $GITHUB_OUTPUT
    echo "terraform-state-bucket=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].bucket')" >> $GITHUB_OUTPUT
    echo "terraform-plan-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops.role.plan')" >> $GITHUB_OUTPUT

correct

However, these are already required.
component:
  description: "The name of the component to plan."
  required: true
stack:
  description: "The stack name for the given component."
  required: true

So instead, we should be retrieving the integration config for the component in the stack.

Then it will do what you want.

or that yes

So most things in atmos.yaml can be extended in the settings section of a component. Integrations is one of those.

I am not sure when @Igor Rodionov can get to it, but we could probably define it well enough so if you wanted to PR it, you could do it - since maybe it’s a blocker for you

the other PRs that Igor has in flight, and mine adding --stacks for describe-affected, are a blocker for me

The problem is it should be using this get settings action.

( I mention them in the other thread)

- name: Get Atmos Multiple Settings
  uses: cloudposse/github-action-atmos-get-setting@main
  id: example
  with:
    settings: |
      - component: foo
        stack: core-ue1-dev
        settingsPath: settings.secrets-arn
        outputPath: secretsArn
      - component: foo
        stack: core-ue1-dev
        settingsPath: settings.secrets-arn
        outputPath: roleArn

that will handle all the lookups the right way
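For reference, downstream steps then read the collected values from the action's single settings output as JSON. A sketch (the step name and echo are illustrative; the fromJson pattern matches how the official actions consume this output):

```yaml
# Hypothetical follow-up step; assumes the `example` step id from the snippet above.
- name: Use the settings
  shell: bash
  run: |
    echo "secrets ARN: ${{ fromJson(steps.example.outputs.settings).secretsArn }}"
```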

so you are suggesting to move away from using this
"terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT
and use github-action-atmos-get-setting? Are you ok with breaking changes, or should I keep the current use of atmos describe config
and add a new section that uses github-action-atmos-get-setting?

@Erik Osterman (Cloud Posse)

I am not sure that it would be breaking

I think it would work the same way, but add the ability for configurations to be inherited

I’m having some weird issues with the github-action-atmos-terraform-apply action, where it seemingly forgets which command it should run. I’m using OpenTofu and have set components.terraform.command: tofu
which the ...-plan
action picks up perfectly fine (and it also works locally), but ...-apply
ignores that setting and tries to use terraform
which isn’t installed. Using ATMOS_COMPONENTS_TERRAFORM_COMMAND
works, which makes me believe it’s an issue with how the config is read (although it’s the only thing that is being ignored).
I’m using the GitHub actions exactly as described in the docs.
I’ve combed through both actions to try and figure out what the difference is, but I have no clue (I did find a discrepancy in how the cache is loaded, which led to the cache key not being found since the path is different). For context, it’s defined in atmos.yaml
in the workspace root, not in /rootfs/usr/local/etc/atmos/atmos.yaml
which is the config-path (the folder not the file) that’s defined when running the action.
Is there anything I should look out for to make this work? Happy to post config files, they are pretty standard.

@Igor Rodionov any ideas?

Have you set the atmos-config-path?


Yep, atmos-config-path
is set and I verified it’s the correct value.

This is the complete action execution:
Run cloudposse/github-action-atmos-terraform-apply@v2
with:
  component: aurora-postgres
  stack: plat-euw1-prod
  sha: c60fd18fb31bea65c8bf5975913940623c3d98c6
  atmos-config-path: ./rootfs/usr/local/etc/atmos/
  atmos-version: 1.90.0
  branding-logo-image: https://cloudposse.com/logo-300x69.svg
  branding-logo-url: https://cloudposse.com/
  debug: false
  token: ***
env:
  AWS_CLI_VERSION: 2
  AWS_CLI_ARCH: amd64
  VERBOSE: false
  LIGHTSAILCTL: false
  BINDIR: /usr/local/bin
  INSTALLROOTDIR: /usr/local
  ROOTDIR:
  WORKDIR:
and the error message
exec: "terraform": executable file not found in $PATH
exec: "terraform": executable file not found in $PATH

Could this be related? read PR description https://github.com/cloudposse/atmos/pull/717
what
Install Terraform using hashicorp/setup-terraform
action in CI.
why
CI failed because Terraform wasn’t found.
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449045566
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449046010
Run cd examples/demo-context
all stacks validated successfully
exec: "terraform": executable file not found in $PATH
This is because ubuntu-latest was updated to ubuntu-24.04 and Terraform was removed from it.
https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2404-Readme.md
On the other hand, Ubuntu 22.04 has Terraform.
https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md
references
Summary by CodeRabbit
• Chores
• Enhanced workflow for testing and linting by integrating Terraform setup in multiple job sections.
• Updated the lint job to dynamically retrieve the Terraform version for improved flexibility.

I don’t think so, since terraform
shouldn’t be called at all, tofu
should.
When I was running this on GitHub-hosted runners (but apparently that has changed according to the PR), I was actually getting a different error that makes more sense now and it was about schemas not being found because they were using the OpenTofu registry (registry.opentofu.org) but the plugins were downloaded from registry.terraform.io.
I can also see the difference between plans ("OpenTofu has been successfully initialized") and applies ("Terraform has been successfully initialized") in the output, now that I know what to look for.
Edit: going to rerun with ATMOS_LOGS_LEVEL=Trace
and report back.

Okay, I feel stupid now. :face_palm: Apparently my two atmos configs (rootfs/...
and the one in the base directory) were out of sync, and that caused the issues, because they are obviously not merged. I guess I misunderstood something over the last couple of years and didn’t re-read the docs properly. Anyway, thanks for your help, Erik!
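To illustrate the failure mode (a self-contained sketch with dummy files, not the actual project paths): two atmos.yaml files are independent, so any drift between them goes unnoticed until one command reads one and another command reads the other:

```shell
# Create two illustrative configs that disagree on the terraform command,
# mimicking an out-of-sync root atmos.yaml and rootfs/.../atmos.yaml.
mkdir -p /tmp/atmos-demo/rootfs/usr/local/etc/atmos
printf 'components:\n  terraform:\n    command: tofu\n' > /tmp/atmos-demo/atmos.yaml
printf 'components:\n  terraform:\n    command: terraform\n' \
  > /tmp/atmos-demo/rootfs/usr/local/etc/atmos/atmos.yaml
# diff exits non-zero when the files differ; the two configs are NOT merged by Atmos
diff /tmp/atmos-demo/atmos.yaml /tmp/atmos-demo/rootfs/usr/local/etc/atmos/atmos.yaml \
  || echo "configs differ"
```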
What still baffles me though, is that the -plan
action worked perfectly fine.

Hrmmm why do you have 2 atmos configs?

I’ve just followed this: https://github.com/cloudposse/atmos/tree/main/examples/quick-start-advanced

To be honest, I’ve seen these 2 configs since the early examples in the Atmos repo and always thought that this was a common thing, so I never questioned it.

and yet another interesting opportunity: since github-action-atmos-terraform-plan
uses actions/checkout
inside the action, if you vendor files but do not commit them, the checkout
wipes them out.

If it would help, checkout could be feature flagged

ok, I will create a PR for that

it does help but I will need to add a restore cache step too

using the same flag

basically something like this:
- name: Checkout
  uses: actions/checkout@v4
  with:
    ref: ${{ inputs.sha }}
- name: Restore atmos files cache
  uses: actions/cache/restore@v4
  with:
    path: atmos
    key: ${{ runner.os }}-atmosvendor

the checkout needs to exist for the job to see the files of the repo in the new runner
2024-10-18

Hey, I’m not sure if I’m getting this right. https://atmos.tools/core-concepts/components/terraform/backends/#terraform-backend-inheritance states, that if I want to manage multiple accounts I need a separate bucket, dynamodb and iam role. So far so good, but if I run atmos describe stacks
it seems like the first stack is running fine, but as soon as it is trying to fetch the second stack I get Error: Backend configuration changed
. I use init_run_reconfigure: true
in atmos.yaml
.
My backend configuration looks like this:
terraform:
  backend_type: s3
  backend:
    s3:
      profile: "{{ .vars.account_profile }}"
      encrypt: true
      key: "terraform.tfstate"
      bucket: "kn-terraform-backend-{{ .vars.account_id }}-{{ .vars.region }}"
      dynamodb_table: "TerraformBackendLock"
The profile is switched based on the stack.
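For reference, with auto_generate_backend_file enabled, Atmos renders a config like the one above into a backend.tf.json in the component directory. Roughly something like the following (the profile, account ID, region, and workspace key prefix values are illustrative):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "profile": "acme-dev-admin",
        "encrypt": true,
        "key": "terraform.tfstate",
        "bucket": "kn-terraform-backend-123456789012-eu-central-1",
        "dynamodb_table": "TerraformBackendLock",
        "workspace_key_prefix": "vpc"
      }
    }
  }
}
```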

Are you also using {{ atmos.Component ...}}
?

Yes

I’m using some components that are depending on the vpc id of the vpc component

Ok, I think that could be a scenario not thoroughly tested.

We are also working on a replacement for the {{ atmos.Component ...}}
implementation, which is problematic for multiple reasons, chief among them performance, since everything must be evaluated every time - and since this is handled by the Go template engine at load time, it’s slow. We’re working on a way to do this with YAML explicit types and lazy loading.

However, I’m really glad that you bring this use-case up, as I don’t believe we’ve considered it. The crux of the problem is the component is invoked nearly concurrently in multiple “init” contexts, so we would need to copy it somewhere so it can be concurrently initialized with different backends.

So I can understand why you want to do this, and why it’s needed, but we need to think about how to do this. It’s further complicated by copying, since a root module (component), can have relative source = "../something"
module statements

I think the problem may be mitigated by using Terraform data sources (in HCL) instead of atmos.Component
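A hedged sketch of that approach using the built-in terraform_remote_state data source (the bucket, key, and region values are illustrative, not from this thread):

```hcl
# Read the vpc component's state directly instead of templating atmos.Component,
# so no nested "init" of another component happens at template-evaluation time.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "kn-terraform-backend-123456789012-eu-central-1" # illustrative
    key    = "vpc/terraform.tfstate"                          # illustrative
    region = "eu-central-1"
  }
}

locals {
  vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
}
```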


I will try it with a datasource. Thanks for your explanation

Just another question, do you already have something in mind to replace the templating with? Just another templating or remote-state?

yaml explicit type functions - will probably be released this week, and will be used to define Atmos functions in YAML

the functions will include getting a terraform output from a remote state, getting secrets from many different systems, etc

Hey, just a quick catchup. This is not released yet, is it?

We have the branch published but the PR is not yet ready

We discussed this on Friday internally, and are considering introducing feature flags - as this should be considered experimental until we understand all the implications and edge cases.

We learned a lot from the template functions, and the way it was used was not the way we used it ourselves :-)

Ah, good to know. I would gladly appreciate such a feature flag and try it out

I anticipate we should be able to get this wrapped up this week, but defer to @Andriy Knysh (Cloud Posse)

@Dennis Bernardy are you using the latest Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.100.0 ?
Please try it out and let me know if it fixes the issue with multiple backends
Clean Terraform workspace before executing terraform init. When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:
1. default
2. <the previously used workspace>
The prompt forces the user to always make a selection (which is error-prone), and also makes it complicated when running on CI/CD.
The PR adds the logic that deletes the .terraform/environment file from the component directory before executing terraform init. The .terraform/environment file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure. We delete the file before executing terraform init to prevent the Terraform prompt asking to select the default or the previously used workspace.
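A minimal sketch of what that cleanup amounts to (illustrative paths, not the Atmos source):

```shell
# Deleting .terraform/environment clears the previously selected workspace,
# so the next `terraform init` won't prompt for a workspace choice.
component_dir=/tmp/demo-component
mkdir -p "$component_dir/.terraform"
echo "prod" > "$component_dir/.terraform/environment"  # simulate a prior `workspace select`
rm -f "$component_dir/.terraform/environment"          # what the PR does before init
test ! -f "$component_dir/.terraform/environment" && echo "workspace selection cleared"
```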

Looks like the same error still:
template: describe-stacks-all-sections:56:24: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".

I see you are using atmos.Component
(and multiple backends). This case is not solved yet, we know this is an issue and discussed it (will have a solution for that prob this week)

we will probably have a solution for that today, but no ETA

Ok, I’m looking forward to it

I’m trying to decide on something similar with my backend. I have everything right now in GovCloud West, but was hoping to monorepo East as part of the same deployment repository. Unsure if anyone’s done that here.

Is it expected to get “<no value>” for when I describe a stack that has one of its values coming from a datasource like Vault, or does that mean that I’m not grabbing the value correctly? Thanks!

Just bumping this up, in case anyone knows something about it. I get a map[] when I don’t specify a key so I assume it’s correctly accessing Vault? But it’s weird that the map is always empty

How do you access that value?

Hey, I’m trying to get it like this:
- name: KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY
  value: '{{ (datasource "vault-dev" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'

And this is part of a ECS “environment” list

This works, for example:
gomplate -d vault=vault+http:/// -i '{{(datasource "vault" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
Given the proper vault token and addr are set

And I’ve set the vault-dev data source in my atmos.yaml

And I’m setting an environment variable in my terminal with VAULT_TOKEN before running atmos

Ah. That looks like a different approach than ours. I only noticed recently that if you use atmos.Component to template outputs from a module, you cannot get lists or maps. Maps specifically only return <no value>. Maybe that limitation also applies here and templating in general is only supported for strings, but that is better confirmed by someone, as this is just a suspicion

Hmm, this is interesting, the KC_SPI… var should be a string, if I try to get the whole secret path, like not specifying the ).KC...
part, I get a “map[]”, which seems to be an empty map. The worst part is not knowing what I’m doing wrong lol

I only noticed recently that if you use .atmos.component to template outputs from a module you can not get lists or maps.
See this workaround
I was able to work around issues with passing more complex objects between components by changing the default delimiter for templating
Instead of the default delimiters: ["{{", "}}"]
Use delimiters: ["'{{", "}}'"]
.
It restricts doing some things like having a string prefix or suffix around the template, but there are other ways to handle this.
vars:
  vpc_config: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_attrs) }}'
works with a object like
output "vpc_attrs" {
  value = {
    vpc_id          = "test-vpc-id"
    subnet_ids      = ["subnet-01", "subnet-02"]
    azs             = ["us-east-1a", "us-east-1b"]
    private_subnets = ["private-subnet-01", "private-subnet-02"]
    intra_subnets   = ["intra-subnet-01", "intra-subnet-02"]
  }
}
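On the consuming side, the component would then declare the variable as a string and decode it back into an object (a sketch, assuming the vpc_config var from the workaround above):

```hcl
variable "vpc_config" {
  type = string # JSON-encoded by toRawJson in the stack config
}

locals {
  # Decode back to an object; fields follow the vpc_attrs output shape above,
  # e.g. local.vpc_attrs.vpc_id, local.vpc_attrs.subnet_ids
  vpc_attrs = jsondecode(var.vpc_config)
}
```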

Also, we’re working on a better long-term solution using YAML explicit types. ETA this week.

This is interesting, do you think this applies to my issue with the vault gomplate data source?

Hrmm… probably less so, since you’re trying to retrieve a string.

Have you tried enabling ATMOS_LOGS_LEVEL=Trace

It might shed more light

Are you able to share your datasource configuration in atmos.yml for
vault-dev

I currently have Trace set in my atmos.yaml, it doesn’t output any errors. Yes, sure:
templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        vault-prod:
          url: "vault+http://vault.companydomain:8200"
        vault-dev:
          url: "vault+http://vault-dev.companydomain:8200"

gomplate -d "vault-dev=vault+http://vault-dev.companydomain:8200/" -i '{{ (datasource "vault-dev" "infrastructure/keycloak").RANDOM_SECRET_KEY }}'
works on the same terminal, in case it helps

I tried it without specifying an actual secret too and got a “map[]” in the value field of my variable. This is why I think I’m not correctly grabbing the variable but I just wanted to confirm

For context, I’m trying to grab the value like this:
- name: KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY
  value: '{{ (datasource "vault-dev" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'

Ah, also wanted to mention that when I do something like:
gomplate -d vault=vault+http:/// -i '{{(datasource "vault" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
I do receive a value. Also, I’ve set up a custom command just to see if I’m getting the VAULT_TOKEN correctly inside atmos and it seems that I am:
$ atmos validate-token
Executing command:
echo "VAULT_TOKEN is: $VAULT_TOKEN"
VAULT_TOKEN is: [[redacted]]
2024-10-19

Fix: condition to display help interactive menu @Cerebrovinny (#724)
what
• Ensure that the usage is displayed only when invoking the help commands or when the help flag is set
why
• Running incorrect commands in Atmos caused the output to be an interactive help menu forcing the user to manually exit the UI

is it possible to use the cloudposse/github-action-atmos-get-setting@v2 action to retrieve a setting that is not at the component level and is outside terraform/components? (like a var or anything else)

Can you give an example

I can get this no problem
- component: ${{ inputs.component }}
  stack: ${{ inputs.stack }}
  settingsPath: settings.integrations.github.gitops.role
  outputPath: integrations-gitops

but then I thought about doing this :
import:
vars:
  location: west
  environment: us3
  namespace: dev
settings:
  github:
    actions_enabled: true
  integrations:
    github:
      gitops:
        region: "westus3"
        role: "arn:aws:iam::123456789012:role/atmos-terraform-plan-gitops"
        table: "terraform-state-lock"
components:
  terraform:
    ......

but I do not think that will work since the stack yaml is not free form as I understand it

But that is then inherited by all components, so no need to retrieve it from anywhere else

but this is on a stack file

That’s fine

this goes back to my questions about
echo "aws-region=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].region')" >> $GITHUB_OUTPUT
echo "terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT
echo "terraform-state-table=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].table')" >> $GITHUB_OUTPUT

It’s still inherited, right?

for the plan action

the integration config lives in atmos.yaml (we could call that global)

if I override the settings at the component level, then I can retrieve it like this:
- component: ${{ inputs.component }}
  stack: ${{ inputs.stack }}
  settingsPath: settings.integrations.github.gitops.role
  outputPath: integrations-gitops

from
cosmosdb:
  settings:
    github:
      actions_enabled: true
    integrations:
      github:
        gitops:
          region: "westus3"
          role: "arn:aws:iam::123456789012:role/atmos-terraform-plan-gitops"
          table: "terraform-state-lock"

I am on my phone so I cannot give examples

I was trying to avoid having to do it per component

The point is that we should not be retrieving the value from integrations; we should be using the settings action, but we are not

You are only applying 1 component at a time, so this is ideal - use settings action

I am pretty sure everything in atmos integrations is just deepmerged into .settings

@Igor Rodionov should not be calling jq on values in the atmos config, but instead using the final value. Then it works both the way you write it here, with putting it in a stack config, as well as putting it in atmos yaml

understand

I started implementing your suggestion as follows:
- name: Get atmos settings
  uses: cloudposse/github-action-atmos-get-setting@v2
  id: component
  with:
    settings: |
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: settings.github.actions_enabled
        outputPath: enabled
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: component_info.component_path
        outputPath: component-path
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: atmos_cli_config.base_path
        outputPath: base-path
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: command
        outputPath: command
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: settings.integrations.github.gitops.role
        outputPath: integrations-gitops

and use that to get the settings for the gitops integration

On my phone that looks like the right trajectory

and with glasses?

I would have to compare what was there originally with this new one, and my mental cache scrolling up and down (or late-night willpower) is depleted. The real question is: is it working? If so, I think it’s probably good

yes, it works

Sweet! Let’s have @Igor Rodionov review.


I will add a few more things and I will create a PR

Thanks @jose.amengual !

https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/92, needs doc changes but I want to see if that seems ok so far
what
This is based on #90 that @goruha was working on.
Replace the describe config calls with cloudposse/github-action-atmos-get-setting
Replace if statements to check for the Azure repository type
Add Azure Blob Storage and Cosmos
Add a cache restore option for vendor files outside git
why
To support Azure and better config settings
references

@Igor Rodionov up

@jose.amengual why do we need to cache the whole repo while we can check it out?

not the whole repo, but just the stacks folder

I have been testing a lot , so I tried many things

somehow the cache is not getting created

what is the benefit? The hit rate would be low, as the key is based on the SHA

the problem is when using vendored files or, in this case, templated stack files that are changed in another step but not committed to the repo

Any file that is part of the repo (in git) will be overwritten by actions/checkout after the sed command runs

same happens with any untracked files

could you provide an example of untracked files?

untracked files is a problem I have when I use atmos vendor since I vendor stacks files from another repo

for the test github-action-atmos-terraform-plan something similar happens

Keep in mind that Erik wanted to move the integration settings from atmos.yaml to each component

why don’t you commit the vendor dir into the repo?

we use vendoring for terraform components and commit them

let’s use the same for stacks vendoring

Let’s focus on the test issue, because if we solve that problem, my use case will be solved too

sure

what you need to do is

roll back changes like this



I suppose you would have to get rid of this part

integrations:
  github:
    gitops:
      terraform-version: 1.5.2
      infracost-enabled: __INFRACOST_ENABLED__
      artifact-storage:
        region: __STORAGE_REGION__
        bucket: __STORAGE_BUCKET__
        table: __STORAGE_TABLE__
        role: __STORAGE_ROLE__
        plan-repository-type: azureblob
        blob-account-name:
        blob-container-name:
        metadata-repository-type:
        cosmos-container-name:
        cosmos-database-name:
        cosmos-endpoint:
      role:
        plan: __PLAN_ROLE__
        apply: __APPLY_ROLE__
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")

here

and replace it with

and add here


components:
  terraform:
    settings:
      github:
        gitops:
          terraform-version: 1.5.2
          infracost-enabled: __INFRACOST_ENABLED__
          artifact-storage:
            region: __STORAGE_REGION__
            .....

and so on

can you have settings at that level?

@Andriy Knysh (Cloud Posse) would this suggestion work?

and what do we do with those files after the template runs? This part here:
sed -i -e "s#__INFRACOST_ENABLED__#false#g" "$file"
sed -i -e "s#__STORAGE_REGION__#${{ env.AWS_REGION }}#g" "$file"
sed -i -e "s#__STORAGE_BUCKET__#${{ secrets.TERRAFORM_STATE_BUCKET }}#g" "$file"
sed -i -e "s#__STORAGE_TABLE__#${{ secrets.TERRAFORM_STATE_TABLE }}#g" "$file"
sed -i -e "s#__STORAGE_TABLE__#${{ secrets.TERRAFORM_STATE_TABLE }}#g" "$file"
sed -i -e "s#__STORAGE_ROLE__#${{ secrets.TERRAFORM_STATE_ROLE }}#g" "$file"
sed -i -e "s#__PLAN_ROLE__#${{ secrets.TERRAFORM_PLAN_ROLE }}#g" "$file"
sed -i -e "s#__APPLY_ROLE__#${{ secrets.TERRAFORM_PLAN_ROLE }}#g" "$file"
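A self-contained sketch of that placeholder substitution (dummy file and values; the real workflow uses GitHub secrets). Note GNU sed syntax for -i:

```shell
# Create an illustrative templated stack file and substitute two placeholders,
# the same way the workflow's sed commands rewrite the test stacks.
file=/tmp/demo-stack.yaml
printf 'infracost-enabled: __INFRACOST_ENABLED__\nregion: __STORAGE_REGION__\n' > "$file"
sed -i -e "s#__INFRACOST_ENABLED__#false#g" "$file"
sed -i -e "s#__STORAGE_REGION__#us-west-2#g" "$file"
cat "$file"
```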

@jose.amengual it seems my suggestion would not work

let me think about this

that is why I added the cache

in my fork, this works just fine

I got the problem, I’ll come back with the workaround tomorrow. It is night in my timezone now and I’m too tired to find the right solution. But caching still looks not good to me, as it breaks the GitOps concept

the caching is optional in my code for that reason, but let me know tomorrow

To explain the problem and scenarios, I will fully explain the use cases:
1.- github-action-atmos-terraform-plan
changes and cache issues:
for https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/92, Erik wanted to move the integration settings to the component instead of the atmos.yaml file. To accomplish that, I needed to add the integration settings to the test stacks and use cloudposse/github-action-atmos-get-setting@v2, so now the test stacks look like this:
components:
  terraform:
    foobar/changes:
      component: foobar
      settings:
        github:
          actions_enabled: true
          gitops:
            terraform-version: 1.5.2
            infracost-enabled: __INFRACOST_ENABLED__
            artifact-storage:
              region: __STORAGE_REGION__
              bucket: __STORAGE_BUCKET__
              table: __STORAGE_TABLE__
              role: __STORAGE_ROLE__
              plan-repository-type: s3
              blob-account-name:
              blob-container-name:
              metadata-repository-type: dynamo
              cosmos-container-name:
              cosmos-database-name:
              cosmos-endpoint:
            role:
              plan: __PLAN_ROLE__
              apply: __APPLY_ROLE__
      vars:
        example: blue
        enabled: true
        enable_failure: false
        enable_warning: true
Previously, we relied solely on atmos.yaml:
mkdir -p ${{ runner.temp }}
cp ./tests/terraform/atmos.yaml ${{ runner.temp }}/atmos.yaml
This workaround no longer works because the component settings now manage integration settings. The new approach required stack files to persist after replacing values. We still set atmos-config-path for actions, but stacks requires atmos_base_path, which isn’t available in the current action input.
The action calculates the base path as follows:
- name: Set atmos cli base path vars
  if: ${{ fromJson(steps.atmos-settings.outputs.settings).enabled }}
  shell: bash
  run: |-
    # Set ATMOS_BASE_PATH to allow the `cloudposse/utils` provider to read the atmos config from the correct path
    ATMOS_BASE_PATH="${{ fromJson(steps.atmos-settings.outputs.settings).base-path }}"
    echo "ATMOS_BASE_PATH=$(realpath ${ATMOS_BASE_PATH:-./})" >> $GITHUB_ENV
The issue is that without the base path, Atmos can’t locate the stack files. The checkout action wipes untracked or modified files, which is why we need to use the cache action. This cache will restore the files with the template replacements after the checkout action. Currently: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L62 My PR: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/.github/workflows/integration-tests.yml#L32
The problem: In my tests, the caching worked fine, but during the actual test runs, no cache was created. You can see the cache setup here:
https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/.github/workflows/integration-tests.yml#L46 and https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/action.yml#L72
But the cache isn’t created during test runs https://github.com/cloudposse/github-action-atmos-terraform-plan/actions/caches, Any insights into this issue would be appreciated.
2.- github-action-atmos-terraform-plan
vendoring support:
in the following scenario :
pepe-iac-repo = the atmos monorepo that contains all the stacks and all components, but does not deploy any services, only core functionality like VPCs, subnets, firewalls, etc. pepe-service-repo = a service repository that holds code for a serverless function and the following structure:
/pepe-service-repo
  /atmos
    /stacks
      /sandbox
        service.yaml
  /config
    /atmos.yaml
The goal is to keep the service stack (service.yaml) near the developers, without needing to submit two PRs (one for core infra in the pepe-iac-repo and another for the service). The service stack should only describe service-related components, excluding core infra.
The deployment process is simple:
1. create a PR
2. atmos vendor all the components (atmos vendor pull), the sandbox stack, and the catalog from the pepe-iac-repo
3. run github-action-atmos-terraform-plan after identifying all the components from the service.yaml
Issue: The action/checkout wipes untracked files (like service.yaml), so the cache action must restore these files after the checkout to preserve the service.yaml during the process.
I personally believe that this is not against any GitOps principle; in fact, a similar mechanism can be found in Atlantis with pre-workflow-hooks and post-workflow-hooks.

Why don’t we just make checkout optional? That way you can manage the checkout outside of the action?

Are you able to share your workflows?

I believe I tried that and it didn’t persist

@Igor Rodionov sounded like he understood it better than me. But if all actions are executed in a single workflow run, not spanning multiple jobs, caching is not needed

but I could be wrong

Well, you are right if there are multiple checkouts

But we don’t need to enforce multiple checkouts

if the checkout persists, then we do the checkout outside and that should do it


Our actions often combine a lot of repetitive steps in our workflows, but those might cause problems in your workflows. But it would conceivably be easy to make checkouts optional.

Exactly..

Changes to files definitely persist between steps in a job, but they definitely do not persist between jobs or between workflow executions

But steps in a workflow can “undo” changes, which seems like what you are seeing. In that case, we need to prevent the undoing. :-)

testing


I have no idea why it didn’t work when I tried it; maybe it was because at that point I didn’t know the actions/checkout was in the action.yaml

ok, I will change the input name and the other tests tomorrow and do some cleanup

that is the easiest solution



you can provide a PR with that and I will approve it

will do a bit later

Thanks
2024-10-20

hey folks, I would like to use secrets from 1Password. Is there a way to replace the terraform command with op run --env-file=.env -- tofu?

@Andriy Knysh (Cloud Posse) you did something similar before for vault secrets, no?

we used a custom Atmos command to access Hashi Vault

can you share that command snippet please?

just for reference, make sure i’m doing fine.

overriding an existing command and calling it in the custom command creates a loop (as you see in your code)

you can create a custom command with a diff name, or you can call just terraform (w/o Atmos) from the custom command

@Erik Osterman (Cloud Posse) we need to think about adding hooks (e.g. before_exec
, after_exec
) to Atmos commands - this will allow calling other commands before executing any Atmos command

I got this if I remove atmos and call just terraform as you suggested:
│ Error: Too many command line arguments
│
│ Expected at most one positional argument.
the command:
- terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}

commands:
  - name: vault
    description: Get vault token
    verbose: false
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: stack
        required: true
    component_config:
      component: "{{ .Arguments.component }}"
      stack: "{{ .Flags.stack }}"
    steps:
      - |
        set -e
        AWS_ROLE_ARN=......
        credentials=$(aws sts assume-role --role-arn "$AWS_ROLE_ARN" --role-session-name vaultSession --duration-seconds 3600 --output=json)
        export AWS_ACCESS_KEY_ID=$(echo "${credentials}" | jq -r '.Credentials.AccessKeyId')
        export AWS_SECRET_ACCESS_KEY=$(echo "${credentials}" | jq -r '.Credentials.SecretAccessKey')
        export AWS_SESSION_TOKEN=$(echo "${credentials}" | jq -r '.Credentials.SessionToken')
        export AWS_EXPIRATION=$(echo "${credentials}" | jq -r '.Credentials.Expiration')
        VAULT_TOKEN=$(vault login -token-only -address=https://"$VAULT_ADDR" -method=aws header_value="$VAULT_ADDR" role="$VAULT_ROLE")
        echo export VAULT_TOKEN="$VAULT_TOKEN"
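Since the last step echoes an export line, the intended usage is presumably to eval the command's output in the calling shell. A self-contained sketch of that pattern, with a stub function standing in for the real atmos command:

```shell
# Stub standing in for `atmos vault <component> -s <stack>`, which prints an
# `export VAULT_TOKEN=...` line; eval-ing its output sets the variable here.
atmos_vault_stub() { echo 'export VAULT_TOKEN=dummy-token'; }
eval "$(atmos_vault_stub)"
echo "$VAULT_TOKEN"
```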

hooks can be very handy also

example of atmos vault
command ^

got this if i remove atmos and calling just terraform as you suggested:

thanks for sharing the above command

you can’t just call terraform, you need to prepare vars, backend and other things that Atmos does

you can call Atmos commands to generate varfile and backend, then call terraform apply

(it’s easier to create a separate command, e.g. atmos terraform provision, and call op and atmos terraform apply from it)
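A hedged sketch of what such a separate command could look like in atmos.yaml (the command name, flags, and op invocation are illustrative assumptions; atmos terraform generate varfile, generate backend, and apply are real Atmos subcommands):

```yaml
# atmos.yaml - hypothetical "terraform provision" custom command that
# prepares the varfile and backend, then applies under `op run`
commands:
  - name: terraform provision
    description: Generate varfile/backend, then apply with 1Password env injection
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: Name of the stack
        required: true
    steps:
      - atmos terraform generate varfile {{ .Arguments.component }} -s {{ .Flags.stack }}
      - atmos terraform generate backend {{ .Arguments.component }} -s {{ .Flags.stack }}
      - op run --env-file=.env -- atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}
```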

ok thanks

another way I was thinking of is a template function, is it possible to create a custom template function in atmos?

good question, currently not, but we are thinking about it.

gomplate would be even better but there is no support for 1password it seems

(we already have custom template functions in Atmos, e.g. atmos.Component, but it’s inside Atmos)

gomplate is supported, I’m not sure about 1pass (did not see that)

would be amazing to have support for at least a generic CLI call template function

yea, atmos.Exec - we can do that

exactly, i was thinking similar
vars:
  foo: '{{ (atmos.Exec "op read ...").stdout }}'

or something like this

or consider the integration of this: https://github.com/helmfile/vals, this works fantastic in helmfiles

i can simply reference vars, and vals replaces them during templating: token: <ref+op://infra/ghcr-pull/token>
and they support many different sources, including sops

that’s nice, thanks for sharing

we can prob embed the binary in Atmos and reuse it

it’s a go library you can easily embed

@Gabriela Campana (Cloud Posse) please create the following Atmos tasks (so we track them):
• Add before_exec and after_exec hooks to Atmos commands (configured in atmos.yaml)
• Implement atmos.Exec custom template function to be able to execute any external command or script and return the result
• Review https://github.com/helmfile/vals, and implement similar functionality in Atmos (embed the lib into Atmos if feasible/possible)
Helm-like configuration values loader with support for various sources
Helm-like configuration values loader with support for various sources

this would be fantastic. secrets handling is really overlooked in tooling if you’re using something other than Vault.

yes, agree, once you get to secrets handling, it’s complicated and takes a lot of time and effort. We’ll improve it in Atmos

thanks for the conversation

thank you!

Note atmos supports multiple other ways to retrieve secrets via SSM, S3, Vault, etc. These are via the Gomplate data sources. Gomplate doesn’t have one for 1Password.

We also have plans to support SOPS

Is using Gomplate a valid (and recommended) approach for passing secrets into Atmos?

it’s just one way that is currently supported

we’ll implement the other ways described above

I am just curious if it’s worth investing in this approach or holding off until something more solid is in place

we’ll be working on it, might have something next week, although no ETA

Got it. No urgency here - will hold off on using Gomplate for now.

I knew I had seen something like this before!! Thanks for sharing. We’ll use that.

@Andriy Knysh (Cloud Posse) we should discuss how we can leverage vals. Do you think it should be another data source?

let me review it, we’ll discuss the implementation (interface)

it could be another data source (e.g atmos.XXXX), or/and Yaml explicit type functions

Ya explicit type


it seems I can create a custom command in atmos.yaml, but overriding terraform apply is not working. the command is just hanging.

Can you share your custom command?

- name: apply2
  arguments:
    - name: component
      description: Name of the component
  flags:
    - name: stack
      shorthand: s
      description: Name of the stack
      required: true
    - name: auto-approve
      shorthand: a
      description: Auto approve
  steps:
    - op run --no-masking --env-file=.env -- atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}

it works, but if I rename apply2 to apply to override the command, it’s just hanging..
kinda makes sense, as it calls itself eventually?

As implemented that creates a recursive command

That’s why it doesn’t return

Renaming apply to apply2 works, because it’s no longer recursive

yes, i think so too. but in the docs there is an example hence i thought it could work.

you CAN override any existing Atmos command (native or custom), but you can’t call the command from itself

sure, thanks!

can you do this?
- path: "sandbox/service"
  context: {}
  skip_templates_processing: false
  ignore_missing_template_values: false
  skip_if_missing: true
to then import the file after it is created?

so, run atmos vendor, then atmos terraform plan....., and I’m expecting the service.yaml to be imported after

Basically that stack import is optional?

What causes the file to get created?

I need a valid stack file for atmos vendor to work

that is my service.yaml

then I vendor and bring all the other stacks files to complete the sandbox stack

I was thinking about how I could have a conditional import, so that I could add new files to import on the service side

We have a PR open now to eliminate that requirement, I think

It shouldn’t be required to have stack configs to use vendoring

ahhh ok

Actually, looks like we didn’t - but will get that fixed this week

no problem


ok, I’ll start working on it

@jose.amengual https://github.com/cloudposse/atmos/pull/738
what
• Stop processing stack configs when running the command atmos vendor pull
why
• Atmos vendor should not require stack configs
references
• DEV-2689 • https://linear.app/cloudposse/issue/DEV-2689/atmos-vendor-should-not-require-stack-configs
Summary by CodeRabbit
• New Features
• Updated vendor pull command to no longer require stack configurations.
• Bug Fixes
• Maintained error handling and validation for command-line flags, ensuring consistent functionality.

Do not process stack configs when executing the command atmos vendor pull and the stack flag is not specified @haitham911 (#738)
what
• Do not process stack configs when executing the command atmos vendor pull and the stack flag is not specified
why
• Atmos vendor should not require stack configs if the stack flag is not provided

@jose.amengual can you let me know as soon as you validated it?

this is not working, cc @haitham911eg

What’s the issue? I tested it yesterday

If you attempt to run atmos vendor pull with only an atmos.yaml and vendor.yaml file, it will error because no stacks folder exists, and furthermore, no stack files exist in the folder.


ahh yes, I only tested the case where the include path does not exist


@Erik Osterman (Cloud Posse)


I fixed it to work without a stacks folder

Ok, cool - post screenshots in PR too

I will send PR now with screenshots


what
• Do not process stack configs when executing the command atmos vendor pull and the stack flag is not specified
why
• Atmos vendor should not require stack configs if the stack flag is not provided
references
• DEV-2689
• https://linear.app/cloudposse/issue/DEV-2689/atmos-vendor-should-not-require-stack-configs

Updates to Installation Instructions @osterman (#549)
what
• Add terminal configuration instructions
• Add installation instructions for fonts
why
• Not everyone is familiar with what makes the atmos TUI look good =)
Read Atmos config and vendor config from .yaml or .yml @haitham911 (#736)
what
• Read Atmos config and vendor file from atmos.yaml or atmos.yml, vendor.yaml or vendor.yml
• If both .yaml and .yml files exist, the .yaml file is prioritized
why
• Supports both YAML extensions
Improve logging in atmos vendor pull @haitham911 (#730)
What
• Added functionality to log the specific tags being processed during atmos vendor pull --tags demo.
• Now, when running the command, the log will display: Processing config file vendor.yaml for tags {demo1, demo2, demo3}.
Why
• This update improves visibility by explicitly showing the tags during the pull operation

do you guys allow GH action to create caches? https://github.com/cloudposse/github-action-atmos-terraform-plan/actions/runs/11432993177/job/31804302214
Run actions/cache@v4
with:
  path: ./
  key: atmos
  enableCrossOsArchive: false
  fail-on-cache-miss: false
  lookup-only: false
  save-always: false
env:
  AWS_REGION: us-east-2
Cache not found for input keys: atmos

@Igor Rodionov

the run failed because of
Error: Region is not valid: __STORAGE_REGION__

we use cache to speed up terraform init by caching providers. But the failure is not related to the cache

it’s because the test components are templated and processed before the call to the action, but the action has actions/checkout, so it wipes out the changes. Since we moved from atmos.yaml settings to component settings, the test stacks are needed after the template execution, which is why I wanted to create a cache and restore it (similar problem when you vendor stack files: they get wiped by the action)
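For reference, a hedged sketch of that save/restore pattern using the split actions/cache sub-actions (paths, key, and step placement here are illustrative assumptions, not the actual workflow):

```yaml
# Save the templated stack files before invoking the Atmos action
- uses: actions/cache/save@v4
  with:
    path: stacks/
    key: atmos-stacks-${{ github.run_id }}

# ... the Atmos action runs actions/checkout internally, wiping local changes ...

# Restore the templated files afterwards so the action sees them
- uses: actions/cache/restore@v4
  with:
    path: stacks/
    key: atmos-stacks-${{ github.run_id }}
    fail-on-cache-miss: true
```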

@jose.amengual I will check your PRs today or tomorrow.

I want to finish fixing the tests , so I need to figure out this cache thing
2024-10-21

is there any simple example of what’s the best practice to create a Kubernetes cluster (let’s say GKE, but the provider doesn’t really matter) and deploy some Kubernetes manifests to the newly provisioned cluster?

we deploy EKS clusters using this TF component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/cluster

and deploy all EKS releases similar to https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/cert-manager

using the cloudposse/helm-release/aws TF module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cert-manager/main.tf#L10

and for the remote state, using the cloudposse/stack-config/yaml//modules/remote-state module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cert-manager/remote-state.tf#L1

the remote state is done in TF, and in Atmos we just configure the variables for the remote-state module

note that instead of using the remote-state TF module, you could prob use the atmos.Component template function (but we did not test it with the EKS components since all of them are using the remote-state TF modules now)

cool, thanks for sharing, i will check out these resources.

so what i’m after really is how to provide kube host, token and cert from the cluster component to another component.

We do this in our EKS components


I’m not personally familiar with the details

thanks!

what is the recommended way, the atmos.Component template function or something else maybe?

(note, in Cloud Posse components, we rely more on terraform data sources than atmos.Component)

i see


Hi all - how do you usually handle replacing an older version of an atmos component with a new one? For example, I have an existing production EKS cluster defined in the “cluster” component in my “acme-production.yaml” stack file. I want to replace this with a new cluster component with different attributes.
I looked into using the same stack file and adding a new component called “cluster_1”, but that breaks some of my reusable catalog files that have the component name set to “cluster”. I know I can also just create a new stack file but that approach also seems not ideal.
Any advice is appreciated!


This PR is WIP
2024-10-22

is it possible to reference a list with the atmos.Component function? it seems the generated tfvars will be string instead of list:
...
components:
  terraform:
    foo:
      vars:
        bar: '{{ (atmos.Component "baz" .stack).outputs.mylist }}'
...

See this workaround
I was able to work around issues with passing more complex objects between components by changing the default delimiter for templating
Instead of the default delimiters: ["{{", "}}"], use delimiters: ["'{{", "}}'"].
It restricts doing some things like having a string prefix or suffix around the template, but there are other ways to handle this.
vars:
  vpc_config: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_attrs) }}'
works with an object like
output "vpc_attrs" {
  value = {
    vpc_id          = "test-vpc-id"
    subnet_ids      = ["subnet-01", "subnet-02"]
    azs             = ["us-east-1a", "us-east-1b"]
    private_subnets = ["private-subnet-01", "private-subnet-02"]
    intra_subnets   = ["intra-subnet-01", "intra-subnet-02"]
  }
}

We’re also working towards an alternative implementation we hope to release this week that will improve the DX and avoid templates.

thanks!

trying that toRawJson thing but it still seems I got strings. maybe I’m doing it wrong.

do you see the tricky thing we’re doing with the delimiters?

The delimiter is '{{ not {{, and }}' not }}

This way, the Go template engine removes the '{{ and }}' and replaces it with the raw JSON

without the ' in the delimiter, the JSON gets encoded inside of a string, and does nothing
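A minimal Go sketch of why the quote-shifted delimiters matter (hedged: toJson here is a stand-in for Sprig’s toRawJson, and the data is made up):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// render executes a YAML-ish template with the given delimiters.
// toJson stands in for Sprig's toRawJson used in Atmos templates.
func render(left, right string) string {
	funcs := template.FuncMap{"toJson": func(v any) string {
		b, _ := json.Marshal(v)
		return string(b)
	}}
	src := `vpc_config: '{{ toJson .outputs }}'`
	tpl := template.Must(template.New("t").Delims(left, right).Funcs(funcs).Parse(src))

	data := map[string]any{"outputs": map[string]any{"subnet_ids": []string{"subnet-01", "subnet-02"}}}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Default delimiters: the JSON is rendered inside the single quotes,
	// so YAML parses the value as one big string.
	fmt.Println(render("{{", "}}"))
	// prints: vpc_config: '{"subnet_ids":["subnet-01","subnet-02"]}'

	// Quote-shifted delimiters: the engine consumes the quotes as part of
	// the action, leaving raw JSON (a YAML flow value) in the output.
	fmt.Println(render("'{{", "}}'"))
	// prints: vpc_config: {"subnet_ids":["subnet-01","subnet-02"]}
}
```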

ok, I see. and couldn’t that cause issues if I change the default delimiters?

Your mileage may vary.

The forthcoming YAML explicit types will be a much better solution, not requiring these types of workarounds.

e.g.
components:
  terraform:
    foo:
      vars:
        bar: !terraform.outputs "baz" "mylist"

nice! any eta on that?


But it could slip. Anyways, it’s days not weeks away.

sounds good thx

hey all,
is there a way to edit where terraform init -reconfigure looks for AWS credentials? I want to select an AWS profile name dynamically via my CLI flags and without hardcoding it into the terraform source code.
My current non-dynamic solution is setting a local env variable with the AWS profile name, and atmos picks that up fine and finds my credentials. But is there a way to configure atmos.yaml so that terraform init -reconfigure looks for the AWS profile in a flag in my CLI command, such as -s <stackname>, where the stackname matches my AWS profile name?
so far it doesn’t look to have an option like that.

Are you familiar with atmos backend generation? That’s how we do it.

https://atmos.tools/quick-start/advanced/configure-terraform-backend/#provision-terraform-s3-backend
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Doesn’t have to be S3 (that’s just an example)

The backend configuration supports a profile name, so that’s how we do it.

# <https://developer.hashicorp.com/terraform/language/backend/s3>
terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      profile: "...."

we use aws-vault and a Makefile, if that’s helpful

2024-10-23

When working in a monorepo for Atmos with various teams organized under stacks/orgs/acme/team1, team2, etc., will the affected stacks GitHub Action detect changes in other teams’ stacks? At times, we only want to plan/apply changes to our team’s stacks and not those of other teams.

That makes sense… Yes, I think something like that is possible.

looking it up

@Igor Rodionov I can’t find the setting where we can pass a jq filter

I thought we had the option to filter affected stacks that way

is it possible to get the path of a stack? for example:
...
components:
  terraform:
    kubeconfig:
      vars:
        filename: '{{ .stacks.base_path }}/{{ .stack }}/kubeconfig'
...

I believe if you run atmos describe stacks you can see the complete data model. Anything in that structure should be accessible. I believe you will find one that represents the file, then take the dirname of that. Gomplate probably provides a dirname function.
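A hedged sketch of that dirname idea (assumptions to verify against your own atmos describe component output: the atmos_stack_file field and Gomplate’s path.Dir function being available in the template context):

```yaml
components:
  terraform:
    kubeconfig:
      vars:
        # hypothetical: derive the directory of the stack manifest that
        # defines this component, then place the kubeconfig next to it
        filename: '{{ path.Dir .atmos_stack_file }}/kubeconfig'
```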

@Andriy Knysh (Cloud Posse) may have another idea

use atmos describe component

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

in the output, you will see
component_info - a block describing the Terraform or Helmfile components that the Atmos component manages. The component_info block has the following sections:
component_path - the filesystem path to the Terraform/OpenTofu or Helmfile component
component_type - the type of the component (terraform or helmfile)
terraform_config - if the component type is terraform, this section describes the high-level metadata about the Terraform component from its source code, including variables, outputs and child Terraform modules (using a Terraform parser from HashiCorp). The file names and line numbers where the variables, outputs and child modules are defined are also included. Invalid Terraform configurations are also detected, and in case of any issues, the warnings and errors are shown in the terraform_config.diagnostics section

you would use component_path to get the path to the Terraform component for the Atmos component

@Kalman Speier, everything that you see in the outputs of the atmos describe component command (https://atmos.tools/cli/commands/describe/component/#output) can be used in the Go templates
Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

In general we don’t recommend putting non stack configs in the stacks folder

thanks! i will check that out.

it felt natural to have a kubeconfig file per stack, but maybe you’re right, I will reconsider where to write those files

it does not need to go to the component folders, it can be anywhere

look at this custom Atmos command for example https://github.com/cloudposse/atmos/blob/main/examples/tests/atmos.yaml#L223
- name: set-eks-cluster

yeah, i was thinking on that to keep the stacks dirs clean, maybe i will create a kube folder

# If a custom command defines 'component_config' section with 'component' and 'stack',
# Atmos generates the config for the component in the stack
# and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
# exposing all the component sections (which are also shown by 'atmos describe component' command)
component_config:
  component: "{{ .Arguments.component }}"
  stack: "{{ .Flags.stack }}"
env:
  - key: KUBECONFIG
    value: /dev/shm/kubecfg.{{ .Flags.stack }}-{{ .Flags.role }}
steps:
  - >
    aws
    --profile {{ .ComponentConfig.vars.namespace }}-{{ .ComponentConfig.vars.tenant }}-gbl-{{ .ComponentConfig.vars.stage }}-{{ .Flags.role }}
    --region {{ .ComponentConfig.vars.region }}
    eks update-kubeconfig
    --name={{ .ComponentConfig.vars.namespace }}-{{ .Flags.stack }}-eks-cluster
    --kubeconfig="${KUBECONFIG}"
    > /dev/null
  - chmod 600 ${KUBECONFIG}
  - echo ${KUBECONFIG}

this is what we usually do ^

2024-10-24
2024-10-28

Anything on the roadmap for org-wide stacksets natively supported as an upstream component? These are very useful because the stackset auto-deploys infrastructure when a new account is created.
For example, the aws-team-roles component could be converted to a stackset and get deployed without manually needing to provision the component.


Set custom User Agent for Terraform providers @Cerebrovinny (#729)
What
• The Atmos command now sets the TF_APPEND_USER_AGENT environment variable, which Terraform uses when interacting with the AWS provider.
• Users can also append custom values to TF_APPEND_USER_AGENT, allowing further flexibility in monitoring or tracking specific operations.
• If no user-defined value is provided, the system will automatically set atmos {currentVersion} as the default.
Why
• Add a customer-specific user agent for Terraform operations. This enhancement ensures that Atmos-driven actions are identifiable and distinct from other operations.
• Users will be able to differentiate and monitor Atmos-initiated actions within AWS, facilitating better tracking, logging, and troubleshooting.
WIP: Document helmfile, template imports @osterman (#741)
what
• Document options for helmfile
• Update helmfile demo to use options
• Document versioning of components
Fix example demo-context @Cerebrovinny (#743)
what
• Running atmos in demo-context folder causes the code to process all stack configurations, including the catalog stacks
fix atmos version @haitham911 (#735)
what
• atmos version should work regardless if the stack configs are provided
2024-10-29

Just a question as a more junior atmos/terraformer. I am prepping to deploy some terraform stacks to GovCloud East. My backend sits in West right now. I took this morning to create an east “global” yaml that references my West backend and ARNs, but configures the East region. I tested a small dev S3 module and it deployed in the org, in East, in the account it’s supposed to be in. I’m wondering, from a design perspective, whether these backends should get split off. It would seem easier if I can just leverage what I already have, but I’m not sure what any negatives to doing this are. Hope this question makes sense, just trying to strategize with regards to my backend and atmos.

For reference for non-govcloud users - this would just be a separate region, same account.

Note my codes all in one repo right now as well.

@Ryan regarding splitting backends, there are a few possible ways to do it (depending on security requirements, access control, audit, etc.)

• One backend per Org (e.g. provisioned in the root account). All other accounts (e.g. dev, staging, prod) use the same backend

• Separate backend per account (dev, staging, prod). You need to manage many backends and provision them. This is used for security reasons to completely separate the accounts (they don’t share anything)

Yea I might be overthinking, just trying to strategize before I build in east

It’s just me primarily and likely one person I train to run it

• A combination of the above - e.g. one backend for dev and staging, a separate backend for prod

we use all the above depending on many factors like security, access control, etc. - there is no one/best way, it all depends on your requirements

(all of them are easily configured with Atmos, let us know if you need any help on that)

thank you andriy, I honestly wasn’t sure if I could keep everything all within one west db and s3, first time poking this thing over to east

idk if we need to get that crazy with separation of duties or new roles for east

whoops wrong thread

Hi, I’m trying to deploy Spacelift components following these instructions: https://docs.cloudposse.com/layers/spacelift/?space=tenant-specific&admin-stack=managed
Everything seems to be ok until I try to deploy the plat-gbl-spacelift stack. I get:
│ Error:
│ Could not find the component 'spacelift/spaces' in the stack 'plat-gbl-spacelift'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
│
│ with module.spaces.data.utils_component_config.config[0],
│ on .terraform/modules/spaces/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
Any ideas?

@Michal Tomaszek did you add the component ‘spacelift/spaces’ in the stack ‘plat-gbl-spacelift’?

Spaces must be provisioned first before provisioning Spacelift stacks

the component spacelift/spaces needs to be added to the Atmos stack and provisioned
(the error above means that the component is not added to the Atmos stack and Atmos can’t find it. It’s a dependency of the Spacelift stack component)

spacelift/spaces is already deployed before in the root-gbl-spacelift stack. I can see both root and plat spaces in the Spacelift dashboard. so, deployment of these:
atmos terraform deploy spacelift/spaces -s root-gbl-spacelift
atmos terraform deploy spacelift/admin-stack -s root-gbl-spacelift
went fine.
provisioning of:
atmos terraform deploy spacelift/admin-stack -s plat-gbl-spacelift
results in the issue I described. if you look at the Tenant-Specific tab on the website (section https://docs.cloudposse.com/layers/spacelift/#spaces), it actually doesn’t import catalog/spacelift/spaces. if I do this, it eventually duplicates the plat space as it’s already created in the root-gbl-spacelift stack.

i see what you are saying. I think the docs don’t describe the steps in all the details. Here’s a top-level overview:
• Spacelift has the root space that is always present. atmos terraform deploy spacelift/spaces -s root-gbl-spacelift does not provision it, it configures it

• In each account, you need to provision child spaces (under root). You need to configure and execute atmos terraform deploy spacelift/spaces -s plat-gbl-spacelift

• After having the child spaces in the plat tenant, you can provision Spacelift stacks in plat that use the plat space(s)

in Spacelift, there is a hierarchy of spaces (starting with the root),
and a hierarchy of admin stacks

those are related, but separate concepts

the admin stack in root
is the root of the stack hierarchy, all other admin stacks are managed by the root admin stack (note that the child admin stacks can be provisioned in the same or diff environments (e.g. diff tenant)

each admin and child stack belongs to a space (root or child)

if I do this, it eventually duplicates the plat space as it’s already created in the root-gbl-spacelift stack
this is prob some omission in the docs or in config. if you don’t find the issue, you can DM me your config to take a look

ok, I’ll give this a try and reach out in case of issues. thanks for help!

Hello, our setup is each AWS account has its own state bucket and DynamoDB table. I’m using a role in our identity account that authenticates via GitHub OIDC and can assume roles in target accounts. My challenge is with the GitHub Action for “affected stacks”, how can I configure Atmos to assume the correct role in each target account when it runs? Any guidance would be much appreciated!

there are usually two roles:
• For the terraform aws provider (with permissions to provision the AWS resources)
• For the backend, with permissions to access the S3 bucket and DynamoDB table

when you run an atmos command like describe affected or terraform plan, the identity role needs to have permissions to assume the above two roles

then, in Atmos manifests, you can configure the two roles in YAML files


• Roles for aws providers - two ways of doing it:

- If you are using the terraform components <https://github.com/cloudposse/terraform-aws-components/tree/main/modules>, then each one has a providers.tf file (e.g. https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/providers.tf). It also uses the module "iam_roles" module to read terraform roles per account/tenant/environment (those roles can and should be different)
provider "aws" {
  region = var.region
  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name
  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

- Similar things can be configured in Atmos manifests by using the providers section https://atmos.tools/core-concepts/components/terraform/providers/#provider-configuration-and-overrides-in-atmos-manifests
Configure and override Terraform Providers.

depending on what you are using and how you want to configure it, you can use one way or the other (or both)

the idea is to have a way to get (different) IAM roles to access the backends and AWS resources

ok.. I’ll give these options a try. I added the role_arn to backend.tf - it seems to have accessed the first account fine, then errored out on the second account.
Wrote the backend config to file:
components/terraform/eks/otel-collectors/backend.tf.json
Executing 'terraform init eks/otel-collectors -s core-site-use1-prod'
template: describe-stacks-all-sections:17:32: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Error: Process completed with exit code 1.

can describe affected run with state buckets in each target account, or is it expecting a single state bucket for all accounts?

@jose.amengual might have solved this in his forthcoming PR

To have a gitops role configured per stack

what
This is based on #90 that @goruha was working on.
• Replace the describe config for cloudposse/github-action-atmos-get-setting
• Replace If statements to check for azure repository type
• Add azure blob storage and cosmos
• Add cache parameter to enable or disable caching inside the action
• Add pr-comment parameter to allow the user to get the current summary and a PR comment if they want to.
• Updated docs and Tests.
why
To support azure and better config settings
references

His PR references Azure, but the solution is not azure specific

With this change all the gitops settings can be defined in stack configs that extend atmos.yml


@Andriy Knysh (Cloud Posse) the GitHub OIDC portion is handled by the action and not atmos or terraform

@Igor Rodionov needs to give the final to get this merged

I think there are some other PRs to other actions too. @jose.amengual is doing the same thing with a state bucket and dynamo per account.

Looking forward to this! Thanks @Erik Osterman (Cloud Posse)

I see the PR has been merged. Is there an example of how to use a GitOps role configured for each stack?

settings:
  github:
    actions_enabled: true
  integrations:
    github:
      gitops:
        terraform-version: 1.9.5
        infracost-enabled: false
        artifact-storage:
          plan-repository-type: "azureblob"
          blob-account-name: "pepe"
          blob-container-name: "pepe-state"
        azure:
          ARM_CLIENT_ID:
          ARM_SUBSCRIPTION_ID:
          ARM_TENANT_ID:
        retry: true

that is a stack file that I import

that gets deep-merged to all endpoints on the stack

obviously for you it will be aws, so role_arn or whatever you need to use there

and it is a free map after:
settings:
  github:
    actions_enabled: true
  integrations:
2024-10-30

Add support for remote validation schemas @haitham911 (#731)
What
• Add support for remote schemas in atmos for manifest validation
• Updated schemas configuration to allow referencing remote schema files, e.g.:
schemas:
  atmos:
    manifest: "https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
Why
• This reduces redundancy; the schema file can be referenced remotely.
Do not process stack configs when executing the command atmos vendor pull and the --stack flag is not specified @haitham911 (#740)
what
• Do not process stack configs when executing the command atmos vendor pull and the --stack flag is not specified
why
• Atmos vendor should not require stack configs if the stack flag is not provided


Add atmos docs command @RoseSecurity (#751)
what
• Add atmos docs <component> CLI command
• Render component documentation utilizing the atmos docs <component> command
why
• Improve user experience when navigating component documentation
testing
• Ensure existing functionality of the docs command is not affected
• Tested without valid Atmos Base Path
• Tested with nonexistent component name
• Tested with valid component name
• Tested with invalid component name
• Tested with nested component names
references

hello guys
I am playing with Atmos and GitHub for my project
currently, I am having a problem with posting comments to GitHub pull requests with atmos terraform plan <stack> -s #####
I can’t parse the output into something readable; this is relative to terraform -no-color
my question is:
can I run atmos terraform plan with -no-color?

Atmos should send the command-line flags to Terraform, so this should work
atmos terraform plan <component> -s <stack> -no-color

did you try it?

you can also use double-dash like so
double-dash -- can be used to signify the end of the options for Atmos and the start of the additional native arguments and flags for the terraform commands. For example:
atmos terraform plan <component> -s <stack> -- -refresh=false
atmos terraform apply <component> -s <stack> -- -lock=false

using double-dash, you can specify any arguments and flags, and they will be sent directly to terraform
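The split at the double-dash works the way most CLIs handle it; here is a minimal Python sketch of that behavior (illustrative only, not Atmos's actual argument parser):

```python
def split_args(argv):
    """Split a command line at the first "--": everything before it is
    parsed by the wrapper (Atmos), everything after it is passed to
    terraform verbatim. (Illustrative sketch, not the Atmos parser.)"""
    if "--" in argv:
        i = argv.index("--")
        return argv[:i], argv[i + 1:]
    return argv, []

atmos_args, tf_args = split_args(
    ["plan", "vpc", "-s", "tenant1-ue2-dev", "--", "-refresh=false"])
print(atmos_args)  # ['plan', 'vpc', '-s', 'tenant1-ue2-dev']
print(tf_args)     # ['-refresh=false']
```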

Use these subcommands to interact with terraform.

@Vitalii also, did you see the GH actions that are already supported and work with Atmos
Use GitHub Actions with Atmos

@Andriy Knysh (Cloud Posse)
Thank you for your help
I don't know why this command didn't work yesterday on my laptop,
but after your answer it started working:
atmos terraform plan sns -s bm-dev-ue1 -no-color
and yes I am using GH action with Atmos

and one more question
I'd like to run atmos in this way:
atmos terraform deploy sns -s bm-dev-ue1 --from-plan
atmos says that
Error: stat bm-dev-ue1-sns.planfile: no such file or directory
I assume that atmos is trying to find this ^^^ file in the
component/sns directory
so my question is:
how do I configure or run Atmos so that the command
atmos terraform plan sns -s bm-dev-ue1 saves the plan automatically to the
components/sns folder without -out=
sorry for tagging

Atmos supports these two flags
--from-plan If the flag is specified, use the planfile previously generated by Atmos instead of
generating a new planfile. The planfile name is in the format supported by Atmos
and is saved to the component's folder
--planfile The path to a planfile. The --planfile flag should be used instead of the planfile
argument in the native terraform apply <planfile> command

you need to first run atmos terraform plan sns -s bm-dev-ue1, then atmos terraform apply sns -s bm-dev-ue1 --from-plan

the planfile generated by terraform plan
will be used instead of creating a new planfile

Atmos automatically uses the component directory to save the planfiles

you need to run these two commands:
atmos terraform plan sns -s bm-dev-ue1
atmos terraform deploy sns -s bm-dev-ue1 --from-plan
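Judging from the error above, the planfile name follows the pattern <stack>-<component>.planfile and is saved in the component's folder. A small Python sketch of that naming (the components/terraform directory layout here is an assumption based on typical Atmos projects, not Atmos internals):

```python
import os

def planfile_path(base_path, component, stack):
    # Name format matches the error message above: <stack>-<component>.planfile.
    # The components/terraform layout is assumed, not taken from Atmos source.
    return os.path.join(base_path, "components", "terraform", component,
                        f"{stack}-{component}.planfile")

# e.g. components/terraform/sns/bm-dev-ue1-sns.planfile on POSIX systems
print(planfile_path(".", "sns", "bm-dev-ue1"))
```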

@Vitalii ^


have a good day

Hello guys, one more question: how do I post Atmos outputs (like terraform plan) to a PR branch in a human-readable way?
2024-10-31

hey guys! just a quick PR for the terraform-aws-config
module https://github.com/cloudposse/terraform-aws-config/pull/124
what
• use the enabled boolean in the managed_rules variable
why
• aws_config_config_rule resources were still being created despite enabled being set to false

Best place for this is #pr-reviews

Update Atmos manifests validation JSON schema. Improve help for Atmos commands. Deep-merge the settings.integrations.github section from stack manifests with the integrations.github section from atmos.yaml @aknysh (#755)
what
• Update Atmos manifests validation JSON schema
• Improve help and error handling for Atmos commands
• Deep-merge the settings.integrations.github section from Atmos stack manifests with the integrations.github section from atmos.yaml
why
• In the Atmos manifests validation JSON schema, don't "hardcode" the s3 backend section fields; allow it to be a free-form map so the user can define any configuration for it. The Terraform s3 backend can change and can be different for Terraform and OpenTofu. Also, the other backends (e.g. GCP, Azure, remote) are already free-form maps in the validation schema
• When Atmos commands are executed w/o specifying a component and a stack (e.g. atmos terraform plan, atmos terraform apply, atmos terraform clean), print help for the command w/o throwing errors that a component and stack are missing
• Deep-merging the settings.integrations.github section from Atmos stack manifests with the integrations.github section from atmos.yaml allows configuring the global settings for integrations.github in atmos.yaml, and then overriding them in the Atmos stack manifests in the settings.integrations.github section. Every component in every stack will get settings.integrations.github from atmos.yaml. You can override any field in stack manifests. Atmos deep-merges the integrations.github values from all scopes in the following order (from lowest to highest priority):
• integrations.github section from atmos.yaml
• stack-level settings.integrations.github configured in Atmos stack manifests per Org, per tenant, per region, per account
• base component(s) level settings.integrations.github section
• component-level settings.integrations.github section
For example:
atmos.yaml
integrations:
  # GitHub integration
  github:
    gitops:
      opentofu-version: 1.8.4
      terraform-version: 1.9.8
      infracost-enabled: false
stacks/catalog/vpc.yaml
components:
  terraform:
    vpc:
      metadata:
        component: vpc
      settings:
        integrations:
          github:
            gitops:
              infracost-enabled: true
              test_enabled: true
Having the above config, the command atmos describe component vpc -s tenant1-ue2-dev returns the following deep-merged configuration for the component's settings.integrations.github section:
settings:
  integrations:
    github:
      gitops:
        infracost-enabled: true
        opentofu-version: 1.8.4
        terraform-version: 1.9.8
        test_enabled: true
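The merge semantics above can be sketched in Python, using the gitops values from the example (a minimal illustration of deep-merging with later sources winning, not the Atmos implementation):

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicts,
    nested dicts are merged key by key."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# integrations.github from atmos.yaml (lowest priority)
from_atmos_yaml = {"gitops": {"opentofu-version": "1.8.4",
                              "terraform-version": "1.9.8",
                              "infracost-enabled": False}}
# settings.integrations.github from the vpc stack manifest (highest priority)
from_stack_manifest = {"gitops": {"infracost-enabled": True,
                                  "test_enabled": True}}

merged = deep_merge(from_atmos_yaml, from_stack_manifest)
print(merged)
# gitops ends up with all four keys, infracost-enabled overridden to True
```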
Improve custom command error message for missing arguments @pkbhowmick (#752)
what
• Improved the custom command error message for missing arguments, including the name of the argument, for better user understanding.
why
• If a custom command expects an argument, it should say so with the argument's name.
Fix helmfile demo @osterman (#753)
what
• Enable templates in atmos.yaml so we can use the env function to get the current working directory
• Do not default KUBECONFIG to /dev/shm, as /dev/shm is a directory and the kube config should be a YAML file
• Fix stack includes
• Set KUBECONFIG from components.helmfile.kubeconfig_path if set (previously it was only set if use_eks was true)
why
• Demo was not working