#atmos (2024-10)

2024-10-01

Miguel Zablah avatar
Miguel Zablah

hey I’m debugging some templates and I noticed that atmos is not updating the error correctly. does it cache the template result? if so, is there a way to clear this?

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah Atmos caches the results of the atmos.Component functions only for the same component and stack, and only for one command execution. If you run atmos terraform ... again, it will not use the cached results anymore

Miguel Zablah avatar
Miguel Zablah

Thanks, it looks like I had an error in another file, that is why I thought it was maybe the cache. Thanks!

2
Dennis DeMarco avatar
Dennis DeMarco

I’m having an issue. I’m using atmos to autogenerate the backend, and I’m getting an Error: Backend configuration changed. I am not quite sure why, as I don’t see Atmos generating the backend

Dennis DeMarco avatar
Dennis DeMarco

hmm, atmos terraform generate backend is not creating backend files. Any tips?

jose.amengual avatar
jose.amengual

run atmos validate stacks to see if you have invalid YAML

Dennis DeMarco avatar
Dennis DeMarco

all successful

jose.amengual avatar
jose.amengual

and make sure the base_path is correct and the atmos CLI can find the atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for backends to be auto-generated, you need to configure the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  • In atmos.yaml
    components:
      terraform:
        # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
        auto_generate_backend_file: true
    
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to have this config

terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      role_arn: "arn:aws:iam::<your account ID>:role/<IAM Role with permissions to access the Terraform backend>"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the defaults for the Org (e.g. in stacks/orgs/acme/_defaults.yaml) if you have one backend for the entire Org

Dennis DeMarco avatar
Dennis DeMarco

What is strange it was working, this is the error.

Dennis DeMarco avatar
Dennis DeMarco

atmos terraform generate backend wazuh -s dev --logs-level debug template: all-atmos-sections35: executing “all-atmos-sections” at <atmos.Component>: error calling Component: exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require migrating existing state.

If you wish to attempt automatic migration of the state, use “terraform init -migrate-state”. If you wish to store the current configuration with no changes to the state, use “terraform init -reconfigure”.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to remove the .terraform folder and run it again

Dennis DeMarco avatar
Dennis DeMarco

Same error with the .terraform folder removed

Dennis DeMarco avatar
Dennis DeMarco

Ahh ran with log_level trace, and found the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what was the issue?

Dennis DeMarco avatar
Dennis DeMarco

The component had references to vars in other components

Dennis DeMarco avatar
Dennis DeMarco

removed .terraform in those other components

Dennis DeMarco avatar
Dennis DeMarco

TY for trace

Dennis DeMarco avatar
Dennis DeMarco

I should make a script to just clean-slate all those .terraform directories

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, we should have that as a built-in command.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dennis DeMarco for now, instead of a script add a custom command to atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
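
As a rough sketch of what such a custom command might look like in atmos.yaml (the command name and the find invocation here are illustrative, not something given in this thread):

commands:
  - name: clean
    description: "Remove all .terraform directories under components/terraform"
    steps:
      - find components/terraform -type d -name ".terraform" -prune -exec rm -rf {} +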

2024-10-02

Miguel Zablah avatar
Miguel Zablah

Hi! I’m having an issue with this workflow:

cloudposse/github-action-atmos-affected-stacks

where it will do a plan for all stacks even when I have some disabled stacks in the CI/CD, and since one of the components is dependent on another, it fails with the error below.

I get the same error when running this locally:

atmos describe affected --include-settings=false --verbose=true

is there a way to skip a stack or mark it as ignore?

this is the error:

template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference

it’s complaining about this concat I do:

'{{ concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets | default (list)) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets | default (list)) | toRawJson }}'

but this works when vpc is applied; since this stack is not being used at the moment, it fails

any idea how to fix this?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah by “disabled component” do you mean you set enabled: false for it?

Miguel Zablah avatar
Miguel Zablah

Yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i see the issue. We’ll review and update it in the next Atmos release

Miguel Zablah avatar
Miguel Zablah

That will be awesome since this is a blocker for us now

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) any ETA on this?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Miguel Zablah We don’t have an ETA yet. But should be soon

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll try to do it in the next few days

Miguel Zablah avatar
Miguel Zablah

thanks!!

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) Is this the fix? Release - v1.89.0. If so, I have tried running it but I still get the same error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what error are you getting?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

must be something wrong with the Go template

Igor Rodionov avatar
Igor Rodionov

@Andriy Knysh (Cloud Posse) the problem is that the vpc component is not applied yet

Igor Rodionov avatar
Igor Rodionov

so there is no output from

(atmos.Component "vpc" .stack).outputs.vpc_private_subnets
Igor Rodionov avatar
Igor Rodionov

I do not know how we should handle that

Miguel Zablah avatar
Miguel Zablah

yes but I was thinking that since it had the:

terraform:
  settings:
    github:
      actions_enabled: false

it should get ignored, no?

Igor Rodionov avatar
Igor Rodionov

Github actions would

Igor Rodionov avatar
Igor Rodionov

but atmos still has to process the YAML properly

Igor Rodionov avatar
Igor Rodionov

so that would not solve your problem

Miguel Zablah avatar
Miguel Zablah

my issue is that github actions is not doing this since it does a normal:

atmos describe affected --include-settings=false --verbose=true
Igor Rodionov avatar
Igor Rodionov

atmos needs a valid configuration of all components on any call

Igor Rodionov avatar
Igor Rodionov

have you tried that on local?

Miguel Zablah avatar
Miguel Zablah

yes and I get the same error bc that stack is not applied

Miguel Zablah avatar
Miguel Zablah

but that is intentional since that stack is mostly for quick test/debug

Igor Rodionov avatar
Igor Rodionov

atmos needs to parse all stack configurations as valid, then it operates on the stack you need

Igor Rodionov avatar
Igor Rodionov

if there is any error or misconfiguration in the YAML, atmos will fail on any command

Miguel Zablah avatar
Miguel Zablah

well, there is no error in the configuration, the issue is that the stack is not applied

Igor Rodionov avatar
Igor Rodionov

but that’s the error

Miguel Zablah avatar
Miguel Zablah

but I guess maybe I can mock the values with some go template or something to ignore this

Miguel Zablah avatar
Miguel Zablah

will that work?

Igor Rodionov avatar
Igor Rodionov

@Andriy Knysh (Cloud Posse) does our templating support sprig functions?

1
Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah try this one

{{ concat (default (list) (atmos.Component "vpc" .stack).outputs.vpc_private_subnets) (default (list) (atmos.Component "vpc" .stack).outputs.vpc_public_subnets) | toRawJson }}
Igor Rodionov avatar
Igor Rodionov

?

Miguel Zablah avatar
Miguel Zablah

I get the same error:

template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference
Igor Rodionov avatar
Igor Rodionov

ok

Igor Rodionov avatar
Igor Rodionov

let’s try another one

Miguel Zablah avatar
Miguel Zablah

like another version of this?

Igor Rodionov avatar
Igor Rodionov
{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (get . "vpc_private_subnets") (get . "vpc_public_subnets") | toRawJson }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos templates support sprig and gomplate functions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not supposed to work on a component that is not applied yet, and Atmos doesn’t know if it was applied or not

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whatever “terraform output” returns for such a component, we need to handle that in the template

Miguel Zablah avatar
Miguel Zablah

same error:

template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (get . "vpc_private_subnets") (get . "vpc_public_subnets")>: error calling concat: Cannot concat type string as list
Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) how about mocking or default values?

Igor Rodionov avatar
Igor Rodionov

that’s different error

Igor Rodionov avatar
Igor Rodionov

sec

Miguel Zablah avatar
Miguel Zablah

aah that is right

Igor Rodionov avatar
Igor Rodionov
{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (coalesce (get . "vpc_private_subnets") (list)) (coalesce (get . "vpc_public_subnets") (list)) | toRawJson }}
Igor Rodionov avatar
Igor Rodionov

try this one ^

Miguel Zablah avatar
Miguel Zablah

and we are back:

template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (coalesce (get . "vpc_private_subnets") (list)) (coalesce (get . "vpc_public_subnets") (list))>: error calling concat: runtime error: invalid memory address or nil pointer dereference
Igor Rodionov avatar
Igor Rodionov

just to test

Igor Rodionov avatar
Igor Rodionov
{{ default (dict) (atmos.Component "vpc" .stack).outputs | concat (coalesce (get . "vpc_private_subnets") (list "test1")) (coalesce (get . "vpc_public_subnets") (list "test2")) | toRawJson }}
Miguel Zablah avatar
Miguel Zablah

I get a different error now:

template: describe-stacks-all-sections:78:82: executing "describe-stacks-all-sections" at <concat (coalesce (get . "vpc_private_subnets") (list "test1")) (coalesce (get . "vpc_public_subnets") (list "test2"))>: error calling concat: Cannot concat type map as list
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah is that not provisioned component enabled or disabled? If it’s enabled, set enabled: false to disable it and try again

Igor Rodionov avatar
Igor Rodionov
{{ concat (coalesce (get (default (dict) (atmos.Component "vpc" .stack).outputs) "vpc_private_subnets") (list "test1")) (coalesce (get (default (dict) (atmos.Component "vpc" .stack).outputs) "vpc_public_subnets") (list "test2")) | toRawJson }}
Miguel Zablah avatar
Miguel Zablah

sorry had to run an errand

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov I got this error:

template: describe-stacks-all-sections:74:26: executing "describe-stacks-all-sections" at <concat ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) ((atmos.Component "vpc" .stack).outputs.vpc_public_subnets)>: error calling concat: runtime error: invalid memory address or nil pointer dereference

so same error

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) where do I set this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you add enabled: false in the YAML config for your component

components:
   terraform:
     my-component:
       vars:
         enabled: false
Miguel Zablah avatar
Miguel Zablah

but this will only work for components that support that, right? or will atmos actually ignore it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, the enabled variable only exists in the components that support it (mostly all CloudPosse components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos sends the vars section to terraform

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) I’m using the aws vpc module, so I don’t think it will do much here. Also, this will only return empty outputs, right? so this might still not work, right?

2024-10-03

Drew Fulton avatar
Drew Fulton

Good morning, I’m new here but this looks like a great community. I’ve been using various Cloud Posse modules for terraform for a while but am now trying to set up a new AWS account from scratch to learn the patterns for the higher-level setup. I’ve run into a problem and am hoping for some help. I feel like it’s probably just a setting somewhere but for the life of me I can’t find it.

So I have been working through the Cold Start and have gotten through the account setup successfully but running the account-map commands is resulting in errors. I’ll walk through the steps I’ve tried in case my tweaks have confused the root issue… For reference, I am using all the latest versions of the various components mentioned and pulled them in again just before posting this.

  1. When I first ran the atmos terraform deploy account-map -s core-gbl-root command, I got an error that it was unable to find a stack file in the /stacks/orgs folder. That was fine as I wasn’t using that folder, but in the error message it was clear that it was using a default atmos.yaml (this one) that includes orgs/**/* in the include_paths and not the one that I have been using on my machine. I’ve spent a long time trying to get it to use my local yaml and finally gave up and just added an empty file in the orgs folder to get past that error. Then I got to a new error…
  2. Now if I run the plan for account-map I get what looks like a correct full plan and then a new error at the end:
    ╷
    │ Error: 
    │ Could not find the component 'account' in the stack 'core-gbl-root'.
    │ Check that all the context variables are correctly defined in the stack manifests.
    │ Are the component and stack names correct? Did you forget an import?
    │ 
    │ 
    │   with module.accounts.data.utils_component_config.config[0],
    │   on .terraform/modules/accounts/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
    │    1: data "utils_component_config" "config" {
    │ 
    ╵
    exit status 1
    

    If I run atmos validate component account -s core-gbl-root I get successful validations and the same with validating account-map.

I’ve tried deleting the .terraform folders from both the accounts and account-map components and re-run the applies but get the same thing.

I’ve run with both Debug and Trace logs and am not seeing anything that points to where this error may be coming from.

I’ve been at this for hours yesterday and a few more hours this morning and decided it was time to seek some help.

Thanks for any advice!

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) @Dan Miller (Cloud Posse)

Michal Tomaszek avatar
Michal Tomaszek

I think it could be related to the path of your atmos.yaml file, since you deal with remote-state and terraform-provider-utils Terraform provider. Check this: https://atmos.tools/quick-start/simple/configure-cli/#config-file-location

Drew Fulton avatar
Drew Fulton

I thought it might be that, but it is clearly registering my atmos.yaml file as it is in the main directory of my repo, and I’ve found atmos to respond to settings there (such as the include/exclude paths and log levels), but the other terraform providers aren’t picking it up. (EDIT! Just saw the bit at the bottom… exploring now)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Drew Fulton Terraform executes the provider code from the component directory (e.g. components/terraform/my-component). We don’t want to put atmos.yaml into each component directory, so we put it into one of the known places (as described in the doc) or we can use ENV vars, so both Atmos binary and the provider binary can find it

Drew Fulton avatar
Drew Fulton

Got it! Is there a best practice for selecting where to put it to prevent duplication?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for Docker containers (geodesic is one of them), we put it in rootfs/usr/local/etc/atmos/atmos.yaml in the repo, and then in the Dockerfile we do

COPY rootfs/ /
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so inside the container, it will be in /usr/local/etc/atmos/atmos.yaml, which is a known path for Atmos and the provider

1
Drew Fulton avatar
Drew Fulton

perfect! Thanks so much for the help

1
Drew Fulton avatar
Drew Fulton

and it works!

1
Samuel Than avatar
Samuel Than

hi all, I am brand new to atmos, and I was quite mind-blown that I had not discovered this tool yet. While still figuring my way through the documentation, I have a question about components and stacks.

If I as an ops engineer were to create and standardise my own components in a repository that stores all the standard “libraries” of components, is it advisable for the stacks and atmos.yaml to be in a separate repository?

Meaning, a developer would only need to declare the various components inside a stacks folder of their own project’s repository, i.e. only needing to write YAML files and not having to deal with/write terraform code.

We would then, during execution, have a github workflow that clones the core component repository into the project repository and completes the infra-related deployment. Is that something that is supported?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, centrally defined components is best for larger organizations that might have multiple infrastructure repositories

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, it can also be burdensome when iterating quickly to have to push all changes via an upstream git component registry
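
For that model, the project repository would typically carry a vendor.yaml that points at the central component repository. A minimal hedged sketch, where the repo URL and the vpc component are placeholders:

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: project-vendoring
spec:
  sources:
    - source: "github.com/acme/infra-components.git//components/terraform/vpc?ref={{ .Version }}"
      version: "main"
      targets:
        - "components/terraform/vpc"
      included_paths:
        - "**/*.tf"
        - "**/*.md"

A CI workflow in the project repository can then run atmos vendor pull before planning, which matches the flow described above.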

Samuel Than avatar
Samuel Than

yeah, thanks for the input. We’re not a large organization, so there will be some trade-offs we’ll have to consider if we try to limit the devs to only working with YAML. Still in the early stages of exploring the capabilities of atmos.

1

2024-10-04

Alcp avatar

atlantis integration question, I have this config in atmos.yaml

    project_templates:
      project-1:
        # generate a project entry for each component in every stack
        name: "{tenant}-{stage}-{environment}-{component}"
        workspace: "{workspace}"
        dir: "./components/terraform/{component}"
        terraform_version: v1.6.3
        delete_source_branch_on_merge: true
        plan_requirements: [undiverged]
        apply_requirements: [mergeable,undiverged]
        autoplan:
          enabled: true
          when_modified:
            - '**/*.tf'
            - "varfiles/{tenant}-{stage}-{environment}-{component}.tfvars.json"
            - "backends/{tenant}-{stage}-{environment}-{component}.tf"

the plan_requirements field doesn’t seem to have any effect on the generated atlantis.yaml

jose.amengual avatar
jose.amengual

there are a lot of recently added options that atmos might not recognize

jose.amengual avatar
jose.amengual

although

plan_requirements: [undiverged]
apply_requirements: [mergeable,undiverged]

I think they can only be declared in the server side repo config repo.yaml

jose.amengual avatar
jose.amengual

mmm no, they can be declared in the atlantis.yaml

jose.amengual avatar
jose.amengual

FYI, it is not recommended to allow users to override workflows, so it is much safer to configure workflows and repo options in the server-side config and leave the atlantis.yaml as simple as possible
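
A hedged sketch of what that server-side repo config (repos.yaml) might look like with the requirement fields moved there; the repo id pattern is a placeholder, and plan_requirements assumes an Atlantis version that supports it server-side:

repos:
  - id: /.*/
    plan_requirements: [undiverged]
    apply_requirements: [mergeable, undiverged]
    allowed_overrides: []
    allow_custom_workflows: false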

Alcp avatar

Yeah, it makes sense to have these values in the server-side repo config… I was trying some test scenarios and encountered the issue. In a related question about the ‘when_modified’ field in atlantis.yaml: I was trying to find some documentation about how it determines the modified files, is it just determined by whether the file is modified in the PR?

jose.amengual avatar
jose.amengual

the files modified in the PR, yes

jose.amengual avatar
jose.amengual

it’s a regex used for autoplan

Alcp avatar

@jose.amengual another issue I am running into with atlantis. I have a question about the ‘when_modified’ field in the repo-level config: we are using dynamic repo config generation and in the same way generating the var files for the project. The var files are not committed to the git repo, and since they’re not committed I believe when_modified causes an issue by not planning the project. Removing the when_modified field from the config doesn’t seem to help because of the default values. Do we have a way to ignore this field and just plan the projects, regardless of which files are modified in the project?

jose.amengual avatar
jose.amengual

I answered you on the atlantis slack

1

2024-10-05

John Polansky avatar
John Polansky

trying to create a super simple example of atmos + cloudposse/modules to see if they will work for our needs. I’m using the s3-bucket component, but when I do a plan on it, it prints out the terraform plan but then shows

│ Error: failed to find a match for the import '/opt/test/components/terraform/s3-bucket/stacks/orgs/**/*.yaml' ('/opt/test/components/terraform/s3-bucket/stacks/orgs' + '**/*.yaml')

I can’t make heads or tails of this error… there is no stacks/orgs folder under the s3-bucket module that I pulled in with atmos vendor pull

Thanks in advance

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please DM me your config, i’ll review it

2024-10-06

Samuel Than avatar
Samuel Than

Continuing from my understanding of the design pattern, based on this screenshot, can I ask a few questions:

  1. The infrastructure repository holds all the atmos components, stacks, and modules?
  2. The application repository only requires writing the taskdef.json for deployment into ECS?
  3. If there is additional infrastructure the application needs, e.g. s3, dynamodb, etc., is the approach to have the developer open a PR to the infrastructure repository first with the necessary “stack” information, prior to performing any application-type deployment?
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, they would open a PR and add the necessary components to the stack configuration.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…and prior to performing any application deployment that depends on it.

Samuel Than avatar
Samuel Than

Cool, I think I’m beginning to have a clearer understanding of the Atmos mindset… thanks!!

2024-10-07

2024-10-08

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey, I’m trying to execute a plan and I’m getting the following output:

% atmos terraform plan keycloak_sg -s deploy/dev/us-east-1

Variables for the component 'keycloak_sg' in the stack 'deploy/dev/us-east-1':
aws_account_profile: [redacted]
cloud_provider: aws
environment: dev
region: us-east-1
team: [redacted]
tfstate_bucket: [redacted]
vpc_cidr_blocks:
    - 172.80.0.0/16
    - 172.81.0.0/16
vpc_id: [redacted]

Writing the variables to file:
components/terraform/sg/-keycloak_sg.terraform.tfvars.json

Using ENV vars:
TF_IN_AUTOMATION=true

Executing command:
/opt/homebrew/bin/tofu init -reconfigure

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.70.0

OpenTofu has been successfully initialized!

Command info:
Terraform binary: tofu
Terraform command: plan
Arguments and flags: []
Component: keycloak_sg
Terraform component: sg
Stack: deploy/dev/us-east-1
Working dir: components/terraform/sg

Executing command:
/opt/homebrew/bin/tofu workspace select -keycloak_sg
Usage: tofu [global options] workspace select NAME

  Select a different OpenTofu workspace.

Options:

    -or-create=false    Create the OpenTofu workspace if it doesn't exist.

    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg


Executing command:
/opt/homebrew/bin/tofu workspace new -keycloak_sg
Usage: tofu [global options] workspace new [OPTIONS] NAME

  Create a new OpenTofu workspace.

Options:

    -lock=false         Don't hold a state lock during the operation. This is
                        dangerous if others might concurrently run commands
                        against the same workspace.

    -lock-timeout=0s    Duration to retry a state lock.

    -state=path         Copy an existing state file into the new workspace.


    -var 'foo=bar'      Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

    -var-file=filename  Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.
Error parsing command-line flags: flag provided but not defined: -keycloak_sg

exit status 1

goroutine 1 [running]:
runtime/debug.Stack()
        runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
        runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x105c70460, 0x14000b306e0})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
github.com/cloudposse/atmos/cmd.init.func17(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/cloudposse/atmos/cmd/terraform.go:33 +0x150
github.com/spf13/cobra.(*Command).execute(0x10750ef60, {0x14000853480, 0x4, 0x4})
        github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x10750ec80)
        github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
        github.com/cloudposse/atmos/cmd/root.go:88 +0x214
main.main()
        github.com/cloudposse/atmos/main.go:9 +0x1c

I’m not really sure what I can do since the error message suggests an underlying tofu/terraform error and not an atmos one. I bet my stack/component has something wrong but I’m not entirely sure why. The atmos.yaml is the same of the previous message I sent here yesterday. I’d appreciate any pointers

tretinha avatar
tretinha

I just now tried something that seemed to work. Since I’m using the “{stage}” name pattern inside atmos.yaml, I set a “vars.stage: dev” inside my dev.yaml stack file and it seemed to do the trick. Is this a correct pattern? Thanks!
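
The generated workspace name -keycloak_sg (an empty {stage} prefix) is consistent with that: once stage is defined, the name pattern can resolve. A minimal sketch of the change described above, with the surrounding file layout assumed:

# dev.yaml stack manifest (file location assumed)
vars:
  stage: dev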

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

2024-10-09

tretinha avatar
tretinha

Hey, on a setup like this:

import:
  - catalog/keycloak/defaults

components:
  terraform:
    keycloak_route53_zones:
      vars:
        zones:
          "[redacted]":
            comment: "zone made for the keycloak sso"
    keycloak_acm:
      vars:
        domain_name: [redacted]
        zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id }}'

My keycloak_acm component is failing to actually get the output of the one above it. Am I doing this fundamentally wrongly? The defaults.yaml being imported looks like this:

components:
  terraform:
    keycloak_route53_zones:
      backend:
        s3:
          workspace_key_prefix: keycloak-route53-zones
      metadata:
        component: route53-zones
    keycloak_acm:
      backend:
        s3:
          workspace_key_prefix: keycloak-acm
      metadata:
        component: acm
      depends_on:
        - keycloak_route53_zones
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you confirm that you see a valid zone_id as a terraform output of keycloak_route53_zones

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and please make sure the component is provisioned

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos.Component calls terraform output, so it must be in the state already

tretinha avatar
tretinha
atmos terraform output keycloak_route53_zones -s aws-dev-us-east-1

gives me

zone_id = {
  "redacted" = "redacted"
}

which is redacted but it is the zone subdomain as key and the id as value

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, so it’s returning a map for the zone id

tretinha avatar
tretinha

ah!

tretinha avatar
tretinha

so I guess it should be

zone_id: '{{ (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id.value }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

value is not correct

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you should get the value by the map key using Go templates

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
{{ index .YourMap "yourKey" }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
zone_id: '{{ index (atmos.Component "keycloak_route53_zones" .stack).outputs.zone_id "<key>" }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where the key is the "redacted" in your example

tretinha avatar
tretinha

Thanks! I’ll test this out, I appreciate the help

1
tretinha avatar
tretinha

It didn’t seem to work, and for this case I just passed the zone id directly (i.e. the actual value from the output). The error was that terraform complained that it should be less than 32 characters, which means that terraform understood my go template as an actual string and not a template. I ran it with atmos terraform apply component -s stack after a successful plan creation

tretinha avatar
tretinha

I passed the value directly because I don’t think the zone id will change in the future, but for other components this might be an issue for me. Maybe I should run it differently? I set the value exactly as suggested by Andriy
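
One thing worth checking when a Go template ends up passed to terraform as a literal string is whether template processing is enabled for stack manifests in atmos.yaml. This is only a guess at the cause, not a confirmed fix from this thread; a sketch of the relevant settings:

templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true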

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please send me your yaml config, i’ll take a look

tretinha avatar
tretinha

Sure!

2024-10-10

jose.amengual avatar
jose.amengual

Hello, me again……. are component settings arbitrary keys?

 
settings:
  pepe:
    does-not-like-sushi: true

can I do that?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

settings is a free form map

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s the same as vars, but free-form; it participates in all the inheritance, meaning you can define it globally, per stack, per base component, and per component, and then everything gets deep-merged into the final map
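
A small sketch of that deep merge (the prefers-tacos key is made up): a globally defined settings map and a component-level one both contribute to the final settings of the component:

# defined globally, e.g. in an imported defaults manifest
settings:
  pepe:
    does-not-like-sushi: true

# defined on the component
components:
  terraform:
    my-component:
      settings:
        pepe:
          prefers-tacos: true

# final deep-merged settings for my-component:
#   pepe:
#     does-not-like-sushi: true
#     prefers-tacos: true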

jose.amengual avatar
jose.amengual

cool, that is awesome and exactly what I need

1
jose.amengual avatar
jose.amengual

Is there a way to inherit metadata for all components without having to create an abstract component? something that all components should have

1
jose.amengual avatar
jose.amengual

imagine if this was added to something like atmos.yaml and CODEOWNERS allowed only a very few people to modify it

jose.amengual avatar
jose.amengual

like having Sec rules for all components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, metadata is not inherited, it’s per component

jose.amengual avatar
jose.amengual

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because in metadata you specify all the base components for your component
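
For reference, a hedged sketch of the usual pattern, where shared config lives in an abstract base component and each component opts in through its own metadata (the names here are illustrative):

components:
  terraform:
    vpc-defaults:
      metadata:
        type: abstract      # never provisioned directly
      settings:
        pepe:
          does-not-like-sushi: true
    vpc:
      metadata:
        component: vpc      # Terraform component (folder) to use
        inherits:
          - vpc-defaults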

jose.amengual avatar
jose.amengual

understood

1
toka avatar

Hi :wave: calling for help with configuring the gcs backend. I’m bootstrapping a GCP organization. I have a module that created a seed project with initial bits, including a gcs bucket that I would like to use for storing tfstate files. I’ve run atmos configured with a local tf backend for the init.

Now I’d like to move my backend from local to the bucket and move from there. I’ve added bucket configuration to _defaults.yaml for my org:

backend_type: gcs
  backend:
    gcs:
      bucket: "bucket_name"

Unfortunately atmos says that this bucket doesn’t exist, even though I have copied a test file into the bucket

╷
│ Error: Error inspecting states in the "local" backend:
│     querying Cloud Storage failed: storage: bucket doesn't exist
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that atmos never uses this backend. We generate a backend.tf.json file used by terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could it be a GCP permissions issue

toka avatar

I am an organisation admin, have full access to the bucket, and besides, I’ve tested access to the bucket itself with the cli.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Based on your screenshot the YAML is invalid.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the hint is it’s trying to use a “local” backend type.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "tf-state"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note in your YAML above, the whitespace is off

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform is attempting to use this: https://developer.hashicorp.com/terraform/language/backend/local

which indicates that the backend type is not getting set

Backend Type: local | Terraform | HashiCorp Developer

Terraform can store the state remotely, making it easier to version and work with in a team.

toka avatar

Thanks Erik. I’ve double-checked everything again. I tried deleting the backend.tf.json file, disabling auto_generate_backend_file, and after that adding the backend configuration directly into the module - same result. That got me thinking something was not right with auth into GCP. Re-logged with gcloud auth application-default login and now the state is migrated into the bucket. Honestly no idea, I’ve logged in multiple times today already, even before posting here.

toka avatar

maybe this is the kind of tax for changing the whole workflow to atmos

jose.amengual avatar
jose.amengual

can I vendor all the components using vendor.yaml? or do I have to list every component that I want to vendor?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual from where you want to vendor all the components?

jose.amengual avatar
jose.amengual

from my iac repo

jose.amengual avatar
jose.amengual

to another repo ( using atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me check

jose.amengual avatar
jose.amengual

I did this :

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  # Import other vendor manifests, if necessary
  imports: []

  sources:
      - source: "github.com/jamengual/pepe-iac.git/"
        #version: "main"
        targets:
          - "./"
        included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/sandbox/**"
jose.amengual avatar
jose.amengual

I was able to vendor components just fine

jose.amengual avatar
jose.amengual

but not sandbox stack files for some reason

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to add another item for the stacks folder separately

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s how the glob lib that we are using works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t like it and will revisit it, but currently it is what it is

jose.amengual avatar
jose.amengual

you mean something like :

 sources:
      - source: "github.com/jamengua/pepe-iac.git/"
        #version: "main"
        targets:
          - "./"
        included_paths:
          - "**/components/**"
          - "**/*.md"
      - source: "github.com/jamengual/pepe-iac.git/"
        #version: "main"
        targets:
          - "./stacks/sandbox/"
        included_paths:
          - "stacks/sandbox/*.yaml"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/**"
          - "**/stacks/sandbox/**"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this ^

jose.amengual avatar
jose.amengual

so this pulled all the stacks

       - "**/stacks/**"
jose.amengual avatar
jose.amengual

but it looks like I can’t pull a specific subfolder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, so this "**/stacks/sandbox/**" does not work?

jose.amengual avatar
jose.amengual

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess you can use

- "**/stacks/**"
- "**/stacks/sandbox/**"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and exclude the other stacks in excluded_paths:
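
Roughly like this; the staging folder is a placeholder for whatever other stack folders exist:

        included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/**"
        excluded_paths:
          - "**/stacks/production/**"
          - "**/stacks/staging/**"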

jose.amengual avatar
jose.amengual

I could do that….but it makes it not very DRY

jose.amengual avatar
jose.amengual

can you verbose the pull command?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos vendor pull --verbose

jose.amengual avatar
jose.amengual
atmos vendor pull --verbose
Error: unknown flag: --verbose

Usage:

  atmos vendor pull [flags]


Flags:

  -c, --component string   Only vendor the specified component: atmos vendor pull --component <component>
      --dry-run            atmos vendor pull --component <component> --dry-run
  -h, --help               help for pull
  -s, --stack string       Only vendor the specified stack: atmos vendor pull --stack <stack>
      --tags string        Only vendor the components that have the specified tags: atmos vendor pull --tags=dev,test
  -t, --type string        atmos vendor pull --component <component> --type=terraform|helmfile (default "terraform")


Global Flags:

      --logs-file string         The file to write Atmos logs to. Logs can be written to any file or any standard file descriptor, including '/dev/stdout', '/dev/stderr' and '/dev/null' (default "/dev/stdout")
      --logs-level string        Logs level. Supported log levels are Trace, Debug, Info, Warning, Off. If the log level is set to Off, Atmos will not log any messages (default "Info")
      --redirect-stderr string   File descriptor to redirect 'stderr' to. Errors can be redirected to any file or any standard file descriptor (including '/dev/null'): atmos <command> --redirect-stderr /dev/stdout


unknown flag: --verbose

goroutine 1 [running]:
runtime/debug.Stack()
        runtime/debug/stack.go:26 +0x64
runtime/debug.PrintStack()
        runtime/debug/stack.go:18 +0x1c
github.com/cloudposse/atmos/pkg/utils.LogError({0x10524c0c0, 0x140003edf10})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:61 +0x18c
github.com/cloudposse/atmos/pkg/utils.LogErrorAndExit({0x10524c0c0, 0x140003edf10})
        github.com/cloudposse/atmos/pkg/utils/log_utils.go:35 +0x30
main.main()
        github.com/cloudposse/atmos/main.go:11 +0x24
jose.amengual avatar
jose.amengual

Atmos 1.88.1 on darwin/arm64

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh sorry, try

ATMOS_LOGS_LEVEL=Trace atmos vendor pull
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual also check out this example for using YAML anchors in vendor.yaml to DRY it up

https://github.com/cloudposse/atmos/blob/main/examples/demo-component-versions/vendor.yaml#L10-L20

  - &library
    source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
    version: "main"
    targets:
      - "components/terraform/{{ .Component }}/{{.Version}}"
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    tags:
      - demo
1
1
jose.amengual avatar
jose.amengual

could you pass an ENV variable as a value inside of the yaml?

jose.amengual avatar
jose.amengual

like

 excluded_paths: 
          - "**/production/**"
          - "${EXCLUDE_PATHS}"
          - "${EXCLUDE_PATH_2}"
          - "${EXCLUDE_PATH_3}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently not, env vars in vendor.yaml are not supported

jose.amengual avatar
jose.amengual

ok, Thanks guys

1

2024-10-11

RB avatar

what’s the best way to distinguish between custom components and vendored components from cloudposse?

RB avatar

I was thinking

Option 1) a unique namespace via a separate dir

e.g.

# cloudposse components
components/terraform
# internal components
components/terraform/internal

Option 2) a unique namespace via a prefix

# upstream component
components/terraform/ecr
# internal component
components/terraform/internal-ecr

Option 3) Unique key in component.yaml and enforce this file in all components

Option 4) Vendor internal components from an internal repo

This way the source will contain the other org instead of cloudposse so that can be used to distinguish

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re actually looking into something similar to this right now related to our refarch stack configs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For stack configs we’ve settled on stacks/vendor/cloudposse

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For components, I would maybe suggest

components/vendor/cloudposse

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The alternative is a top-level folder like vendor/cloudposse

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

which could contain stacks and components.
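
A hedged sketch of how a vendor.yaml source could target that layout (the component, repo ref, and version are placeholders):

  sources:
    - component: "ecr"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/{{ .Component }}?ref={{ .Version }}"
      version: "1.400.0"
      targets:
        - "components/vendor/cloudposse/{{ .Component }}"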

RB avatar

That’s for stack configs, what about terraform components? Or do you think it would be for both stack configs and terraform components?

Miguel Zablah avatar
Miguel Zablah

What our team does is this:
Vendor: components/terraform/vendor/{provider, e.g. cloudposse}
Internal: components/terraform/{cloudProvider, e.g. AWS}/{componentName, e.g. VPC}

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For components, I would maybe suggest

components/vendor/cloudposse

The alternative is a top-level folder like vendor/cloudposse

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@burnzy is this working for you? https://github.com/cloudposse/terraform-yaml-stack-config/pull/95 @Jeremy G (Cloud Posse) is looking into something similar and we’ll likely get this merged. Sorry it fell through the cracks.

#95 feat: support for gcs backends

what

Simple change to add support for GCS backends

why

Allows GCP users (users with gcs backends) to make use of this remote-state module for sharing data between components.

references

https://developer.hashicorp.com/terraform/language/settings/backends/gcs
https://atmos.tools/core-concepts/share-data/#using-remote-state

1
toka avatar

Interesting, I was trying last week to use remote-state between two components with the GCP backend and could not make it work; it forced me to re-write my module to switch from remote state to using outputs and atmos templating

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This should work now as we merged the PR

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Version 1.8.0 should give you full support for using remote-state with any backend Terraform or OpenTofu supports.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) were you also able to resolve that deprecated warning? I think you maybe mentioned you had an idea

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Yes, 1.8.0 should fix deprecated warnings from remote-state. Or, more accurately, if receiving deprecation warnings from remote-state, they can now be resolved by updating your backend/remote state backend configuration to match the version of Terraform or Tofu you are using. For example, change

terraform:
  backend:
    s3:
      bucket: my-tfstate-bucket
      dynamodb_table: my-tfstate-lock-table
      role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-role
  remote_state_backend:
    s3:
      role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-read-only-role

to

terraform:
  backend:
    s3:
      bucket: my-tfstate-bucket
      dynamodb_table: my-tfstate-lock-table
      assume_role:
        role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-role
  remote_state_backend:
    s3:
      assume_role:
        role_arn: arn:aws:iam::123456789012:role/my-tfstate-access-read-only-role
2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@kevcube

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Michael Dizon

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) please create a task to fix this in our infra-test and infra-live repos

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse) can you please suggest the task title?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Gabriela Campana (Cloud Posse) Update backend configurations to avoid deprecation warnings and add a note in the task that says all remote-state modules must be updated to v1.8.0

1
jose.amengual avatar
jose.amengual

is it possible to vendor pull from a different repo?

jose.amengual avatar
jose.amengual

I have my vendor.yaml

jose.amengual avatar
jose.amengual
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  imports: []

  sources:
      - source: "https://x-access-token:${{ secrets.TOKEN }}@github.com/PEPE/pepe-iac.git"
        #version: "main"
        targets:
          - "./"
        included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/**"
        excluded_paths: 
          - "**/production/**"
jose.amengual avatar
jose.amengual

I tried git:// but then tried to use ssh git, and this is running from a reusable action

jose.amengual avatar
jose.amengual

I tried this locally and it works but in my local I have a ssh-config

jose.amengual avatar
jose.amengual

if I run git clone using that url it clones just fine

jose.amengual avatar
jose.amengual

I’m hitting this error on the go-git library now

// https://github.com/go-git/go-git/blob/master/worktree.go

func (w *Worktree) getModuleStatus() (Status, error) {
    // ...
    if w.r.ModulesPath == "" {
        return nil, ErrModuleNotInitialized
    }

    if !filepath.IsAbs(w.r.ModulesPath) {
        return nil, errors.New("relative paths require a module with a pwd")
    }
    // ...
}

``` package git

import ( “context” “errors” “fmt” “io” “os” “path/filepath” “runtime” “strings”

"github.com/go-git/go-billy/v5"
"github.com/go-git/go-billy/v5/util"
"github.com/go-git/go-git/v5/config"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/go-git/go-git/v5/plumbing/format/index"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/merkletrie"
"github.com/go-git/go-git/v5/utils/sync" )

var ( ErrWorktreeNotClean = errors.New(“worktree is not clean”) ErrSubmoduleNotFound = errors.New(“submodule not found”) ErrUnstagedChanges = errors.New(“worktree contains unstaged changes”) ErrGitModulesSymlink = errors.New(gitmodulesFile + “ is a symlink”) ErrNonFastForwardUpdate = errors.New(“non-fast-forward update”) ErrRestoreWorktreeOnlyNotSupported = errors.New(“worktree only is not supported”) )

// Worktree represents a git worktree. type Worktree struct { // Filesystem underlying filesystem. Filesystem billy.Filesystem // External excludes not found in the repository .gitignore Excludes []gitignore.Pattern

r *Repository }

// Pull incorporates changes from a remote repository into the current branch. // Returns nil if the operation is successful, NoErrAlreadyUpToDate if there are // no changes to be fetched, or an error. // // Pull only supports merges where the can be resolved as a fast-forward. func (w *Worktree) Pull(o *PullOptions) error { return w.PullContext(context.Background(), o) }

// PullContext incorporates changes from a remote repository into the current // branch. Returns nil if the operation is successful, NoErrAlreadyUpToDate if // there are no changes to be fetched, or an error. // // Pull only supports merges where the can be resolved as a fast-forward. // // The provided Context must be non-nil. If the context expires before the // operation is complete, an error is returned. The context only affects the // transport operations. func (w *Worktree) PullContext(ctx context.Context, o *PullOptions) error { if err := o.Validate(); err != nil { return err }

remote, err := w.r.Remote(o.RemoteName)
if err != nil {
	return err
}

fetchHead, err := remote.fetch(ctx, &FetchOptions{
	RemoteName:      o.RemoteName,
	RemoteURL:       o.RemoteURL,
	Depth:           o.Depth,
	Auth:            o.Auth,
	Progress:        o.Progress,
	Force:           o.Force,
	InsecureSkipTLS: o.InsecureSkipTLS,
	CABundle:        o.CABundle,
	ProxyOptions:    o.ProxyOptions,
})

updated := true
if err == NoErrAlreadyUpToDate {
	updated = false
} else if err != nil {
	return err
}

ref, err := storer.ResolveReference(fetchHead, o.ReferenceName)
if err != nil {
	return err
}

head, err := w.r.Head()
if err == nil {
	// if we don't have a shallows list, just ignore it
	shallowList, _ := w.r.Storer.Shallow()

	var earliestShallow *plumbing.Hash
	if len(shallowList) > 0 {
		earliestShallow = &shallowList[0]
	}

	headAheadOfRef, err := isFastForward(w.r.Storer, ref.Hash(), head.Hash(), earliestShallow)
	if err != nil {
		return err
	}

	if !updated && headAheadOfRef {
		return NoErrAlreadyUpToDate
	}

	ff, err := isFastForward(w.r.Storer, head.Hash(), ref.Hash(), earliestShallow)
	if err != nil {
		return err
	}

	if !ff {
		return ErrNonFastForwardUpdate
	}
}

if err != nil && err != plumbing.ErrReferenceNotFound {
	return err
}

if err := w.updateHEAD(ref.Hash()); err != nil {
	return err
}

if err := w.Reset(&ResetOptions{
	Mode:   MergeReset,
	Commit: ref.Hash(),
}); err != nil {
	return err
}

if o.RecurseSubmodules != NoRecurseSubmodules {
	return w.updateSubmodules(&SubmoduleUpdateOptions{
		RecurseSubmodules: o.RecurseSubmodules,
		Auth:              o.Auth,
	})
}

	return nil
}

func (w *Worktree) updateSubmodules(o *SubmoduleUpdateOptions) error {
	s, err := w.Submodules()
	if err != nil {
		return err
	}
	o.Init = true
	return s.Update(o)
}

// Checkout switch branches or restore working tree files.
func (w *Worktree) Checkout(opts *CheckoutOptions) error {
	if err := opts.Validate(); err != nil {
		return err
	}

if opts.Create {
	if err := w.createBranch(opts); err != nil {
		return err
	}
}

c, err := w.getCommitFromCheckoutOptions(opts)
if err != nil {
	return err
}

ro := &ResetOptions{Commit: c, Mode: MergeReset}
if opts.Force {
	ro.Mode = HardReset
} else if opts.Keep {
	ro.Mode = SoftReset
}

if !opts.Hash.IsZero() && !opts.Create {
	err = w.setHEADToCommit(opts.Hash)
} else {
	err = w.setHEADToBranch(opts.Branch, c)
}

if err != nil {
	return err
}

if len(opts.SparseCheckoutDirectories) > 0 {
	return w.ResetSparsely(ro, opts.SparseCheckoutDirectories)
}

	return w.Reset(ro)
}

func (w *Worktree) createBranch(opts *CheckoutOptions) error {
	if err := opts.Branch.Validate(); err != nil {
		return err
	}

_, err := w.r.Storer.Reference(opts.Branch)
if err == nil {
	return fmt.Errorf("a branch named %q already exists", opts.Branch)
}

if err != plumbing.ErrReferenceNotFound {
	return err
}

if opts.Hash.IsZero() {
	ref, err := w.r.Head()
	if err != nil {
		return err
	}

	opts.Hash = ref.Hash()
}

return w.r.Storer.SetReference(
	plumbing.NewHashReference(opts.Branch, opts.Hash),
	)
}

func (w *Worktree) getCommitFromCheckoutOptions(opts *CheckoutOptions) (plumbing.Hash, error) {
	hash := opts.Hash
	if hash.IsZero() {
		b, err := w.r.Reference(opts.Branch, true)
		if err != nil {
			return plumbing.ZeroHash, err
		}

	hash = b.Hash()
}

o, err := w.r.Object(plumbing.AnyObject, hash)
if err != nil {
	return plumbing.ZeroHash, err
}

switch o := o.(type) {
case *object.Tag:
	if o.TargetType != plumbing.CommitObject {
		return plumbing.ZeroHash, fmt.Errorf("%w: tag target %q", object.ErrUnsupportedObject, o.TargetType)
	}

	return o.Target, nil
case *object.Commit:
	return o.Hash, nil
}

	return plumbing.ZeroHash, fmt.Errorf("%w: %q", object.ErrUnsupportedObject, o.Type())
}

func (w *Worktree) setHEADToCommit(commit plumbing.Hash) error {
	head := plumbing.NewHashReference(plumbing.HEAD, commit)
	return w.r.Storer.SetReference(head)
}

func (w *Worktree) setHEADToBranch(branch plumbing.ReferenceName, commit plumbing.Hash) error {
	target, err := w.r.Storer.Reference(branch)
	if err != nil {
		return err
	}

var head *plumbing.Reference
if target.Name().IsBranch() {
	head = plumbing.NewSymbolicReference(plumbing.HEAD, target.Name())
} else {
	head = plumbing.NewHashReference(plumbing.HEAD, commit)
}

	return w.r.Storer.SetReference(head)
}

func (w *Worktree) ResetSparsely(opts *ResetOptions, dirs []string) error {
	if err := opts.Validate(w.r); err != nil {
		return err
	}

if opts.Mode == MergeReset {
	unstaged, err := w.containsUnstagedChanges()
	if err != nil {
		return err
	}

	if unstaged {
		return ErrUnstagedChanges
	}
}

if err := w.setHEADCommit(opts.Commit); err != nil {
	return err
}

if opts.Mode == SoftReset {
	return nil
}

t, err := w.r.getTreeFromCommitHash(opts.Commit)
if err != nil {
	return err
}

	if opts.Mode == MixedReset || opts.Mode == MergeReset || opts.Mode =…
```
jose.amengual avatar
jose.amengual

I do not know if this is because vendor sources over http are not compatible with git?

jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse) this is when I was trying to debug the pwd error

1

2024-10-12

github3 avatar
github3
05:50:42 PM

Improve error stack trace. Add --stack flag to atmos describe affected command. Improve atmos.Component template function @aknysh (#714)

what

• Improve error stack trace
• Add --stack flag to atmos describe affected command
• Improve atmos.Component template function

why

• On any error in the CLI, print Go stack trace only when Atmos log level is Trace - improve user experience
• The --stack flag in the atmos describe affected command allows filtering the results by the specific stack only:

atmos describe affected --stack plat-ue2-prod

Affected components and stacks:

[
  {
    "component": "vpc",
    "component_type": "terraform",
    "component_path": "examples/quick-start-advanced/components/terraform/vpc",
    "stack": "plat-ue2-prod",
    "stack_slug": "plat-ue2-prod-vpc",
    "affected": "stack.vars"
  }
]

• In the atmos.Component template function, don’t execute terraform output on disabled and abstract components. The disabled components (when enabled: false) don’t produce any terraform outputs. The abstract components are not meant to be provisioned (they are just blueprints for other components with default values), and they don’t have any outputs.

Summary by CodeRabbit

Release Notes

New Features
• Added a --stack flag to the atmos describe affected command for filtering results by stack.
• Enhanced error handling across various commands to include configuration context in error logs.

Documentation
• Updated documentation for the atmos describe affected command to reflect the new --stack flag.
• Revised “Atlantis Integration” documentation to highlight support for Terraform Pull Request Automation.

Dependency Updates
• Upgraded several dependencies, including Atmos version from 1.88.0 to 1.89.0 and Terraform version from 1.9.5 to 1.9.7.

Correct outdated ‘myapp’ references in simple tutorial @jasonwashburn (#707)

what

Corrects several (assuming) outdated references to a ‘myapp’ component rather than the correct ‘station’ component in the simple tutorial.

Also corrects the provided example repository hyperlink to refer to the correct weather example ‘quick-start-simple’ used in the tutorial rather than ‘demo-stacks’

why

Appears that the ‘myapp’ references were likely just missed during a refactor of the simple tutorial. Fixing them alleviates confusion/friction for new users following the tutorial. Attempting to use the examples/references as-is results in various errors as there is no ‘myapp’ component defined.

references

Also closes #664

Summary by CodeRabbit

New Features
• Renamed the component from myapp to station in the configuration.
• Updated provisioning commands in documentation to reflect the new component name.

Documentation
• Revised “Deploy Everything” document to replace myapp with station.
• Enhanced “Simple Atmos Tutorial” with updated example link and clarified instructional content.

Fix incorrect terraform flag in simple tutorial workflow example @jasonwashburn (#709)

what

Fixes inconsistencies in the simple-tutorial extra credit section on workflows that prevent successful execution when following along.

why

As written, the tutorial results in two errors, one due to an incorrect terraform flag, and one due to a mismatch between the defined workflow name, and the provided command in the tutorial to execute it.

references

Closes #708

Fix typos @NathanBaulch (#703)

Just thought I’d contribute some typo fixes that I stumbled on. Nothing controversial (hopefully).

Use the following command to get a quick summary of the specific corrections made:

git diff HEAD^! --word-diff-regex='\w+' -U0
| grep -E '[-.-]{+.+}'
| sed -r 's/.[-(.)-]{+(.)+}./\1 \2/'
| sort | uniq -c | sort -n

FWIW, the top typos are:

• usign • accross • overriden • propogate • verions • combinatino • compoenents • conffig • conventionss • defind

Fix version command in simple tutorial @jasonwashburn (#705)

what

• Corrects incorrect atmos --version command to atmos version in simple tutorial docs.

why

• Documentation is incorrect.

references

closes #704

docs: add installation guides for asdf and Mise @mtweeman (#699)

what

Docs for installing Atmos via asdf or Mise

why

Atmos can now be installed with asdf and Mise. Installation guides are not yet included on the website. This PR aims to fill this gap.

references

Plugin repo

Use Latest Atmos GitHub Workflows Examples with RemoteFile Component @milldr (#695)

what

• Created the RemoteFile component
• Replace all hard-coded files with RemoteFile call

why

• These workflows quickly get out of date. We already have these publicly available on cloudposse/docs, so we should fetch the latest pattern instead

references

SweetOps slack thread

Update Documentation and Comments for Atmos Setup Action @RoseSecurity (#692)

what

• Updates comment to reflect action defaults
• Fixes atmos-version input

why

• Fixes input variables to match acceptable action variables

references

Setup Actions

2
fast_parrot2
jose.amengual avatar
jose.amengual

How long does it take to get the Linux x64 package?


jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual please use this release

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s already in the deb and rpm Linux packages

https://github.com/cloudposse/packages/actions/runs/11320822457

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(release 1.89.0 was not published to the Linux packages b/c of some issues with the GitHub action)

jose.amengual avatar
jose.amengual

awesome, thanks

2024-10-13

github3 avatar
github3
04:01:06 AM

Always template vendor source and targets @mss (#712)

what

This change improves the templating within vendor manifests slightly: It officially adds support for the Component field to both source and targets.

These features were already supported but mostly undocumented and hidden behind an implicit switch: The templating was only triggered if the Version field was set. Which was also the only officially supported field.

In reality though all fields from the current source definition were available but in the state they were currently in, depending on the order of the templates.

With this change

• It is clearly documented which fields are supported in which YAML values.
• Only the two static fields are supported.
• The values are always templated.

Theoretically this could be a breaking change if somebody used no version field but curly braces in their paths. Or relied on the half-populated source data structure to refer to unsupported fields. If xkcd 1172 applies it should be possible to amend this logic to add more officially supported fields.

why

I was looking for a way to restructure our vendoring like the examples in examples/demo-vendoring/vendor.yaml to avoid copy and paste errors when we release new component versions.

I actually only found out about that demo when I was done writing this code since the templating was never triggered without a version field and the documentation didn’t mention it.

references

https://github.com/cloudposse/atmos/blob/v1.88.1/examples/demo-vendoring/vendor.yaml
https://atmos.tools/core-concepts/vendor/vendor-manifest/#vendoring-manifest

Summary by CodeRabbit

New Features
• Enhanced vendoring configuration with support for dynamic component referencing in vendor.yaml.
• Improved handling of source and targets attributes for better organization and flexibility.

Documentation
• Updated documentation for vendoring configuration, including clearer instructions and examples for managing multiple vendor manifests.
• Added explanations for included_paths and excluded_paths attributes to improve understanding.

Fix a reference to an undefined output in GitHub Actions @suzuki-shunsuke (#718)

what

  1. Fix a reference to an undefined output in GitHub Actions.

https://github.com/cloudposse/atmos/blob/6439a64488856c461a9eb7a8f7adb30901080cef/.github/workflows/test.yml#L312

The step config is not found.
This bug was added in #612 .

b53d696#612

  1. Use a version variable for easier updates.

env:
  TERRAFORM_VERSION: "1.9.7"

steps:

  - uses: hashicorp/setup-terraform@v3
    with:
      terraform_version: ${{ env.TERRAFORM_VERSION }}

#717 (comment)

  1. Stop installing terraform wrapper

By default hashicorp/setup-terraform installs a wrapper of Terraform to output Terraform stdout and stderr as step’s outputs.
But we don’t need them, so we shouldn’t install the wrapper.

https://github.com/hashicorp/setup-terraform

  - uses: hashicorp/setup-terraform@v3
    with:
      terraform_wrapper: false

why

references

Summary by CodeRabbit

Chores
• Updated workflow configurations for improved maintainability.
• Introduced a new environment variable TERRAFORM_VERSION for version management.

ci: install Terraform to fix CI failure that Terraform is not found @suzuki-shunsuke (#717)

what

Install Terraform using hashicorp/setup-terraform action in CI.

why

CI failed because Terraform wasn’t found.

https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449045566
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449046010

Run cd examples/demo-context
all stacks validated successfully
exec: "terraform": executable file not found in $PATH

This is because ubuntu-latest was updated to ubuntu-24.04 and Terraform was removed from it.

https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2404-Readme.md

On the other hand, Ubuntu 22.04 has Terraform.

https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md

references

Summary by CodeRabbit

Chores
• Enhanced workflow for testing and linting by integrating Terraform setup in multiple job sections.
• Updated the lint job to dynamically retrieve the Terraform version for improved flexibility.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Malte would you mind reviewing this? https://github.com/cloudposse/atmos/pull/723

It addresses templating in vendor files. Let me know if something else would be helpful to add.

#723 Document vendoring from private git repos

what

• Document how to vendor from private GitHub repos • Document template syntax for vendoring

why

• The syntax is a bit elusive, and it’s a common requirement

Malte avatar

A bit late but: LGTM! And I just facepalmed because in hindsight the missing git:: prefix is obvious

Malte avatar

Just for completeness sake: We actually use private repos as well (BitBucket but the idea is similar) and our Jenkins (don’t ask…) uses a small Git Credential Helper which pulls the variable from the env and outputs the required format (cf. https://git-scm.com/docs/gitcredentials)
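
For anyone wanting to replicate that setup, here is a minimal sketch of such a credential helper, assuming the token is exposed to the job as an environment variable (the variable name GIT_TOKEN and the file path are illustrative, not part of Malte’s actual setup):

```sh
#!/bin/sh
# git-credential-env (sketch): emit credentials for HTTPS remotes from the environment.
# Register it with:
#   git config --global credential.helper '!/path/to/git-credential-env'
# Git invokes the helper with the operation (get/store/erase) as the first argument.
case "$1" in
  get)
    echo "username=x-access-token"
    echo "password=${GIT_TOKEN}"
    ;;
esac
```

With a helper like this registered, plain https:// sources can be cloned without embedding the token in the URL.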

jose.amengual avatar
jose.amengual

I got a brand new error with version 1.90.0 with my vendor file:

ATMOS_LOGS_LEVEL=Trace atmos vendor pull
  ls -l
  shell: /usr/bin/bash -e {0}
  env:
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
    ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
    GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
template: source-0:1: function "secrets" not defined
jose.amengual avatar
jose.amengual

I think it’s because of this:

 sources:
      - source: "<https://x-access-token>:${{ secrets.TOKEN }}@github.com/pepe-org/pepe-iac.git"
        #version: "main"
        targets:
jose.amengual avatar
jose.amengual

I can’t template inside the vendor file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#712 Always template vendor source and targets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual can you open an issue so I can tag the other author

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual what is this token {{ secrets.TOKEN }}?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a GH Actions token, not an Atmos Go template token?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a known issue with using Atmos templates with other templates intended for external systems (e.g. GH actions, Datadog, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since that PR enabled processing templates in all cases (even if the version is not specified as it was before), now Atmos processes the templates every time, and breaks on the templates for the external systems

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a doc about that:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have to do

{{`{{  ...   }}`}} 
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is no way around this
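
To spell out that escaping trick: the inner backquoted string is a literal inside the Atmos Go template, so Atmos renders it back out unchanged and the external expression survives. A minimal sketch of what that looks like in a vendor manifest (the repo name is illustrative):

```yaml
sources:
  - source: "https://x-access-token:${{`{{ secrets.TOKEN }}`}}@github.com/pepe-org/pepe-iac.git"
    # After Atmos templating this becomes:
    #   https://x-access-token:${{ secrets.TOKEN }}@github.com/pepe-org/pepe-iac.git
```

Note this only helps if something else later substitutes the ${{ secrets.TOKEN }} expression; as discussed below, that is not the case for an external vendor.yaml.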

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if that syntax works for you

jose.amengual avatar
jose.amengual

OHHHHHHHHH let me try that

jose.amengual avatar
jose.amengual

ok, that seems to work but now I’m in this hell

Run #cd atmos/
  #cd atmos/
  # git clone [email protected]:pepe-org/pepe-iac-iac.git
  # git clone ***github.com/pepe-org/pepe-iac-iac.git 
  ATMOS_LOGS_LEVEL=Trace atmos vendor pull
  ls -l
  shell: /usr/bin/bash -e {0}
  env:
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
    ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
    GITHUB_TOKEN: ***
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
Pulling sources from '<https://x-access-token>:${{ secrets.TOKEN }} @github.com/pepe-org/pepe-iac-iac.git' into '/home/runner/work/pepe-iac/pepe-iac/atmos/atmos'
relative paths require a module with a pwd
Error: Process completed with exit code 1.
jose.amengual avatar
jose.amengual

if I switch to a git url ( with ssh) this problem goes away and then I get permission denied because I do not have a key to clone that other repo in the org, which is expected

jose.amengual avatar
jose.amengual

I was trying to use a PAT and change the url to https:// to pull the git repo

jose.amengual avatar
jose.amengual

note: the git clone command with https in the script works fine, so I know my PAT works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have a space after {{ secrets.TOKEN }} and before @

jose.amengual avatar
jose.amengual

ohhh no, if that is the problem I quit

jose.amengual avatar
jose.amengual
Run #cd atmos/
  #cd atmos/
  # git clone [email protected]:pepe-org/pepe-iac.git
  # git clone ***github.com/pepe-org/pepe-iac.git 
  ATMOS_LOGS_LEVEL=Trace atmos vendor pull
  ls -l
  shell: /usr/bin/bash -e {0}
  env:
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos/config
    ATMOS_BASE_PATH: /home/runner/work/pepe-iac/pepe-iac/atmos
    GITHUB_TOKEN: ***
  
Processing vendor config file '/home/runner/work/pepe-iac/pepe-iac/atmos/vendor.yaml'
Pulling sources from '<https://x-access-token>:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git' into '/home/runner/work/pepe-iac/pepe-iac/atmos/atmos'
relative paths require a module with a pwd
Error: Process completed with exit code 1.
jose.amengual avatar
jose.amengual

uff, that could have been embarrassing

jose.amengual avatar
jose.amengual

so I changed the token to

- source: "<https://x-access-token:12345>@github.com/pepe-org/pepe-iac.git"

and I got bad response code: 404, which makes me believe it is trying to clone with the token that I’m passing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual sorry, I don’t understand the whole use-case, so let’s review it step by step

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

before the Atmos 1.90.0 release, this was working?

https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
jose.amengual avatar
jose.amengual

I tagged you on the other thread I have, if you prefer that one that is cleaner

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the template {{secrets.TOKEN}} was evaluated by the GH action?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I tagged you on the other thread I have, if you prefer that one that is cleaner

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the same issue

jose.amengual avatar
jose.amengual

ok

jose.amengual avatar
jose.amengual

The goal: clone the pepe-iac repo, which is in the same org (pepe-org), from another repo (the app repo)

jose.amengual avatar
jose.amengual

the rendering of the token was definitely a problem and now that is resolved thanks to your suggestion

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git - was it working before?

jose.amengual avatar
jose.amengual

no, it never did

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

never? even before Atmos release 1.90.0?

jose.amengual avatar
jose.amengual

( using atmos)

jose.amengual avatar
jose.amengual

correct, it did not work before because I was not using the {{{{ … }}}} format

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t understand

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you mentioned it was working before the Atmos 1.90.0 release

jose.amengual avatar
jose.amengual

I thought it did because I was getting relative paths require a module with a pwd and I thought I had an ATMOS_BASE_PATH problem

jose.amengual avatar
jose.amengual

the error threw me off in another direction, thinking the token was not an issue anymore

Malte avatar

I broke something?

jose.amengual avatar
jose.amengual

Then I upgraded to 1.90.0 and the error changed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s review this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in vendor.yaml

jose.amengual avatar
jose.amengual

this is what I have in vendor.yaml

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  imports: []
  sources:
      - source: "<https://x-access-token>:${{`{{secrets.TOKEN}}`}}@github.com/pepe-org/pepe-iac.git/"
        #version: "main"
        targets:
          - "atmos"
        included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/**"
        excluded_paths: 
          - "**/production/**"
          - "**/qa/**"
          - "**/development/**"
          - "**/staging/**"
          - "**/management/**"
          - "**/venueseus/**"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how does your GH action work? It needs to evaluate {{secrets.TOKEN}} and replace it with the value before atmos vendor pull gets to it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s focus on source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

jose.amengual avatar
jose.amengual

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what I said before about {{{{secrets.TOKEN}}}} is not your use case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so

how does your GH action work? It needs to evaluate {{secrets.TOKEN}} and replace it with the value before `atmos vendor pull` gets to it
jose.amengual avatar
jose.amengual

ohhh…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you should use

source: https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and make sure the token is replaced in the GH action before atmos vendor pull is executed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s all that you need to check

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this {{secrets.TOKEN}} is not an Atmos Go template, it’s not related to Atmos at all

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the GH action needs to evaluate it first

jose.amengual avatar
jose.amengual

the token is available to the action as a secret

jose.amengual avatar
jose.amengual

if I run on the action

git clone https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

before the atmos command, I can clone the repo (I’m just clarifying that the token works and is present)

jose.amengual avatar
jose.amengual

I will change the vendor.yaml and try again

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the token ${{secrets.TOKEN}} will be replaced by the GH actions only if it’s in a GH workflow

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the GH actions don’t know anything about that token in an external file vendor.yaml

jose.amengual avatar
jose.amengual

I c what you mean

Malte avatar

Is the token also available as an environment variable? Then you should (now) be able to do something like source: https://x-access-token:{{env "SECRET_TOKEN"}}@github.com/pepe-org/pepe-iac.git (note that there is no $ in this case, i.e. that will be evaluated by atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes correct, thanks @Malte

jose.amengual avatar
jose.amengual

ok, so within the GH workflow, I need a way to use the value of ${{secrets.TOKEN}}

jose.amengual avatar
jose.amengual

my gh action is like this :

- name: Vendoring stack and components
        working-directory: ${{ github.workspace }}
        env:
          GITHUB_TOKEN: ${{ secrets.TOKEN }}
        run: |
           cd atmos/
           # git clone git@github.com:pepe-org/pepe-iac.git
           git clone https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git
           ls -l
           ATMOS_LOGS_LEVEL=Trace atmos vendor pull
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your GH action workflow file, you create an env variable from the secret, and then use the env variable in the vendor.yaml file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Malte i might have jumped the gun. It might not be related to your PR, but thanks for jumping in!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(not related to @Malte PR at all)

Malte avatar

@Erik Osterman (Cloud Posse) no worries, I assumed it was one of those cases where I broke some other weird legit use case of curly braces .-)

Malte avatar

I think `{{env "GITHUB_TOKEN"}}` should do the trick now

this1
jose.amengual avatar
jose.amengual

instead of the secrets.TOKEN you mean?

Malte avatar

instead of ${{secrets.TOKEN}}, yes (i.e. drop the $ as well)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
source: 'https://x-access-token:{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'
this1
jose.amengual avatar
jose.amengual

testing

jose.amengual avatar
jose.amengual

ohhhh single brackets…..

jose.amengual avatar
jose.amengual

ok I did this:

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: iac-vendoring
  description: Atmos vendoring manifest for Atmos-iac repo
spec:
  imports: []
  sources:
      - source: 'https://x-access-token:{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'
        #version: "main"
        targets:
          - "atmos"
        included_paths:
          - "**/components/**"
jose.amengual avatar
jose.amengual

looks correct?

1
jose.amengual avatar
jose.amengual

I got

Processing vendor config file 'vendor.yaml'
Pulling sources from '***github.com/pepe-org/pepe-iac.git' into 'atmos'
bad response code: 404
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those stars, the token was replaced correctly?

jose.amengual avatar
jose.amengual

GH Actions obscures tokens and such

jose.amengual avatar
jose.amengual

I will add the token to the vendor file to just test it ( without using the ENV variable)

2
Malte avatar

I wonder if go-getter supports this at all…

jose.amengual avatar
jose.amengual

ok this is what I got :

Run cd atmos/
  cd atmos/
  # git clone [email protected]:pepe-org/pepe-iac.git
  git clone ***github.com/pepe-org/pepe-iac.git 
  ls -l
  ATMOS_LOGS_LEVEL=Trace atmos vendor pull
  shell: /usr/bin/bash -e {0}
  env:
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/pepe-iac-service/pepe-iac-service/atmos/config
    ATMOS_BASE_PATH: /home/runner/work/pepe-iac-service/pepe-iac-service/atmos
    GITHUB_TOKEN: ***
Cloning into 'pepe-iac'...
total 32
-rw-r--r--  1 runner docker 15178 Oct 14 20:51 README.md
drwxr-xr-x 10 runner docker  4096 Oct 14 20:51 pepe-iac
drwxr-xr-x  2 runner docker  4096 Oct 14 20:51 config
drwxr-xr-x  4 runner docker  4096 Oct 14 20:51 stacks
-rw-r--r--  1 runner docker   760 Oct 14 20:51 vendor.yaml
Processing vendor config file 'vendor.yaml'
Pulling sources from '***github.com/pepe-org/pepe-iac.git' into 'atmos'
bad response code: 404
jose.amengual avatar
jose.amengual

new token, I changed the secret in the repo, git clone command uses same token as

git clone https://x-access-token:${{secrets.TOKEN}}@github.com/...
jose.amengual avatar
jose.amengual

and you can see it can clone

jose.amengual avatar
jose.amengual

the vendor file has the same token, but in clear text now

Malte avatar

Just to be sure: It doesn’t work with a cleartext password either? I quickly checked the go-getter source and it should work. Weird.

jose.amengual avatar
jose.amengual

it does not

jose.amengual avatar
jose.amengual

is it possible that the @ is being changed to something else?

Malte avatar

I dug a bit. Can you try this, please?

source: 'https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'
Malte avatar

If that works I don’t understand why your plain git clone works unless maybe GH has some magic in their workflow engine to fix up this common error

jose.amengual avatar
jose.amengual

this is a reusable action, I’m passing the secret as an input

Malte avatar

Ok, I must admit I have no idea why that isn’t working. go-getter should call more or less exactly the git clone command you execute. The only difference is that it will do a

git clone -- https://x-access-token:${{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git

(note the double dash) but I doubt that this makes a difference…

jose.amengual avatar
jose.amengual

I changed the git command to git clone -- just to try it and it worked

jose.amengual avatar
jose.amengual

so somehow with atmos this does not work

jose.amengual avatar
jose.amengual

I even tried this:

git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
jose.amengual avatar
jose.amengual

git command works, atmos 404s

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#449 Is it possible to copy private github repositories with parameters : token

Is it possible to copy private github repositories with parameters : token ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So just to reiterate, atmos uses go-getter, which is what terraform uses under the hood, so the syntax should be the same.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Was this tried as well?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Adding the git::

jose.amengual avatar
jose.amengual

that worked!!!

jose.amengual avatar
jose.amengual

so if you want to avoid having to template the secret in the vendor.yaml you can do this :

git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
jose.amengual avatar
jose.amengual

then run atmos vendor pull

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

super

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we need to use git:: if we provide credentials

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and actually, what you said above: if you execute the command git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/" first, then in vendor.yaml you don’t need to specify any credentials at all
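
Putting those two pieces together, a rough sketch of the workflow step under this approach (secret name and paths are illustrative, based on the snippets above):

```yaml
- name: Vendor stacks and components
  env:
    GITHUB_TOKEN: ${{ secrets.TOKEN }}
  run: |
    # Rewrite https://github.com/ URLs to carry the token, so vendor.yaml
    # can reference plain https://github.com/... sources with no credentials
    git config --global url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
    atmos vendor pull
```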

jose.amengual avatar
jose.amengual

and this works too

source: 'git::https://{{env "GITHUB_TOKEN"}}@github.com/pepe-org/pepe-iac.git'
1
jose.amengual avatar
jose.amengual

but this does not

source: 'git::https://{{secrets.TOKEN}}@github.com/pepe-org/pepe-iac.git'
jose.amengual avatar
jose.amengual

so the ENV is the way to go

jose.amengual avatar
jose.amengual

this would be good to add to the docs

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yep, this git::https://{{secrets.TOKEN}}@github.com/xxxxxxxx/yyyyyyyyy.git does not work b/c the template is not an Atmos template, and the file is not a GH Actions manifest

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual can you review https://github.com/cloudposse/atmos/pull/723

#723 Document vendoring from private git repos

jose.amengual avatar
jose.amengual

for those using GitHub and wanting to clone internal repos, you can do it without an org-level PAT by using GitHub App tokens that expire every hour, via actions/create-github-app-token https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for sharing!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, here’s how we’re using that action with GHE environments

https://github.com/cloudposse/.github/blob/main/.github/workflows/shared-auto-release.yml

name: "Shared auto release"
on:
  workflow_call:
    inputs:
      prerelease:
        description: "Boolean indicating whether this release should be a prerelease"
        required: false
        default: false
        type: string
      publish:
        description: "Whether to publish a new release immediately"
        required: false
        default: false
        type: string
      runs-on:
        description: "Overrides job runs-on setting (json-encoded list)"
        type: string
        required: false
        default: '["ubuntu-latest"]'
      summary-enabled:
        description: Enable github action summary.
        required: false
        default: true
        type: boolean

    outputs:
      id:
        description: The ID of the release that was created or updated.
        value: ${{ jobs.release.outputs.id }}
      name:
        description: The name of the release
        value: ${{ jobs.release.outputs.name }}
      tag_name:
        description: The name of the tag associated with the release.
        value: ${{ jobs.release.outputs.tag_name }}
      body:
        description: The body of the drafted release.
        value: ${{ jobs.release.outputs.body }}
      html_url:
        description: The URL users can navigate to in order to view the release
        value: ${{ jobs.release.outputs.html_url }}
      upload_url:
        description: The URL for uploading assets to the release, which could be used by GitHub Actions for additional uses, for example the @actions/upload-release-asset GitHub Action.
        value: ${{ jobs.release.outputs.upload_url }}
      major_version:
        description: The next major version number. For example, if the last tag or release was v1.2.3, the value would be v2.0.0.
        value: ${{ jobs.release.outputs.major_version }}
      minor_version:
        description: The next minor version number. For example, if the last tag or release was v1.2.3, the value would be v1.3.0.
        value: ${{ jobs.release.outputs.minor_version }}
      patch_version:
        description: The next patch version number. For example, if the last tag or release was v1.2.3, the value would be v1.2.4.
        value: ${{ jobs.release.outputs.patch_version }}
      resolved_version:
        description: The next resolved version number, based on GitHub labels.
        value: ${{ jobs.release.outputs.resolved_version }}
      exists:
        description: Tag exists so skip new release issue
        value: ${{ jobs.release.outputs.exists }}

permissions: {}

jobs:
  release:
    runs-on: ${{ fromJSON(inputs.runs-on) }}
    environment: release
    outputs:
      id: ${{ steps.drafter.outputs.id }}
      name: ${{ steps.drafter.outputs.name }}
      tag_name: ${{ steps.drafter.outputs.tag_name }}
      body: ${{ steps.drafter.outputs.body }}
      html_url: ${{ steps.drafter.outputs.html_url }}
      upload_url: ${{ steps.drafter.outputs.upload_url }}
      major_version: ${{ steps.drafter.outputs.major_version }}
      minor_version: ${{ steps.drafter.outputs.minor_version }}
      patch_version: ${{ steps.drafter.outputs.patch_version }}
      resolved_version: ${{ steps.drafter.outputs.resolved_version }}
      exists: ${{ steps.drafter.outputs.exists }}
      
    steps:
      - uses: actions/create-github-app-token@v1
        id: github-app
        with:
          app-id: ${{ vars.BOT_GITHUB_APP_ID }}
          private-key: ${{ secrets.BOT_GITHUB_APP_PRIVATE_KEY }}

      - name: Context
        id: context
        uses: cloudposse/[email protected]
        with:
          query: .${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}
          config: |-
            true: 
              config: auto-release.yml
              latest: true
            false:
              config: auto-release-hotfix.yml
              latest: false

      # Drafts your next Release notes as Pull Requests are merged into "main"
      - uses: cloudposse/github-action-auto-release@v2
        id: drafter
        with:
          token: ${{ steps.github-app.outputs.token }}
          publish: ${{ inputs.publish }}
          prerelease: ${{ inputs.prerelease }}
          latest: ${{ steps.context.outputs.latest }}
          summary-enabled: ${{ inputs.summary-enabled }}
          config-name: ${{ steps.context.outputs.config }}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jobs:
  release:
    runs-on: ${{ fromJSON(inputs.runs-on) }}
    environment: release
    outputs:
   
     ...
      
    steps:
      - uses: actions/create-github-app-token@v1
        id: github-app
        with:
          app-id: ${{ vars.BOT_GITHUB_APP_ID }}
          private-key: ${{ secrets.BOT_GITHUB_APP_PRIVATE_KEY }}

     ...

      # Drafts your next Release notes as Pull Requests are merged into "main"
      - uses: cloudposse/github-action-auto-release@v2
        id: drafter
        with:
          token: ${{ steps.github-app.outputs.token }}
         

2024-10-14

Ryan avatar

Morning everyone. Is there a switch with atmos terraform plan to get it to -out to a readable file? I see it with straight terraform, but wasn’t sure how the command would look as part of an atmos command like terraform plan -s mystack-stack

toka avatar

you mean the show command?

Ryan avatar

yea but im unsure how to daisy chain it in an atmos command against my s3 backend, a pbkac issue

Ryan avatar

my plans longer than the vscode buffer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share the command you are running? We use -out in all our github actions and it works fine

Ryan avatar

yea apologies atmos terraform plan ec2-instance -s platform-platformprod-instance

Ryan avatar

i didnt play with it much to see if I can tee out at the end

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was expecting to see -out in your command

Ryan avatar

yea im just going to mess around today and read the cli guide, wasnt sure if I can just add an -out at the end with atmos

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

this is nested in a Github Action that Erik mentioned, but here’s an example https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L218-L223

TLDR:

atmos terraform plan ${{ COMPONENT }} \
  --stack ${{ STACK }} \
  -out="${{ PLAN FILE }}" \
          atmos terraform plan ${{ inputs.component }} \
          --stack ${{ inputs.stack }} \
          -out="${{ steps.vars.outputs.plan_file }}" \
          -lock=false \
          -input=false \
          -no-color \
Ryan avatar

ooooh so thats how it gets the plan details into github? I honestly just took a different approach and increased my vscode buffers - i did get it to output without -out:\file\whatever, but it was in what looked like encrypted/plan format.

Ryan avatar

appreciate the multiple responses here

Ryan avatar

it was honestly something new to me, i hadnt run into a plan output that got cutoff in the console

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

storing plans in our github actions is entirely separate from running locally. In github actions, we store the plans with a combination of s3 and dynamodb, but locally we use native terraform - plans aren’t stored locally unless you specify so

Ryan avatar

no no sorry im not using correct technical terms

Ryan avatar

i mean the console output of whats going to happen

Ryan avatar

thats all i was struggling with, half my planned changes were cut off in the console so i couldnt tell if some stuff was going to be destroyed or not

Ryan avatar

im interested in actions btw but I am on an internal GHES server that I would have to work through compliance reqs to expose.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

that output file is a planfile for Terraform. This is a special Terraform file that you can then pass to Terraform again with terraform apply to apply a specific plan. But you don’t have to create a plan file to apply terraform. You can always run terraform apply without a planfile, then manually accept the changes

1
Ryan avatar

yea as soon as i saw the garbled text im like oh thats not what im looking for

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

An internal GHES should be fine for the actions too! But if you want to validate a planfile you can then use terraform show to see what that planfile includes. For example

terraform plan -out=tfplan
terraform show -json tfplan
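
If the goal is just a readable copy of a long plan (as in Ryan’s case), a small sketch along the same lines: terraform show without -json prints the normal human-readable plan, and -no-color plus a redirect captures it to a text file (file names are illustrative):

```sh
terraform plan -out=tfplan
terraform show -no-color tfplan > plan.txt   # plain-text plan you can open in an editor
```

Note the planfile is written in the component’s working directory, so run show from the same directory the plan was generated in.
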
Ryan avatar

ohhhhh thank you

Ryan avatar

cool to know

Ryan avatar

and yea with GHES but its internal and not exposed yet

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

and you dont need atmos to use terraform show. You could generate the planfile with the atmos command, and then just use native TF with show

Ryan avatar

im pretty sure the OIDC connection needs a name outside my perimeter

1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yup we use OIDC as well with our actions. Here’s a little bit about how that works to authenticate with AWS https://docs.cloudposse.com/layers/github-actions/github-oidc-with-aws/

How to use GitHub OIDC with AWS | The Cloud Posse Reference Architecture

This is a detailed guide on how to integrate GitHub OpenID Connect (OIDC) with AWS to facilitate secure and efficient authentication and authorization for GitHub Actions, without the need for permanent (static) AWS credentials, thereby enhancing security and simplifying access management. First we explain the concept of OIDC, illustrating its use with AWS, and then provide the step-by-step instructions for setting up GitHub as an OIDC provider in AWS.

1

2024-10-15

jose.amengual avatar
jose.amengual

Continuing my vendoring journey with Atmos, I would like to avoid having to do this:

 included_paths:
          - "**/components/**"
          - "**/*.md"
          - "**/stacks/**"
        excluded_paths: 
          - "**/production/**"
          - "**/qa/**"
          - "**/development/**"
          - "**/staging/**"
          - "**/management/**"
         
jose.amengual avatar
jose.amengual

the subdir regex does not work

jose.amengual avatar
jose.amengual

I had to do that so I can only vendor the stacks/sandbox stack

jose.amengual avatar
jose.amengual

the expression "**/stacks/sandbox/**" does not work

jose.amengual avatar
jose.amengual

it would be nice if the ** could work at any level

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual yes, in order to support **/stacks/sandbox/**, you have to also add **/stacks/** - this is a limitation of the Go lib that Atmos uses. But obviously you don’t want to use that b/c you have to then exclude all other folders in excluded_paths. We’ll have to review the lib

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is in our backlog - I think we can probably get to it pretty soon

1
github3 avatar
github3
05:43:44 AM

feat: support for .yml and .yaml file extensions for component vendoring @RoseSecurity (#725)

what

• Support for .yml and .yaml when vendoring using component.yaml

why

• The tool is strict about needing component.yaml; the file ending for YAML files is a matter of preference, and both should be accepted.

testing

make build

component.yml

❯ ./build/atmos vendor pull -c aurora-postgres-resources
Pulling sources for the component 'aurora-postgres-resources' from 'github.com/cloudposse/terraform-aws-components.git//modules/aurora-postgres-resources?ref=1.511.0' into '/Users/infra/components/terraform/aurora-postgres-resources'

component.yaml

❯ ./build/atmos vendor pull -c aurora-postgres-resources
Pulling sources for the component 'aurora-postgres-resources' from 'github.com/cloudposse/terraform-aws-components.git//modules/aurora-postgres-resources?ref=1.511.0' into '/Users/infra/components/terraform/aurora-postgres-resources'

Missing both

❯ ./build/atmos vendor pull -c aurora-postgres-resources
component vendoring config file does not exist in the '/Users/infra/components/terraform/aurora-postgres-resources' folder

references

• Closes the following issue

Summary by CodeRabbit

Summary by CodeRabbit

New Features
• Enhanced file existence checks for component configuration, now supporting both .yaml and .yml file formats.

Refactor
• Streamlined variable declarations for improved readability without changing logic.

Add the guide to install atmos using aqua @suzuki-shunsuke (#720)

what

Add the guide to install atmos using aqua.

why

aqua is a CLI Version Manager written in Go.
aqua supports various tools including atmos, Terraform, Helm, Helmfile.

Confirmation

I have launched the webserver on my laptop according to the guide.

image

references

Summary by CodeRabbit

New Features
• Introduced a new installation method for Atmos using the aqua CLI version manager.
• Added a dedicated tab in the installation guide for aqua, including instructions for setup and usage.

Documentation
• Updated the “Install Atmos” document to enhance user guidance on installation options.

github3 avatar
github3
06:12:52 AM

Improve error handling @haitham911 (#726)

what

• Improve error handling; check log level Trace for detailed trace information

why

• Print detailed error only when log level is Trace

Document vendoring from private git repos @osterman (#723)

what

• Document how to vendor from private GitHub repos • Document template syntax for vendoring

why

• The syntax is a bit elusive, and it’s a common requirement

2024-10-16

jose.amengual avatar
jose.amengual

When do you think we can get https://github.com/cloudposse/github-action-atmos-affected-stacks updated to use atmos 1.92.0? Are you guys ok if I push a PR?

cloudposse/github-action-atmos-affected-stacks

A composite workflow that runs the atmos describe affected command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, please open a PR, thank you

cloudposse/github-action-atmos-affected-stacks

A composite workflow that runs the atmos describe affected command

1
jose.amengual avatar
jose.amengual
#54 Adding new --stack parameter

what

• Upgrade default version to 1.92.0
• Add new --stack option supported on >1.90.0

why

• To allow filtering changes by stack

references

cloudposse/atmos#714

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov

jose.amengual avatar
jose.amengual

I think Igor was waiting for a review

jose.amengual avatar
jose.amengual

this one’s Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah yea, this just seemed untenable

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
    - name: Retrieve Plan (Azure)
      if: ${{ env.ACTIONS_ENABLED == 'true' &&
        steps.config.outputs.plan-repository-type != '' &&
        steps.config.outputs.plan-repository-type != 'null' &&
        steps.config.outputs.blob-account-name != '' &&
        steps.config.outputs.blob-account-name != 'null' &&
        steps.config.outputs.blob-container-name != '' &&
        steps.config.outputs.blob-container-name != 'null' &&
        steps.config.outputs.metadata-repository-type != '' &&
        steps.config.outputs.metadata-repository-type != 'null' &&
        steps.config.outputs.cosmos-container-name != '' &&
        steps.config.outputs.cosmos-container-name != 'null' &&
        steps.config.outputs.cosmos-database-name != '' &&
        steps.config.outputs.cosmos-database-name != 'null' &&
        steps.config.outputs.cosmos-endpoint != '' &&
        steps.config.outputs.cosmos-endpoint != 'null' }}
      uses: cloudposse/github-action-terraform-plan-storage@v1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We can’t keep that pattern. While it addresses the current problem, it doesn’t scale code-wise

jose.amengual avatar
jose.amengual

if you can comment on the PR and add your thoughts, I could work on it if Igor does not have the time

jose.amengual avatar
jose.amengual

Is there a way to always terraform apply all components of a stack?

jose.amengual avatar
jose.amengual

like atmos terraform apply -all -s mystack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently Atmos does not have that native command (we will add it).

You can do this using Atmos workflows (if you know all the components in the stacks) - this is static config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you can create a custom Atmos command - this is dynamic. You can call atmos describe stacks --stack

jose.amengual avatar
jose.amengual

or this :

atmos describe stacks -s sandbox --format=json|jq -r '.["sandbox"].components.terraform | keys[]'
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then use shell script and jq to loop over all components

jose.amengual avatar
jose.amengual

yes, I will use that for now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual add that to a custom command

jose.amengual avatar
jose.amengual

good idea

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
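For reference, a minimal sketch of such a custom command, wrapping the jq one-liner above in an atmos.yaml custom command (the apply-all name and the -auto-approve flag are illustrative assumptions, not an existing Atmos command):

commands:
  - name: apply-all
    description: Apply every Terraform component defined in a stack
    flags:
      - name: stack
        shorthand: s
        description: Name of the stack
        required: true
    steps:
      - |
        set -e
        # List the Terraform components in the stack, then apply each one in turn
        for component in $(atmos describe stacks -s {{ .Flags.stack }} --format=json | jq -r '.["{{ .Flags.stack }}"].components.terraform | keys[]'); do
          atmos terraform apply "$component" -s {{ .Flags.stack }} -auto-approve
        done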

2024-10-17

jose.amengual avatar
jose.amengual

I have an idea: the plan and apply actions support grabbing the config from the atmos.yaml .integrations.github.gitops.*, and since .integrations is a free map, I was wondering if we could add an option to pass a scope to the .integrations so that we can do something like:

github:
  sandbox:
    gitops:
      role:
        plan: sandboxrole
  gitops:
    role:
      plan: generalrole

by adding a new input to the action, something like this:

atmos-integrations-scope:
    description: The scope for the integrations config in the atmos.yaml
    required: false
    default: ".integrations.github.gitops"

we can allow a more flexible and backwards-compatible solution for people who need to have integrations per stack

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I don’t know that we would add something like atmos-integrations-scope, and we should probably move role assumption like this out of the action itself, as there are just too many ways it can work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, using the free-form map idea should still work for you, @jose.amengual

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then you can use this action to retrieve the roles https://github.com/cloudposse/github-action-atmos-get-setting

cloudposse/github-action-atmos-get-setting

A GitHub Action to extract settings from atmos metadata

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

from the integrations section

jose.amengual avatar
jose.amengual

the role was just an example; for me the problem is: I have a bucket per account where I want to store the plans with the storage action

jose.amengual avatar
jose.amengual

so I can’t use one github integration setting for all accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Integrations can be extended in stacks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…following inheritance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pretty sure at least..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

jose.amengual avatar
jose.amengual

but the github actions do not do that, they look at the atmos.config.integrations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we just need to change this action slightly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So today, it does this, like you said:

    - name: config
      shell: bash
      id: config
      run: |-
        echo "opentofu-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["opentofu-version"]')" >> $GITHUB_OUTPUT
        echo "terraform-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["terraform-version"]')" >> $GITHUB_OUTPUT
        echo "enable-infracost=$(atmos describe config -f json | jq -r '.integrations.github.gitops["infracost-enabled"]')" >> $GITHUB_OUTPUT        
        echo "aws-region=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].region')" >> $GITHUB_OUTPUT
        echo "terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT
        echo "terraform-state-table=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].table')" >> $GITHUB_OUTPUT
        echo "terraform-state-bucket=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].bucket')" >> $GITHUB_OUTPUT        
        echo "terraform-plan-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops.role.plan')" >> $GITHUB_OUTPUT
jose.amengual avatar
jose.amengual

correct

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, these are already required.

  component:
    description: "The name of the component to plan."
    required: true
  stack:
    description: "The stack name for the given component."
    required: true
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So instead, we should be retrieving the integration config for the component in the stack.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then it will do what you want.

jose.amengual avatar
jose.amengual

or that yes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So most things in atmos.yaml can be extended in the settings section of a component. Integrations is one of those.
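A hedged sketch of what that per-component override could look like in a stack manifest (the component name and role ARN below are illustrative only):

components:
  terraform:
    my-component:   # hypothetical component name
      settings:
        integrations:
          github:
            gitops:
              role:
                plan: "arn:aws:iam::111111111111:role/sandbox-gitops-plan"   # illustrative value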

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am not sure when @Igor Rodionov can get to it, but we could probably define it well enough so if you wanted to PR it, you could do it - since maybe it’s a blocker for you

jose.amengual avatar
jose.amengual

the other PRs that Igor has in flight, and mine to add the --stacks for describe-affected, are a blocker for me

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is it should be using this get settings action.

jose.amengual avatar
jose.amengual

( I mention them in the other thread)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
        - name: Get Atmos Multiple Settings
          uses: cloudposse/github-action-atmos-get-setting@main
          id: example
          with:
            settings: |
              - component: foo
                stack: core-ue1-dev
                settingsPath: settings.secrets-arn
                outputPath: secretsArn
              - component: foo
                stack: core-ue1-dev
                settingsPath: settings.secrets-arn
                outputPath: roleArn
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that will handle all the lookups the right way

jose.amengual avatar
jose.amengual

so you are suggesting to move away from using this

"terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT

to use github-action-atmos-get-setting? Are you ok with breaking changes, or should I keep the current use of atmos describe config and add a new section to use the github-action-atmos-get-setting?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am not sure that it would be breaking

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it would work the same way, but add the ability for configurations to be inherited

Markus avatar

I’m having some weird issues with the github-action-atmos-terraform-apply action, where it seemingly forgets which command it should run. I’m using OpenTofu and have set components.terraform.command: tofu which the ...-plan action picks up perfectly fine (and it also works locally), but ...-apply ignores that setting and tries to use terraform which isn’t installed. Using ATMOS_COMPONENTS_TERRAFORM_COMMAND works, which makes me believe it’s an issue on how the config is read (although it’s the only thing that is being ignored).

I’m using the GitHub actions exactly as described in the docs.

I’ve combed through both actions to try and figure out what the difference is, but I got no clue (I did find a discrepancy on how the cache is loaded which led to the cache key not being found as the path is different). For context, it’s defined in atmos.yaml in the workspace root, not in /rootfs/usr/local/etc/atmos/atmos.yaml which is the config-path (the folder not the file) that’s defined when running the action.

Is there anything I should look out for to make this work? Happy to post config files, they are pretty standard.
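For reference, the setting Markus describes lives in atmos.yaml like this; the ATMOS_COMPONENTS_TERRAFORM_COMMAND env var he mentions is its environment-variable equivalent:

components:
  terraform:
    # Use OpenTofu instead of Terraform for all components
    command: tofu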

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov any ideas?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you set the atmos-config-path

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

?

Markus avatar

Yep, atmos-config-path is set and I verified it’s the correct value.

Markus avatar

This is the complete action execution:

Run cloudposse/github-action-atmos-terraform-apply@v2
  with:
    component: aurora-postgres
    stack: plat-euw1-prod
    sha: c60fd18fb31bea65c8bf5975913940623c3d98c6
    atmos-config-path: ./rootfs/usr/local/etc/atmos/
    atmos-version: 1.90.0
    branding-logo-image: <https://cloudposse.com/logo-300x69.svg>
    branding-logo-url: <https://cloudposse.com/>
    debug: false
    token: ***
  env:
    AWS_CLI_VERSION: 2
    AWS_CLI_ARCH: amd64
    VERBOSE: false
    LIGHTSAILCTL: false
    BINDIR: /usr/local/bin
    INSTALLROOTDIR: /usr/local
    ROOTDIR: 
    WORKDIR:

and the error message

exec: "terraform": executable file not found in $PATH

exec: "terraform": executable file not found in $PATH
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could this be related? read PR description https://github.com/cloudposse/atmos/pull/717

#717 ci: install Terraform to fix CI failure that Terraform is not found

what

Install Terraform using hashicorp/setup-terraform action in CI.

why

CI failed because Terraform wasn’t found.

https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449045566
https://github.com/cloudposse/atmos/actions/runs/11307359580/job/31449046010

Run cd examples/demo-context
all stacks validated successfully
exec: "terraform": executable file not found in $PATH

This is because ubuntu-latest was updated to ubuntu-24.04 and Terraform was removed from it.

https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2404-Readme.md

On the other hand, Ubuntu 22.04 has Terraform.

https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md

references

Summary by CodeRabbit

Chores
• Enhanced workflow for testing and linting by integrating Terraform setup in multiple job sections.
• Updated the lint job to dynamically retrieve the Terraform version for improved flexibility.

Markus avatar

I don’t think so, since terraform shouldn’t be called at all, tofu should.

When I was running this on GitHub-hosted runners (but apparently that has changed according to the PR), I was actually getting a different error that makes more sense now: it was about schemas not being found, because they were using the OpenTofu registry (registry.opentofu.org) while the plugins were downloaded from registry.terraform.io.

I can also see the difference between plans (OpenTofu has successfully been initialized) and applies (Terraform has successfully been initialized) in the output, now that I know what to look for.

Edit: going to rerun with ATMOS_LOGS_LEVEL=Trace and report back.

Markus avatar

Okay, I feel stupid now. :face_palm: Apparently my two atmos configs (rootfs/... and in the base directory) were out of sync and that caused the issues, because they are obviously not merged. I guess I misunderstood something in the last couple of years and did not re-read the docs properly. Anyways, thanks for your help, Erik!

What still baffles me though, is that the -plan action worked perfectly fine.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmm why do you have 2 atmos configs?

Markus avatar

To be honest, I’ve seen these 2 configs since the early examples in the Atmos repo and always thought that this was a common thing, so I never questioned it.

jose.amengual avatar
jose.amengual

and yet another interesting opportunity: since github-action-atmos-terraform-plan uses action/checkout inside the action, if you vendor files but do not commit them, the action/checkout wipes them out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it would help, checkout could be feature flagged

jose.amengual avatar
jose.amengual

ok, I will create a PR for that

jose.amengual avatar
jose.amengual

it does help but I will need to add a restore cache step too

jose.amengual avatar
jose.amengual

using the same flag

jose.amengual avatar
jose.amengual

basically something like this :

    - name: Checkout
      uses: actions/checkout@v4
      with:
        ref: ${{ inputs.sha }}
    
    - name: Restore atmos files cache
      uses: actions/cache/restore@v4
      with:
        path: atmos
        key: ${{ runner.os }}-atmosvendor
    
jose.amengual avatar
jose.amengual

the checkout needs to exist for the job to see the files of the repo in the new runner

2024-10-18

Dennis Bernardy avatar
Dennis Bernardy

Hey, I’m not sure if I’m getting this right. https://atmos.tools/core-concepts/components/terraform/backends/#terraform-backend-inheritance states that if I want to manage multiple accounts I need a separate bucket, DynamoDB table, and IAM role. So far so good, but if I run atmos describe stacks it seems like the first stack is running fine, but as soon as it tries to fetch the second stack I get Error: Backend configuration changed. I use init_run_reconfigure: true in atmos.yaml.

My backend configuration looks like this:

terraform:
  backend_type: s3
  backend:
    s3:
      profile: "{{ .vars.account_profile }}"
      encrypt: true
      key: "terraform.tfstate"
      bucket: "kn-terraform-backend-{{ .vars.account_id }}-{{ .vars.region }}"
      dynamodb_table: "TerraformBackendLock"

The profile is switched based on the stack.

Terraform Backends | atmos

Configure Terraform Backends.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you also using {{ atmos.Component ...}} ?

Terraform Backends | atmos

Configure Terraform Backends.

Dennis Bernardy avatar
Dennis Bernardy

Yes

Dennis Bernardy avatar
Dennis Bernardy

I’m using some components that are depending on the vpc id of the vpc component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, I think that could be a scenario not thoroughly tested.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are also working on a replacement of the {{ atmos.Component ...}} implementation, which is problematic for multiple reasons, chief among them performance, since everything must be evaluated every time - and since this is handled by the go template engine at load time, it’s slow. We’re working on a way to do this with YAML explicit types, and lazy loading.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, I’m really glad that you bring this use-case up, as I don’t believe we’ve considered it. The crux of the problem is the component is invoked nearly concurrently in multiple “init” contexts, so we would need to copy it somewhere so it can be concurrently initialized with different backends.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I can understand why you want to do this, and why it’s needed, but we need to think about how to do this. It’s further complicated by copying, since a root module (component), can have relative source = "../something" module statements

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the problem may be mitigated by using terraform data sources (in HCL) instead of atmos.Component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) for consideration…

1
Dennis Bernardy avatar
Dennis Bernardy

I will try it with a datasource. Thanks for your explanation

Dennis Bernardy avatar
Dennis Bernardy

Just another question, do you already have something in mind to replace the templating with? Just another templating or remote-state?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yaml explicit type functions - will probably be released this week, and will be used to define Atmos functions in YAML

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the functions will include getting a terraform output from a remote state, getting secrets from many different systems, etc

Dennis Bernardy avatar
Dennis Bernardy

Hey, just a quick catchup. This is not released yet, is it?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have the branch published but the PR is not yet ready

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We discussed this on Friday internally, and are considering introducing feature flags - as this should be considered experimental until we understand all the implications and edge cases.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We learned a lot from the template functions, and the way it was used was not the way we used it ourselves :-)

Dennis Bernardy avatar
Dennis Bernardy

Ah, good to know. I would gladly appreciate such feature flag and try it out

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I anticipate we should be able to get this wrapped up this week, but defer to @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dennis Bernardy are you using the latest Atmos release https://github.com/cloudposse/atmos/releases/tag/v1.100.0 ?

Please try it out and let me know if it fixes the issue with multiple backends

Clean Terraform workspace before executing terraform init. When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:

 1. default
 2. <the previously used workspace>
The prompt forces the user to always make a selection (which is error-prone), and also makes it complicated when running on CI/CD.

The PR adds the logic that deletes the .terraform/environment file from the component directory before executing terraform init. The .terraform/environment file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure. We delete the file before executing terraform init to prevent the Terraform prompt asking to select the default or the previously used workspace.
Dennis Bernardy avatar
Dennis Bernardy

Looks like the same error still:

template: describe-stacks-all-sections:56:24: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1

Error: Backend configuration changed

A change in the backend configuration has been detected, which may require
migrating existing state.

If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I see you are using atmos.Component (and multiple backends). This case is not solved yet, we know this is an issue and discussed it (will have a solution for that prob this week)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we will probably have a solution for that today, but no ETA

Dennis Bernardy avatar
Dennis Bernardy

Ok, I’m looking forward to it

Ryan avatar

I’m trying to decide something similar with my backend. I have everything right now in Govcloud West, but was hoping to monorepo East as part of the same deployment repository. Unsure if anyone’s done that here.

tretinha avatar
tretinha

Is it expected to get “<no value>” when I describe a stack that has one of its values coming from a datasource like Vault, or does that mean that I’m not grabbing the value correctly? Thanks!

tretinha avatar
tretinha

Just bumping this up, in case anyone knows something about it. I get a map[] when I don’t specify a key so I assume it’s correctly accessing Vault? But it’s weird that the map is always empty

Dennis Bernardy avatar
Dennis Bernardy

How do you access that value?

tretinha avatar
tretinha

Hey, I’m trying to get it like this:

- name: KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY
  value: '{{ (datasource "vault-dev" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
tretinha avatar
tretinha

And this is part of a ECS “environment” list

tretinha avatar
tretinha

This works, for example:

gomplate -d vault=vault+http:/// -i '{{(datasource "vault" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'

Given the proper vault token and addr are set

tretinha avatar
tretinha

And I’ve set the vault-dev data source in my atmos.yaml

tretinha avatar
tretinha

And I’m setting an environment variable in my terminal with VAULT_TOKEN before running atmos

Dennis Bernardy avatar
Dennis Bernardy

Ah. That looks like a different approach than ours. I only noticed recently that if you use .atmos.component to template outputs from a module you cannot get lists or maps. Maps specifically only return no_value. Maybe that limitation also applies here and templating in general is only supported for strings, but that is better confirmed by someone, as this is just a suspicion

tretinha avatar
tretinha

Hmm, this is interesting, the KC_SPI… var should be a string, if I try to get the whole secret path, like not specifying the ).KC... part, I get a “map[]”, which seems to be an empty map. The worst part is not knowing what I’m doing wrong lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I only noticed recently that if you use .atmos.component to template outputs from a module you can not get lists or maps.
See this workaround

https://sweetops.slack.com/archives/C031919U8A0/p1727909658480679?thread_ts=1726147975.215489&cid=C031919U8A0

I was able to work around issues with passing more complex objects between components by changing the default delimiter for templating

Instead of the default delimiters: ["{{", "}}"]

Use delimiters: ["'{{", "}}'"].

It restricts doing some things like having a string prefix or suffix around the template, but there are other ways to handle this.

      vars:
        vpc_config: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_attrs) }}'

works with a object like

output "vpc_attrs" {
  value = {
    vpc_id          = "test-vpc-id"
    subnet_ids      = ["subnet-01", "subnet-02"]
    azs             = ["us-east-1a", "us-east-1b"]
    private_subnets = ["private-subnet-01", "private-subnet-02"]
    intra_subnets   = ["intra-subnet-01", "intra-subnet-02"]
  }
}
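A minimal sketch of where that delimiter override lives, assuming the same templates settings block shown elsewhere in this thread:

templates:
  settings:
    enabled: true
    # Non-default delimiters so complex objects survive YAML parsing, per the workaround above
    delimiters: ["'{{", "}}'"]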
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, we’re working on a better long-term solution using YAML explicit types. ETA this week.

tretinha avatar
tretinha

This is interesting, do you think this applies to my issue with the vault gomplate data source?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmm… probably less so, since you’re trying to retrieve a string.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you tried enabling ATMOS_LOGS_LEVEL=Trace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It might shed more light

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you able to share your datasource configuration in atmos.yml for

vault-dev

tretinha avatar
tretinha

I currently have Trace set in my atmos.yaml, it doesn’t output any errors. Yes, sure:

templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        vault-prod:
          url: "vault+<http://vault.companydomain:8200>"
        vault-dev:
          url: "vault+<http://vault-dev.companydomain:8200>"
tretinha avatar
tretinha
gomplate -d "vault-dev=vault+<http://vault-dev.companydomain:8200/>" -i '{{ (datasource "vault-dev" "infrastructure/keycloak").RANDOM_SECRET_KEY }}'

works on the same terminal, in case it helps

tretinha avatar
tretinha

I tried it without specifying an actual secret too and got a “map[]” in the value field of my variable. This is why I think I’m not correctly grabbing the variable but I just wanted to confirm

tretinha avatar
tretinha

For context, I’m trying to grab the value like this:

- name: KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY
  value: '{{ (datasource "vault-dev" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'
tretinha avatar
tretinha

Ah, also wanted to mention that when I do something like:

gomplate -d vault=vault+http:/// -i '{{(datasource "vault" "infrastructure/keycloak").KC_SPI_REALM_RESTAPI_EXTENSION_SCIM_LICENSE_KEY }}'

I do receive a value. Also, I’ve set up a custom command just to see if I’m getting the VAULT_TOKEN correctly inside atmos and it seems that I am:

$ atmos validate-token

Executing command:
echo "VAULT_TOKEN is: $VAULT_TOKEN"

VAULT_TOKEN is: [[redacted]]

2024-10-19

github3 avatar
github3
02:59:50 PM

Fix: condition to display help interactive menu @Cerebrovinny (#724)

what

• Ensure that the usage is displayed only when invoking the help commands or when the help flag is set

why

• Running incorrect commands in Atmos caused the output to be an interactive help menu forcing the user to manually exit the UI

jose.amengual avatar
jose.amengual

is it possible to use the cloudposse/github-action-atmos-get-setting@v2 action to retrieve a setting that is not at the component level and is outside the terraform/components? (like a var or anything else)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you give an example

jose.amengual avatar
jose.amengual

I can get this no problem

- component: ${{ inputs.component }}
  stack: ${{ inputs.stack }}
  settingsPath: settings.integrations.github.gitops.role
  outputPath: integrations-gitops
jose.amengual avatar
jose.amengual

but then I thought about doing this :

import:


vars:
  location: west
  environment: us3
  namespace: dev

settings:
  github:
    actions_enabled: true
  integrations:
    github:
      gitops:
        region: "westus3"
        role: "arn:aws:iam::123456789012:role/atmos-terraform-plan-gitops"
        table: "terraform-state-lock"

components:
   terraform:
    ......
jose.amengual avatar
jose.amengual

but I do not think that will work since the stack yaml is not free form as I understand it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But that is then inherited by all components, so no need to retrieve it from anywhere else

jose.amengual avatar
jose.amengual

but this is on a stack file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thats fine

jose.amengual avatar
jose.amengual

this goes back to my questions about

 echo "aws-region=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].region')" >> $GITHUB_OUTPUT
        echo "terraform-state-role=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].role')" >> $GITHUB_OUTPUT
        echo "terraform-state-table=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].table')" >> $GITHUB_OUTPUT
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s still inherited, right?

jose.amengual avatar
jose.amengual

for the plan action

jose.amengual avatar
jose.amengual

the integrations config lives in atmos.yaml (we could call that global)

jose.amengual avatar
jose.amengual

if I add/override the settings at the component level, then I can retrieve it like this:

- component: ${{ inputs.component }}
  stack: ${{ inputs.stack }}
  settingsPath: settings.integrations.github.gitops.role
  outputPath: integrations-gitops
jose.amengual avatar
jose.amengual

from

cosmosdb:
      settings:
        github:
          actions_enabled: true
        integrations:
          github:
            gitops:
              region: "westus3"
              role: "arn:aws:iam::123456789012:role/atmos-terraform-plan-gitops"
              table: "terraform-state-lock"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am on my phone so I cannot give examples

jose.amengual avatar
jose.amengual

I was trying to avoid having to do it per component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The point is that we should not be retrieving the value from integrations, we should be using the settings action but we are not

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You are only applying 1 component at a time, so this is ideal - use settings action

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am pretty sure everything in atmos integrations is just deepmerged into .settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov should not be calling jq on values in the atmos config, but instead using the final value. Then it works both the way you write it here, with putting it in a stack config, as well as putting it in atmos yaml

jose.amengual avatar
jose.amengual

understand

jose.amengual avatar
jose.amengual

I started implementing your suggestion as follows:

- name: Get atmos settings
  uses: cloudposse/github-action-atmos-get-setting@v2
  id: component
  with:
    settings: |
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: settings.github.actions_enabled
        outputPath: enabled
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: component_info.component_path
        outputPath: component-path
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: atmos_cli_config.base_path
        outputPath: base-path
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: command
        outputPath: command
      - component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        settingsPath: settings.integrations.github.gitops.role
        outputPath: integrations-gitops
jose.amengual avatar
jose.amengual

and use that to get the settings for the gitops integration

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On my phone that looks like the right trajectory

jose.amengual avatar
jose.amengual

and with glasses?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would have to compare what was there originally with this new one, and my mental cache scrolling up and down (or late-night willpower) is depleted. The real question is, is that working? If so, I think it’s probably good

jose.amengual avatar
jose.amengual

yes, it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sweet! Let’s have @Igor Rodionov review.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Gabriela Campana (Cloud Posse)

1
jose.amengual avatar
jose.amengual

I will add a few more things and I will create a PR

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @jose.amengual !

jose.amengual avatar
jose.amengual

https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/92, needs doc changes but I want to see if that seems ok so far

#92 Replace describe-config for atmos-get-setting, add restore cache options and azure storage options

what

This is based on #90 that @goruha was working on.

• Replace the describe config with cloudposse/github-action-atmos-get-setting
• Replace if statements to check for the azure repository type
• Add azure blob storage and cosmos
• Add cache restore option for vendor files outside git

why

• To support Azure and better config settings

references

#90

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov up

Igor Rodionov avatar
Igor Rodionov

@jose.amengual why do we need to cache the whole repo while we can check it out?

jose.amengual avatar
jose.amengual

not the whole repo, but just the stacks folder

jose.amengual avatar
jose.amengual

I have been testing a lot , so I tried many things

jose.amengual avatar
jose.amengual

somehow the cache is not getting created

Igor Rodionov avatar
Igor Rodionov

what is the benefit? The hit rate would be low, as the key is based on the sha

jose.amengual avatar
jose.amengual

the problem is when using vendored files or, in this case, templated stack files that are changed in another step but not committed to the repo

jose.amengual avatar
jose.amengual

Any file that is part of the repo (in git) will be overwritten by action/checkout after the sed command runs

jose.amengual avatar
jose.amengual

same happens with any untracked files

Igor Rodionov avatar
Igor Rodionov

could you provide an example of untracked files?

jose.amengual avatar
jose.amengual

untracked files are a problem I have when I use atmos vendor, since I vendor stack files from another repo

jose.amengual avatar
jose.amengual

for the github-action-atmos-terraform-plan tests, something similar happens

jose.amengual avatar
jose.amengual

Keep in mind that Erik wanted to move the integration settings from atmos.yaml to each component

Igor Rodionov avatar
Igor Rodionov

why do you not commit the vendor dir into the repo?

Igor Rodionov avatar
Igor Rodionov

we vendor terraform components and commit them

Igor Rodionov avatar
Igor Rodionov

let’s use the same for stacks vendoring

jose.amengual avatar
jose.amengual

Let’s focus on the test issue, because if we solve that problem, my use case will be solved too

Igor Rodionov avatar
Igor Rodionov

sure

Igor Rodionov avatar
Igor Rodionov

what you need to do is

Igor Rodionov avatar
Igor Rodionov

roll back changes like this

Igor Rodionov avatar
Igor Rodionov
Igor Rodionov avatar
Igor Rodionov

I suppose you would have to get rid of this part

Igor Rodionov avatar
Igor Rodionov
integrations:
  github:
    gitops:
      terraform-version: 1.5.2
      infracost-enabled: __INFRACOST_ENABLED__
      artifact-storage:
        region: __STORAGE_REGION__
        bucket: __STORAGE_BUCKET__
        table: __STORAGE_TABLE__
        role: __STORAGE_ROLE__
        plan-repository-type: azureblob
        blob-account-name: 
        blob-container-name: 
        metadata-repository-type: 
        cosmos-container-name: 
        cosmos-database-name: 
        cosmos-endpoint: 
      role:
        plan: __PLAN_ROLE__
        apply: __APPLY_ROLE__
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
Igor Rodionov avatar
Igor Rodionov

here

Igor Rodionov avatar
Igor Rodionov
integrations:
  github:
    gitops:
      terraform-version: 1.5.2
      infracost-enabled: __INFRACOST_ENABLED__
      artifact-storage:
        region: __STORAGE_REGION__
        bucket: __STORAGE_BUCKET__
        table: __STORAGE_TABLE__
        role: __STORAGE_ROLE__
        plan-repository-type: azureblob
        blob-account-name: 
        blob-container-name: 
        metadata-repository-type: 
        cosmos-container-name: 
        cosmos-database-name: 
        cosmos-endpoint: 
      role:
        plan: __PLAN_ROLE__
        apply: __APPLY_ROLE__
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")
Igor Rodionov avatar
Igor Rodionov

and replace it with

Igor Rodionov avatar
Igor Rodionov

and add here

Igor Rodionov avatar
Igor Rodionov
components:
  terraform:
    settings:
      github:
        gitops:
          terraform-version: 1.5.2
          infracost-enabled: __INFRACOST_ENABLED__
          artifact-storage:
            region: __STORAGE_REGION__
               .....
    
Igor Rodionov avatar
Igor Rodionov

and so on

jose.amengual avatar
jose.amengual

can you have settings at that level?

Igor Rodionov avatar
Igor Rodionov

@Andriy Knysh (Cloud Posse) would the suggestion work?

jose.amengual avatar
jose.amengual

and what do we do with those files after the template runs? This part here:

 sed -i -e "s#__INFRACOST_ENABLED__#false#g" "$file"
            sed -i -e "s#__STORAGE_REGION__#${{ env.AWS_REGION }}#g" "$file"          
            sed -i -e "s#__STORAGE_BUCKET__#${{ secrets.TERRAFORM_STATE_BUCKET }}#g" "$file"
            sed -i -e "s#__STORAGE_TABLE__#${{ secrets.TERRAFORM_STATE_TABLE }}#g" "$file"          
            sed -i -e "s#__STORAGE_TABLE__#${{ secrets.TERRAFORM_STATE_TABLE }}#g" "$file"
            sed -i -e "s#__STORAGE_ROLE__#${{ secrets.TERRAFORM_STATE_ROLE }}#g" "$file"
            sed -i -e "s#__PLAN_ROLE__#${{ secrets.TERRAFORM_PLAN_ROLE }}#g" "$file"
            sed -i -e "s#__APPLY_ROLE__#${{ secrets.TERRAFORM_PLAN_ROLE }}#g" "$file"
Igor Rodionov avatar
Igor Rodionov

@jose.amengual it seems my suggestion would not work

Igor Rodionov avatar
Igor Rodionov

let me think about this

jose.amengual avatar
jose.amengual

that is why I added the cache

jose.amengual avatar
jose.amengual

in my fork, this works just fine

Igor Rodionov avatar
Igor Rodionov

I got the problem, I’ll come back with a workaround tomorrow. It is night in my time zone now and I’m too tired to find the right solution. But caching still does not look good to me, as it breaks the GitOps concept

jose.amengual avatar
jose.amengual

the caching is optional in my code for that reason, but let me know tomorrow

jose.amengual avatar
jose.amengual

to explain the problem and scenarios, I will fully describe the use cases:

1.- github-action-atmos-terraform-plan changes and cache issues:

for https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/92, Erik wanted to move the integration settings to the component instead of the atmos.yaml file. To accomplish that I needed to add the integration settings to the test stacks and use cloudposse/github-action-atmos-get-setting@v2, so now the test stacks look like this:

components:
  terraform:
    foobar/changes:
      component: foobar
      settings:
        github:
          actions_enabled: true
          gitops:
            terraform-version: 1.5.2
            infracost-enabled: __INFRACOST_ENABLED__
            artifact-storage:
              region: __STORAGE_REGION__
              bucket: __STORAGE_BUCKET__
              table: __STORAGE_TABLE__
              role: __STORAGE_ROLE__
              plan-repository-type: s3
              blob-account-name: 
              blob-container-name: 
              metadata-repository-type: dynamo
              cosmos-container-name: 
              cosmos-database-name: 
              cosmos-endpoint: 
            role:
              plan: __PLAN_ROLE__
              apply: __APPLY_ROLE__
      vars:
        example: blue
        enabled: true
        enable_failure: false
        enable_warning: true

Previously, we relied solely on atmos.yaml:

mkdir -p ${{ runner.temp }}
cp ./tests/terraform/atmos.yaml ${{ runner.temp }}/atmos.yaml

This workaround no longer works because the component settings now manage the integration settings. The new approach requires stack files to persist after replacing values. We still set atmos-config-path for the actions, but the stacks require atmos_base_path, which isn’t available in the current action inputs.

The action calculates the base path as follows:

    - name: Set atmos cli base path vars
      if: ${{ fromJson(steps.atmos-settings.outputs.settings).enabled }}
      shell: bash
      run: |-
        # Set ATMOS_BASE_PATH allow `cloudposse/utils` provider to read atmos config from the correct path 
        ATMOS_BASE_PATH="${{ fromJson(steps.atmos-settings.outputs.settings).base-path }}"
        echo "ATMOS_BASE_PATH=$(realpath ${ATMOS_BASE_PATH:-./})" >> $GITHUB_ENV

The issue is that without the base path, Atmos can’t locate the stack files. The checkout action wipes untracked or modified files, which is why we need to use the cache action. This cache will restore files with the template replacements after the checkout action. Currently: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L62 My-PR: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/.github/workflows/integration-tests.yml#L32

The problem: In my tests, the caching worked fine, but during the actual test runs, no cache was created. You can see the cache setup here:

https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/.github/workflows/integration-tests.yml#L46 and https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/add-cache-and-azure/action.yml#L72

But the cache isn’t created during test runs https://github.com/cloudposse/github-action-atmos-terraform-plan/actions/caches, Any insights into this issue would be appreciated.

2.- github-action-atmos-terraform-plan vendoring support:

in the following scenario :

pepe-iac-repo = the atmos monorepo that contains all the stacks and all components but does not deploy any services, only core functionality like vpcs, subnets, firewalls, etc.
pepe-service-repo = a service repository that holds code for a serverless function and has the following structure:

/pepe-service-repo
  /atmos
    /stacks
      /sandbox
        service.yaml
    /config
      /atmos.yaml

The goal is to keep the service stack (service.yaml) near the developers, without needing to submit two PRs (one for core infra in the pepe-iac-repo and another for the service). The service stack should only describe service-related components, excluding core infra.

The deployment process is simple: create a PR, vendor all the components (atmos vendor pull) plus the sandbox stack and catalog from the pepe-iac-repo, then run github-action-atmos-terraform-plan after identifying all the components from the service.yaml

Issue: The action/checkout wipes untracked files (like service.yaml), so the cache action must restore these files after the checkout to preserve the service.yaml during the process.

I personally believe that this is not against any GitOps principle; in fact, a similar mechanism can be found in Atlantis with pre-workflow-hooks and post-workflow-hooks.
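A hedged sketch of what the stacks-vendoring config in the pepe-service-repo might look like (the org name, paths, and ref are illustrative; the layout follows the standard Atmos vendor.yaml schema):

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: pepe-service-vendor
  description: Pull the shared catalog and sandbox stacks from the core IaC repo
spec:
  sources:
    - component: "sandbox-stacks"   # illustrative name, usable with `atmos vendor pull -c sandbox-stacks`
      source: "github.com/acme-org/pepe-iac-repo.git//stacks/sandbox?ref={{.Version}}"
      version: "main"
      targets:
        - "atmos/stacks/sandbox"
      included_paths:
        - "**/*.yaml"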

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Why don’t we just make checkout optional? That way you can manage the checkout outside of the action?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you able to share your workflows?

jose.amengual avatar
jose.amengual

I believe I tried that and it didn’t persist

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov sounded like he understood better than me. But if all actions are executed in a single workflow run, not spanning multiple jobs, caching is not needed

jose.amengual avatar
jose.amengual

but I could be wrong

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Well, you are right if there are multiple checkouts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But we don’t need to enforce multiple checkouts

jose.amengual avatar
jose.amengual

if the checkout persists, then we do the checkout outside and that should do it

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our actions often combine a lot of repetitive steps in our workflows, but those might cause problems in your workflows. But it would conceivably be easy to make checkouts optional.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Exactly..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Changes to files definitely persist between steps in a job, but they definitely do not persist between jobs or between workflow executions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But steps in a workflow can “undo” changes, which seems like what you are seeing. In that case, we need to prevent the undoing. :-)

jose.amengual avatar
jose.amengual

testing

jose.amengual avatar
jose.amengual

well that worked

1
jose.amengual avatar
jose.amengual

I have no idea why it didn’t work when I tried it, maybe it was because at that point I didn’t know the action/checkout was in the action.yaml

jose.amengual avatar
jose.amengual

ok, I will change the input name and the other tests tomorrow and do some cleanup

1
jose.amengual avatar
jose.amengual

that is the easiest solution

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for the thorough write up. Made it easier to help.

1
Igor Rodionov avatar
Igor Rodionov

@jose.amengual I agree with an additional input that will skip the checkout

fast_parrot1
Igor Rodionov avatar
Igor Rodionov

you can provide PR with that and I will approve it

jose.amengual avatar
jose.amengual

will do a bit later

Igor Rodionov avatar
Igor Rodionov

Thanks

2024-10-20

Kalman Speier avatar
Kalman Speier

hey folks, i would like to use secrets from 1password, is there a way to replace the terraform command with op run --env-file=.env -- tofu ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) you did something similar before for vault secrets, no?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we used a custom Atmos command to access Hashi Vault

Kalman Speier avatar
Kalman Speier

can you share that command snippet please?

Kalman Speier avatar
Kalman Speier

just for reference, to make sure i’m doing it right.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

overriding an existing command and calling it in the custom command creates a loop (as you see in your code)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can create a custom command with a diff name, or you can call just terraform (w/o Atmos) from the custom command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Erik Osterman (Cloud Posse) we need to think about adding hooks (e.g. before_exec, after_exec) to Atmos commands - this will allow calling other commands before executing any Atmos command

Kalman Speier avatar
Kalman Speier

i got this if i remove atmos and call just terraform, as you suggested:

│ Error: Too many command line arguments
│
│ Expected at most one positional argument.

the command:

- terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
commands:
  - name: vault
    description: Get vault token
    verbose: false
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: stack
        required: true
    component_config:
      component: "{{ .Arguments.component }}"
      stack: "{{ .Flags.stack }}"
    steps:
      - |
        set -e
        AWS_ROLE_ARN=......
        credentials=$(aws sts assume-role --role-arn "$AWS_ROLE_ARN" --role-session-name vaultSession --duration-seconds 3600 --output=json)
        export AWS_ACCESS_KEY_ID=$(echo "${credentials}" | jq -r '.Credentials.AccessKeyId')
        export AWS_SECRET_ACCESS_KEY=$(echo "${credentials}" | jq -r '.Credentials.SecretAccessKey')
        export AWS_SESSION_TOKEN=$(echo "${credentials}" | jq -r '.Credentials.SessionToken')
        export AWS_EXPIRATION=$(echo "${credentials}" | jq -r '.Credentials.Expiration')
        VAULT_TOKEN=$(vault login -token-only -address=https://"$VAULT_ADDR" -method=aws header_value="$VAULT_ADDR" role="$VAULT_ROLE")
        echo export VAULT_TOKEN="$VAULT_TOKEN"
1
Kalman Speier avatar
Kalman Speier

hooks can be very handy also

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

example of atmos vault command ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


got this if i remove atmos and call just terraform, as you suggested:

Kalman Speier avatar
Kalman Speier

thanks for sharing the above command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can’t just call terraform, you need to prepare vars, backend and other things that Atmos does

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can call Atmos commands to generate varfile and backend, then call terraform apply

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s easier to create a separate command, e.g. atmos terraform provision, and call op and atmos terraform apply from it)

Kalman Speier avatar
Kalman Speier

ok thanks

Kalman Speier avatar
Kalman Speier

another way i was thinking of is a template function; is it possible to create a custom template function in atmos?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

good question, currently not, but we are thinking about it.

Kalman Speier avatar
Kalman Speier

gomplate would be even better but there is no support for 1password it seems

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we already have custom template functions in Atmos, e.g. atmos.Component, but it’s inside Atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

gomplate is supported, I’m not sure about 1pass (did not see that)

Kalman Speier avatar
Kalman Speier

it would be amazing to have support for at least a generic CLI call template function

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, atmos.Exec - we can do that

Kalman Speier avatar
Kalman Speier

exactly, i was thinking similar

vars:
  foo: '{{ (atmos.Exec "op read ...").stdout }}'
Kalman Speier avatar
Kalman Speier

or something like this

Kalman Speier avatar
Kalman Speier

or consider the integration of this: https://github.com/helmfile/vals, this works fantastic in helmfiles

1
Kalman Speier avatar
Kalman Speier

i can simply reference vars, and vals replaces them during templating: token: ref+op://infra/ghcr-pull/token and they support many sources, including sops

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s nice, thanks for sharing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can prob embed the binary in Atmos and reuse it

Kalman Speier avatar
Kalman Speier

it’s a go library you can easily embed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriela Campana (Cloud Posse) please create the following Atmos tasks (so we track them):

• Add before_exec and after_exec hooks to Atmos commands (configured in atmos.yaml)
• Implement atmos.Exec custom template function to be able to execute any external command or script and return the result

• Review https://github.com/helmfile/vals, and implement similar functionality in Atmos (embed the lib into Atmos if feasible/possible)

helmfile/vals

Helm-like configuration values loader with support for various sources

1
1
Kalman Speier avatar
Kalman Speier

this would be fantastic. secrets handling is really overlooked in tooling, if you’re using something other than vault.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, agree, once you get to the secrets handling, it’s complicated and takes a lot of time and effort. We’ll improve it in Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks for the conversation

Kalman Speier avatar
Kalman Speier

thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note atmos supports multiple other ways to retrieve secrets via SSM, S3, Vault, etc. These are via the Gomplate data sources. Gomplate doesn’t have one for 1Password.
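For example, an SSM-backed Gomplate datasource can be configured the same way as the Vault one shown earlier in this thread; the parameter path and key below are illustrative assumptions:

templates:
  settings:
    enabled: true
    gomplate:
      enabled: true
      datasources:
        ssm:
          # gomplate's AWS Systems Manager Parameter Store scheme; the path is illustrative
          url: "aws+smp:///myapp/dev/"

A value could then be referenced in a stack as something like '{{ (datasource "ssm" "db_password").Value }}' (again, illustrative).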

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also have plans to support SOPS

Igor M avatar

Is using Gomplate a valid (and recommended) approach for passing secrets into Atmos?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s just one way that is currently supported

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll implement the other ways described above

Igor M avatar

I am just curious if it’s worth investing in this approach or holding off until something more solid is in place

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll be working on it, might have something next week, although no ETA

Igor M avatar

Got it. No urgency here - will hold off on using Gomplate for now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I knew I had seen something like this before!! Thanks for sharing. We’ll use that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) we should discuss how we can leverage vals. Do you think it should be another data source?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me review it, we’ll discuss the implementation (interface)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it could be another data source (e.g. atmos.XXXX), and/or YAML explicit type functions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya explicit type

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s do it that way

1
Kalman Speier avatar
Kalman Speier

it seems i can create a custom command in atmos.yaml, but overriding terraform apply is not working. the command just hangs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share your custom command

Kalman Speier avatar
Kalman Speier
- name: apply2
  arguments:
    - name: component
      description: Name of the component
  flags:
    - name: stack
      shorthand: s
      description: Name of the stack
      required: true
    - name: auto-approve
      shorthand: a
      description: Auto approve
  steps:
    - op run --no-masking --env-file=.env -- atmos terraform apply {{ .Arguments.component }} -s {{ .Flags.stack }}
Kalman Speier avatar
Kalman Speier

it works, but if i rename it to apply to override the command, it just hangs..

kinda makes sense, as it calls itself eventually?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As implemented that creates a recursive command

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s why it doesn’t return

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Renaming apply to apply2 works, because it’s no longer recursive

Kalman Speier avatar
Kalman Speier

yes, i think so too. but in the docs there is an example hence i thought it could work.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you CAN override any existing Atmos command (native or custom), but you can’t call the command from itself

Kalman Speier avatar
Kalman Speier

sure, thanks!

jose.amengual avatar
jose.amengual

can you do this?

- path: "sandbox/service"
    context: {}
    skip_templates_processing: false
    ignore_missing_template_values: false
    skip_if_missing: true

to then import the file after is being created?

jose.amengual avatar
jose.amengual

so, run atmos vendor, then atmos terraform plan..... and I’m expecting the service.yaml to be imported afterwards

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically that stack import is optional?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What causes the file to get created?

jose.amengual avatar
jose.amengual

I need a valid stack file for atmos vendor to work

jose.amengual avatar
jose.amengual

that is my service.yaml

jose.amengual avatar
jose.amengual

then I vendor and bring all the other stacks files to complete the sandbox stack

jose.amengual avatar
jose.amengual

I was thinking about how I could have a conditional import, so that I could add new files to import on the service side

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a PR open now to eliminate that requirement, I think

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It shouldn’t be required to have stack configs to use vendoring

jose.amengual avatar
jose.amengual

ahhh ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Actually, looks like we didn’t - but will get that fixed this week

jose.amengual avatar
jose.amengual

no problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@haitham911eg let’s prioritize this fix

1
haitham911eg avatar
haitham911eg

ok, I’ll start work on it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#738 stop process stack config when cmd atmos vendor pull

what

• stop process stack config when cmd atmos vendor pull

why

• Atmos vendor should not require stack configs

references

• DEV-2689 • https://linear.app/cloudposse/issue/DEV-2689/atmos-vendor-should-not-require-stack-configs

Summary by CodeRabbit

New Features
• Updated vendor pull command to no longer require stack configurations.

Bug Fixes
• Maintained error handling and validation for command-line flags, ensuring consistent functionality.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Do not process stack configs when executing command atmos vendor pull and the stack flag is not specified @haitham911 (#738)

what

• Do not process stack configs when executing command atmos vendor pull and the stack flag is not specified

why

• Atmos vendor should not require stack configs if the stack flag is not provided

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual can you let me know as soon as you validated it?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is not working, cc @haitham911eg

haitham911eg avatar
haitham911eg

What’s the issue? I tested it yesterday

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you attempt to run atmos vendor pull with only an atmos.yaml and vendor.yaml file, it will error because no stacks folder exists, and furthermore, no stack files exist in the folder.

haitham911eg avatar
haitham911eg

ahh yes, I only tested the case where the include path doesn’t exist

haitham911eg avatar
haitham911eg

I will review the case where the stacks folder doesn’t exist

1
haitham911eg avatar
haitham911eg

@Erik Osterman (Cloud Posse)

haitham911eg avatar
haitham911eg
haitham911eg avatar
haitham911eg

I fixed it, it works without a stacks folder

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, cool - post screenshots in PR too

haitham911eg avatar
haitham911eg

I will send PR now with screenshots

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Add the label patch to your PR

1
haitham911eg avatar
haitham911eg
#740 stop process stack with command atmos vendor pull

what

• Do not process stack configs when executing command atmos vendor pull and the stack flag is not specified

why

• Atmos vendor should not require stack configs if the stack flag is not provided

references

• DEV-2689 • https://linear.app/cloudposse/issue/DEV-2689/atmos-vendor-should-not-require-stack-configs
image (1)
image (2)

1
github3 avatar
github3
12:31:22 AM

Updates to Installation Instructions @osterman (#549)

what

• Add terminal configuration instructions • Add installation instructions for fonts

why

• Not everyone is familiar with what makes atmos TUI look good =)

Read Atmos config and vendor config from .yaml or .yml @haitham911 (#736)

what

• Read Atmos config and vendor file from atmos.yaml or atmos.yml, vendor.yaml or vendor.yml • If both .yaml and .yml files exist, the .yaml file is prioritized

why

• Supports both YAML extensions

Improve logging in atmos vendor pull @haitham911 (#730)

What

• Added functionality to log the specific tags being processed during atmos vendor pull --tags demo. • Now, when running the command, the log will display: Processing config file vendor.yaml for tags {demo1, demo2, demo3}.

Why

• This update improves visibility by explicitly showing the tags during the pull operation

jose.amengual avatar
jose.amengual

do you guys allow GH action to create caches? https://github.com/cloudposse/github-action-atmos-terraform-plan/actions/runs/11432993177/job/31804302214

Run actions/cache@v4
  with:
    path: ./
    key: atmos
    enableCrossOsArchive: false
    fail-on-cache-miss: false
    lookup-only: false
    save-always: false
  env:
    AWS_REGION: us-east-2
Cache not found for input keys: atmos
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

the run failed because of

Error: Region is not valid: __STORAGE_REGION__
Igor Rodionov avatar
Igor Rodionov

we use the cache to speed up terraform init by caching providers. But the failure is not related to the cache

jose.amengual avatar
jose.amengual

it’s because the test components are templated and processed before the call to the action, but the action has actions/checkout, so it wipes out the changes. Since we moved from atmos.yaml settings to component settings, the test stacks are needed after the template execution, so that is why I wanted to create a cache and restore it (similar problem when you vendor stack files, they get wiped by the action)

Igor Rodionov avatar
Igor Rodionov

@jose.amengual I will check your PRs today or tomorrow.

jose.amengual avatar
jose.amengual

I want to finish fixing the tests, so I need to figure out this cache thing

2024-10-21

Kalman Speier avatar
Kalman Speier

is there any simple example regarding what’s the best practice to create a kubernetes cluster (let’s say gke but the provider doesn’t really matter) and deploy some kubernetes manifests to the newly provisioned cluster?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and for the remote state, using the cloudposse/stack-config/yaml//modules/remote-state module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cert-manager/remote-state.tf#L1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the remote state is done in TF, and in Atmos we just configure the variables for the remote-state module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that instead of using the remote-state TF module, you could prob use the atmos.Component template function (but we did not test it with the EKS components since all of them are using the remote-state TF modules now)
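
As a rough sketch of that approach (untested with the EKS components, as noted above; the component and output names below are hypothetical), the cluster outputs could be wired into another component’s variables via atmos.Component:

components:
  terraform:
    k8s-addons:
      vars:
        # hypothetical outputs exposed by the cluster component
        cluster_endpoint: '{{ (atmos.Component "eks-cluster" .stack).outputs.endpoint }}'
        cluster_ca_cert: '{{ (atmos.Component "eks-cluster" .stack).outputs.ca_certificate }}'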

Kalman Speier avatar
Kalman Speier

cool, thanks for sharing, i will check out these resources.

Kalman Speier avatar
Kalman Speier

so what i’m after really is how to provide kube host, token and cert from the cluster component to another component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do this in our EKS components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not personally familiar with the details

Kalman Speier avatar
Kalman Speier

thanks!

Kalman Speier avatar
Kalman Speier

what is the recommended way, atmos.Component template function or something else maybe?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(note, in Cloud Posse components, we rely more on terraform data sources, than atmos.Component)

Kalman Speier avatar
Kalman Speier

i see

github3 avatar
github3
02:20:56 PM

Do not process stack configs when executing command atmos vendor pull and the stack flag is not specified @haitham911 (#738)

what

• Do not process stack configs when executing command atmos vendor pull and the stack flag is not specified

why

• Atmos vendor should not require stack configs if the stack flag is not provided

MP avatar

Hi all - how do you usually handle replacing an older version of an atmos component with a new one? For example, I have an existing production EKS cluster defined in the “cluster” component in my “acme-production.yaml” stack file. I want to replace this with a new cluster component with different attributes.

I looked into using the same stack file and adding a new component called “cluster_1”, but that breaks some of my reusable catalog files that have the component name set to “cluster”. I know I can also just create a new stack file but that approach also seems not ideal.

Any advice is appreciated!

2024-10-22

Kalman Speier avatar
Kalman Speier

is it possible to reference a list with the atmos.Component function? it seems the generated tfvars will be a string instead of a list:

...
components:
  terraform:
    foo:
      vars:
        bar: '{{ (atmos.Component "baz" .stack).outputs.mylist }}'
...
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was able to work around issues with passing more complex objects between components by changing the default delimiter for templating

Instead of the default delimiters: ["{{", "}}"]

Use delimiters: ["'{{", "}}'"].

It restricts doing some things like having a string prefix or suffix around the template, but there are other ways to handle this.

      vars:
        vpc_config: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_attrs) }}'

works with an object like

output "vpc_attrs" {
  value = {
    vpc_id          = "test-vpc-id"
    subnet_ids      = ["subnet-01", "subnet-02"]
    azs             = ["us-east-1a", "us-east-1b"]
    private_subnets = ["private-subnet-01", "private-subnet-02"]
    intra_subnets   = ["intra-subnet-01", "intra-subnet-02"]
  }
}
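
For reference, a minimal sketch of where this delimiter change would live, assuming the templates.settings.delimiters option in atmos.yaml (check the Atmos template settings docs for the exact schema):

# atmos.yaml (sketch)
templates:
  settings:
    enabled: true
    # swap the default ["{{", "}}"] so the rendered JSON replaces the whole quoted string
    # instead of being nested inside it
    delimiters: ["'{{", "}}'"]
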
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re also working towards an alternative implementation we hope to release this week that will improve the DX and avoid templates.

Kalman Speier avatar
Kalman Speier

thanks!

Kalman Speier avatar
Kalman Speier

trying that toRawJson thing but it still seems i got strings. maybe i’m doing it wrong.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

do you see the tricky thing we’re doing with the delimiters?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The delimiter is '{{ not {{

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and }}' not }}

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This way, the Go template engine removes the '{{ and }}' and replaces it with the raw JSON

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

without the ' in the delimiter, the json gets encoded inside of a string, and does nothing

Kalman Speier avatar
Kalman Speier

ok, i see. and couldn’t that cause issues if i change the default delimiters?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Your mileage may vary.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The forthcoming YAML explicit types will be a much better solution, not requiring these types of workarounds.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g.

components:
  terraform:
    foo:
      vars:
        bar: !terraform.outputs "baz" "mylist"
Kalman Speier avatar
Kalman Speier

nice! any eta on that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we are gunning for EOW

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But it could slip. Anyways, it’s days not weeks away.

Kalman Speier avatar
Kalman Speier

sounds good thx

hamza-25 avatar
hamza-25

hey all,

is there a way to edit where the terraform init -reconfigure looks for AWS credentials? I want to select an AWS profile name dynamically via my cli flags and without hardcoding it into the terraform source code

My current non-dynamic solution is setting a local env variable with the aws profile name and atmos picks that up fine and finds my credentials. But is there a way to configure the atmos.yaml so that the terraform init -reconfigure looks for the aws profile in a flag in my cli command such as -s <stackname>? where the stackname matches my aws profile name?

so far it doesn’t look like it has an option like that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you familiar with atmos backend generation? That’s how we do it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Doesn’t have to be S3 (that’s just an example)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The backend configuration supports a profile name, so that’s how we do it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# <https://developer.hashicorp.com/terraform/language/backend/s3>
terraform:
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: "your-s3-bucket-name"
      dynamodb_table: "your-dynamodb-table-name"
      key: "terraform.tfstate"
      region: "your-aws-region"
      profile: "...."
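
A hedged sketch of how the profile could then vary per account through normal stack-level overrides (the paths and profile names below are hypothetical):

# stacks/orgs/acme/dev/_defaults.yaml
terraform:
  backend:
    s3:
      profile: "acme-dev-terraform"

# stacks/orgs/acme/prod/_defaults.yaml
terraform:
  backend:
    s3:
      profile: "acme-prod-terraform"
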
Bernie avatar

we use aws-vault and a make file if that’s helpful

hamza-25 avatar
hamza-25

Thanks both, Im going to explore these solutions

1

2024-10-23

Patrick McDonald avatar
Patrick McDonald

When working in a monorepo for Atmos with various teams organized under stacks/orgs/acme/team1, team2, etc., will the affected stacks GitHub Action detect changes in other teams’ stacks? At times, we only want to plan/apply changes to our team’s stacks and not those of other teams.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That makes sense… Yes, I think something like that is possible.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

looking it up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov I can’t find the setting where we can pass a jq filter

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I thought we had the option to filter affected stacks that way

Kalman Speier avatar
Kalman Speier

is it possible to get the path of a stack? for example:

...
components:
  terraform:
    kubeconfig:
      vars:
        filename: '{{ .stacks.base_path }}/{{ .stack }}/kubeconfig'
        ...
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe if you run atmos describe stacks you can see the complete data model. Anything in that structure should be accessible. I believe you will find a field that represents the file, then take the dirname of that. Gomplate probably provides a dirname function.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) may have another idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use atmos describe component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the output, you will see

component_info - a block describing the Terraform or Helmfile components that the Atmos component manages. The component_info block has the following sections:

component_path - the filesystem path to the Terraform/OpenTofu or Helmfile component

component_type - the type of the component (terraform or helmfile)

terraform_config - if the component type is terraform, this section describes the high-level metadata about the Terraform component from its source code, including variables, outputs and child Terraform modules (using a Terraform parser from HashiCorp). The file names and line numbers where the variables, outputs and child modules are defined are also included. Invalid Terraform configurations are also detected, and in case of any issues, the warnings and errors are shown in the terraform_config.diagnostics section
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you would use component_path to get the path to the Terraform component for the Atmos component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kalman Speier, everything that you see in the outputs of the atmos describe component command (https://atmos.tools/cli/commands/describe/component/#output) can be used in the Go templates

atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
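
Putting the two answers together, a sketch of what that could look like, assuming component_info.component_path from the describe component output is exposed to the Go templates (the component name here is hypothetical):

components:
  terraform:
    kubeconfig:
      vars:
        # write the kubeconfig next to the Terraform component instead of into the stacks folder
        filename: '{{ .component_info.component_path }}/kubeconfig'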

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In general we don’t recommend putting non stack configs in the stacks folder

1
Kalman Speier avatar
Kalman Speier

thanks! i will check that out.

Kalman Speier avatar
Kalman Speier

it felt natural to have a kubeconfig file per stack, but maybe you’re right, i will reconsider where to write those files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it does not need to go to the component folders, it can be anywhere

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  - name: set-eks-cluster
Kalman Speier avatar
Kalman Speier

yeah, i was thinking on that to keep the stacks dirs clean, maybe i will create a kube folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
    # If a custom command defines 'component_config' section with 'component' and 'stack',
    # Atmos generates the config for the component in the stack
    # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
    # exposing all the component sections (which are also shown by 'atmos describe component' command)
    component_config:
      component: "{{ .Arguments.component }}"
      stack: "{{ .Flags.stack }}"
    env:
      - key: KUBECONFIG
        value: /dev/shm/kubecfg.{{ .Flags.stack }}-{{ .Flags.role }}
    steps:
      - >
        aws
        --profile {{ .ComponentConfig.vars.namespace }}-{{ .ComponentConfig.vars.tenant }}-gbl-{{ .ComponentConfig.vars.stage }}-{{ .Flags.role }}
        --region {{ .ComponentConfig.vars.region }}
        eks update-kubeconfig
        --name={{ .ComponentConfig.vars.namespace }}-{{ .Flags.stack }}-eks-cluster
        --kubeconfig="${KUBECONFIG}"
        > /dev/null
      - chmod 600 ${KUBECONFIG}
      - echo ${KUBECONFIG}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is what we usually do ^

Kalman Speier avatar
Kalman Speier

i see thx

1

2024-10-24

2024-10-28

RB avatar

Anything on the roadmap for org wide stacksets natively supported as an upstream component? These are very useful because the stackset auto deploys infrastructure when a new account is created.

For example, aws-team-roles component could be converted to a stackset and get deployed without manually needing to provision the component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Definitely something we would like to implement.

1
github3 avatar
github3
05:37:51 AM

Set custom User Agent for Terraform providers @Cerebrovinny (#729)

What

• The Atmos command now sets the TF_APPEND_USER_AGENT environment variable, which Terraform uses when interacting with the AWS provider. • Users can also append custom values to TF_APPEND_USER_AGENT, allowing further flexibility in monitoring or tracking specific operations. • If no user-defined value is provided, the system will automatically set atmos {currentVersion} as the default.

Why

• Add a customer-specific user agent for Terraform operations. This enhancement ensures that Atmos-driven actions are identifiable and distinct from other operations. • Users will be able to differentiate and monitor Atmos-initiated actions within AWS, facilitating better tracking, logging, and troubleshooting.

WIP: Document helmfile, template imports @osterman (#741)

what

• Document options for helmfile • Update helmfile demo to use options • Document versioning of components

Fix example demo-context @Cerebrovinny (#743)

what

• Running atmos in the demo-context folder causes the code to process all stack configurations, including the catalog stacks

fix atmos version @haitham911 (#735)

what

• atmos version should work regardless if the stack configs are provided

1
1

2024-10-29

Ryan avatar

Just a question as a more junior atmos/terraformer. I am prepping to deploy some terraform stacks to GovCloud East. My backend sits in West right now. I took this morning to create an east “global” yaml that references my West backend and ARNs, but configures East region. I tested a small dev S3 module and it deployed in the org, in East, in the account it’s supposed to be in. I’m wondering from a design perspective do these backends get split off. It would seem easier if I can just leverage what I already have, but I’m not sure what any negatives to doing this are. Hope this question makes sense, just trying to strategize with regards to my backend and atmos.

Ryan avatar

For reference for non-govcloud users - this would just be a separate region, same account.

Ryan avatar

Note my codes all in one repo right now as well.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Ryan regarding splitting backends, there are a few possible ways to do it (depending on security requirements, access control, audit, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• One backend per Org (e.g. provisioned in the root account). All other accounts (e.g. dev, staging, prod) use the same backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Separate backend per account (dev, staging, prod). You need to manage many backends and provision them. This is used for security reasons to completely separate the accounts (they don’t share anything)

Ryan avatar

Yea I might be overthinking, just trying to strategize before I build in east

Ryan avatar

It’s just me primarily and likely one person I train to run it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• A combination of the above - e.g. one backend for dev and staging, a separate backend for prod

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use all the above depending on many factors like security, access control, etc. - there is no one/best way, it all depends on your requirements

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(all of them are easily configured with Atmos, let us know if you need any help on that)
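
For instance, the “separate backend per account” option could be sketched with account-level overrides like this (bucket, table, and path names are placeholders):

# stacks/orgs/acme/dev/_defaults.yaml
terraform:
  backend:
    s3:
      bucket: "acme-dev-terraform-state"
      dynamodb_table: "acme-dev-terraform-state-lock"

# stacks/orgs/acme/prod/_defaults.yaml
terraform:
  backend:
    s3:
      bucket: "acme-prod-terraform-state"
      dynamodb_table: "acme-prod-terraform-state-lock"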

Ryan avatar

thank you andriy, i honestly wasnt sure if I could keep everything all within one west db and s3, first time poking this thing over to east

Ryan avatar

idk if we need to get that crazy with separation of duties or new roles for east

Ryan avatar

whoops wrong thread

Michal Tomaszek avatar
Michal Tomaszek

Hi, I’m trying to deploy Spacelift components following these instructions: https://docs.cloudposse.com/layers/spacelift/?space=tenant-specific&admin-stack=managed
Everything seems to be ok until I try to deploy the plat-gbl-spacelift stack. I get:

│ Error:
│ Could not find the component 'spacelift/spaces' in the stack 'plat-gbl-spacelift'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?
│
│   with module.spaces.data.utils_component_config.config[0],
│   on .terraform/modules/spaces/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {

Any ideas?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Michal Tomaszek did you add the component ‘spacelift/spaces’ in the stack ‘plat-gbl-spacelift’?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Spaces must be provisioned first before provisioning Spacelift stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the component spacelift/spaces needs to be added to the Atmos stack and provisioned

(the error above means that the component is not added to the Atmos stack and Atmos can’t find it. It’s a dependency of Spacelift stack component)

Michal Tomaszek avatar
Michal Tomaszek

spacelift/spaces is already deployed before in the root-gbl-spacelift stack. I can see both root and plat spaces in the Spacelift dashboard. so, deployment of these:

atmos terraform deploy spacelift/spaces -s root-gbl-spacelift
atmos terraform deploy spacelift/admin-stack -s root-gbl-spacelift

went fine. provisioning of:

atmos terraform deploy spacelift/admin-stack -s plat-gbl-spacelift

results in the issue I described. if you look at the Tenant-Specific tab on the website (section https://docs.cloudposse.com/layers/spacelift/#spaces), it actually doesn’t import catalog/spacelift/spaces. if I do this, it eventually duplicates the plat space as it’s already created in the root-gbl-spacelift stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i see what you are saying. I think the docs don’t describe the steps in all the details. Here’s a top-level overview:

• Spacelift has the root space that is always present. atmos terraform deploy spacelift/spaces -s root-gbl-spacelift does not provision it, it configures it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• In each account, you need to provision child spaces (under root). You need to configure and execute atmos terraform deploy spacelift/spaces -s plat-gbl-spacelift

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Ater having the child spaces in plat tenant, you can provision Spacelift stacks in plat that use the plat space(s)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in Spacelift, there is a hierarchy of spaces (starting with the root), and a hierarchy of admin stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are related, but separate concepts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the admin stack in root is the root of the stack hierarchy, all other admin stacks are managed by the root admin stack (note that the child admin stacks can be provisioned in the same or diff environments (e.g. diff tenant))

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each admin and child stack belongs to a space (root or child)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"if I do this, it eventually duplicates plat space as it's already created in root-gbl-spacelift stack"

this is prob some omission in the docs or in the config. if you don’t find the issue, you can DM me your config to take a look

Michal Tomaszek avatar
Michal Tomaszek

ok, I’ll give this a try and reach out in case of issues. thanks for help!

Patrick McDonald avatar
Patrick McDonald

Hello, our setup is each AWS account has its own state bucket and DynamoDB table. I’m using a role in our identity account that authenticates via GitHub OIDC and can assume roles in target accounts. My challenge is with the GitHub Action for “affected stacks”, how can I configure Atmos to assume the correct role in each target account when it runs? Any guidance would be much appreciated!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are usually two roles:

• For the terraform aws provider (with permissions to provision the AWS resources)

• For the backend with permissions to access the S3 bucket and DynamoDB table

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when you run an atmos command like describe affected or terraform plan, the identity role needs to have permissions to assume the above two roles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, in Atmos manifests, you can configure the two roles in YAML files:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Roles for aws providers - two ways of doing it:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. If you are using the terraform components <https://github.com/cloudposse/terraform-aws-components/tree/main/modules>, then each one has a providers.tf file (e.g. https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/providers.tf). It also uses the module "iam_roles" to read terraform roles per account/tenant/environment (those roles can and should be different)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Similar things can be configured in Atmos manifests by using the providers section https://atmos.tools/core-concepts/components/terraform/providers/#provider-configuration-and-overrides-in-atmos-manifests
Terraform Providers | atmos

Configure and override Terraform Providers.
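
A rough sketch of that second option, overriding the aws provider in a stack manifest via the providers section (the component name and role ARN are placeholders; see the linked docs for the exact schema):

components:
  terraform:
    vpc:
      providers:
        aws:
          assume_role:
            role_arn: "arn:aws:iam::111111111111:role/acme-dev-terraform"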

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depending on what you are using and how you want to configure it, you can use one way or the other (or both)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the idea is to have a way to get (different) IAM roles to access the backends and AWS resources

Patrick McDonald avatar
Patrick McDonald

ok.. I’ll give these options a try. I added the role_arn to backend.tf - it seems to have accessed the first account fine, then errored out on the second account.

Wrote the backend config to file:
components/terraform/eks/otel-collectors/backend.tf.json
Executing 'terraform init eks/otel-collectors -s core-site-use1-prod'
template: describe-stacks-all-sections:17:32: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1
Error: Backend configuration changed
A change in the backend configuration has been detected, which may require
migrating existing state.
If you wish to attempt automatic migration of the state, use "terraform init
-migrate-state".
If you wish to store the current configuration with no changes to the state,
use "terraform init -reconfigure".
Error: Process completed with exit code 1.
Patrick McDonald avatar
Patrick McDonald

can describe affected run with state buckets in each target account or is it expecting a single state bucket for all accounts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual might have solved this in his forthcoming PR

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To have a gitops role configured per stack

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#92 Replace describe-config for atmos-get-setting, add optional cache and azure storage options

what

This is based on #90 that @goruha was working on.

• Replace the describe config for cloudposse/github-action-atmos-get-setting • Replace If statements to check for azure repository type • Add azure blob storage and cosmos • Add cache parameter to enable or disable caching inside the action • Add pr-comment parameter to allow the user to get the current summary and a PR comment if they want to. • Updated docs and Tests.

why

To support azure and better config settings

references

#90

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

His PR references Azure, but the solution is not azure specific

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With this change all the gitops settings can be defined in stack configs that extend atmos.yml

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) the GitHub OIDC portion is handled by the action and not atmos or terraform

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov needs to give the final approval to get this merged

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think there are some other PRs to other actions too. @jose.amengual is doing the same thing with a state bucket and dynamo per account.

Patrick McDonald avatar
Patrick McDonald

Looking forward to this! Thanks @Erik Osterman (Cloud Posse)

2024-10-30

github3 avatar
github3
12:59:13 PM

Add support for remote validation schemas @haitham911 (#731)

What

• Add support for remote schemas in atmos for manifest validation • Updated schemas configuration to allow referencing remote schema files, e.g.:
schemas:
  atmos:
    manifest: "https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"

Why

• This reduces redundancy; the schema file can be referenced remotely.

Do not process stack configs when executing command atmos vendor pull and the --stack flag is not specified @haitham911 (#740)

what

• Do not process stack configs when executing command atmos vendor pull and the --stack flag is not specified

why

• Atmos vendor should not require stack configs if the stack flag is not provided

party_parrot1
github3 avatar
github3
01:37:08 PM

Add atmos docs command @RoseSecurity (#751)

what

• Add atmos docs <component> CLI command • Render component documentation utilizing the atmos docs <component> command

atmos_docs

why

• Improve user experience when navigating component documentation

testing

• Ensure existing functionality to the docs command is not affected • Tested without valid Atmos Base Path • Tested with nonexistent component name • Tested with valid component name • Tested with invalid component name • Tested with nested component names

references

Vitalii avatar
Vitalii

hello guys, I am playing with Atmos and GitHub for my project currently, and I am having a problem with posting comments to GitHub pull requests with `atmos terraform plan <stack> -s #####`. I can’t parse the output into a readable form the way I can with `terraform -no-color`. my question is: can I run `atmos terraform plan -s ##### -no-color`, or pass any other arguments equivalent to `-no-color`? In the documentation, I didn’t find anything about it. If you know any other way I can post comments to pull requests in a readable way via `atmos`, or some way to parse the output, please help. appreciate any help

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos should send the command-line flags to Terraform, so this should work

atmos terraform plan <component> -s <stack> -no-color
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also use double-dash like so

double-dash -- can be used to signify the end of the options for Atmos and the start of the additional native arguments and flags for the terraform commands. For example:

atmos terraform plan <component> -s <stack> -- -refresh=false
atmos terraform apply <component> -s <stack> -- -lock=false
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

using double-dash, you can specify any arguments and flags, and they will be sent directly to terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform | atmos

Use these subcommands to interact with terraform.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Vitalii also, did you see the GH actions that are already supported and work with Atmos

https://atmos.tools/integrations/github-actions/

GitHub Actions | atmos

Use GitHub Actions with Atmos

Vitalii avatar
Vitalii

@Andriy Knysh (Cloud Posse) Thank you for your help. I don’t know why this command didn’t work yesterday on my laptop, but after your answer it started working: atmos terraform plan sns -s bm-dev-ue1 -no-color

and yes I am using GH action with Atmos

2
Vitalii avatar
Vitalii

and one more question. I’d like to run atmos in this way: atmos terraform deploy sns -s bm-dev-ue1 --from-plan. atmos says: Error: stat bm-dev-ue1-sns.planfile: no such file or directory. I assume that atmos is trying to find this ^^^ file in the component/sns directory, so my question is: how do I configure or run atmos so that atmos terraform plan sns -s bm-dev-ue1 saves the plan automatically to the components/sns folder without -out=? @Andriy Knysh (Cloud Posse)

sorry for tagging

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos supports these two flags

--from-plan	If the flag is specified, use the planfile previously generated by Atmos instead of
generating a new planfile. The planfile name is in the format supported by Atmos
and is saved to the component's folder

--planfile	The path to a planfile. The --planfile flag should be used instead of the planfile
argument in the native terraform apply <planfile> command
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to first run atmos terraform plan sns -s bm-dev-ue1, then atmos terraform apply sns -s bm-dev-ue1 --from-plan

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the planfile generated by terraform plan will be used instead of creating a new planfile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos automatically uses the component directory to save the planfiles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to run these two cmmands:

atmos terraform plan sns -s bm-dev-ue1
atmos terraform deploy sns -s bm-dev-ue1 --from-plan

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Vitalii ^

Vitalii avatar
Vitalii

Thank you very much, it works

1
Vitalii avatar
Vitalii

have a good day

Vitalii avatar
Vitalii

Hello guys, one more question: how to post atmos outputs (like terraform plan) to a PR branch in a human-readable way?

2024-10-31

Michael Dizon avatar
Michael Dizon

hey guys! just a quick PR for the terraform-aws-config module https://github.com/cloudposse/terraform-aws-config/pull/124

#124 fix(main.tf): handle enabled boolean in manage_rules

what

use enabled boolean in managed_rules variable

why

aws_config_config_rule resources were still being created despite enabled being set to false

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best place for this is #pr-reviews

#124 fix(main.tf): handle enabled boolean in manage_rules

what

use enabled boolean in managed_rules variable

why

aws_config_config_rule resources were still being created despite enabled being set to false

1
github3 avatar
github3
09:11:12 PM

Update Atmos manifests validation JSON schema. Improve help for Atmos commands. Deep-merge the settings.integrations.github section from stack manifests with the integrations.github section from atmos.yaml @aknysh (#755)

what

• Update Atmos manifests validation JSON schema • Improve help and error handling for Atmos commands • Deep-merge the settings.integrations.github section from Atmos stack manifests with the integrations.github section from atmos.yaml

why

• In Atmos manifests validation JSON schema, don’t “hardcode” the s3 backend section fields, allow it to be a free-form map so the user can define any configuration for it. The Terraform s3 backend can change, can be different for Terraform and OpenTofu. Also, the other backends (e.g. GCP, Azure, remote) are already free-form maps in the validation schema
• When Atmos commands are executed w/o specifying a component and a stack (e.g. atmos terraform plan, atmos terraform apply, atmos terraform clean), print help for the command w/o throwing errors that a component and stack are missing
• Deep-merging the settings.integrations.github section from Atmos stack manifests with the integrations.github section from atmos.yaml allows configuring the global settings for integrations.github in atmos.yaml, and then overriding them in the Atmos stack manifests in the settings.integrations.github section. Every component in every stack will get settings.integrations.github from atmos.yaml. You can override any field in stack manifests. Atmos deep-merges the integrations.github values from all scopes in the following order (from the lowest to highest priority):
• integrations.github section from atmos.yaml
• stack-level settings.integrations.github configured in Atmos stack manifests per Org, per tenant, per region, per account
• base component(s) level settings.integrations.github section
• component level settings.integrations.github section
For example:

atmos.yaml

integrations:
  # GitHub integration
  github:
    gitops:
      opentofu-version: 1.8.4
      terraform-version: 1.9.8
      infracost-enabled: false

stacks/catalog/vpc.yaml

components:
  terraform:
    vpc:
      metadata:
        component: vpc
      settings:
        integrations:
          github:
            gitops:
              infracost-enabled: true
              test_enabled: true

Having the above config, the command atmos describe component vpc -s tenant1-ue2-dev returns the following deep-merged configuration for the component’s settings.integrations.github section:

settings:
  integrations:
    github:
      gitops:
        infracost-enabled: true
        opentofu-version: 1.8.4
        terraform-version: 1.9.8
        test_enabled: true

Improve custom command error message for missing arguments @pkbhowmick (#752)

what

• Improved custom command error message for missing arguments, including the name of the argument, for better user understanding.

why

• If a custom command expects an argument, it should say so with the argument’s name.

Fix helmfile demo @osterman (#753)

what

• Enable templates in atmos.yaml so we can use env function to get current work directory • Do not default KUBECONFIG to /dev/shm as /dev/shm is a directory, and the kube config should be a YAML file • Fix stack includes • Set KUBECONFIG from components.helmfile.kubeconfig_path if set (previously only set if use_eks was true)

why

• Demo was not working
