#atmos (2024-09)

2024-09-02

github3 avatar
github3
05:33:07 PM

Update the Go YAML lib from [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) to [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3). Support YAML v1.2 (latest version) @aknysh (#690)

what

• Update the Go YAML lib from [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) to [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)
• Support YAML v1.2 (latest version)
• Support YAML explicit typing (explicit typing is denoted with a tag using the exclamation point (“!”) symbol)
• Improve the code, e.g. add YAML wrappers in one yaml_utils file (which imports [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)) to control all YAML marshaling and unmarshaling from one place
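
As a quick illustration of the explicit typing mentioned above (standard YAML “!!” tags; the keys and values here are made up):

# Explicit YAML tags force a type regardless of how the scalar looks
port: !!str 8080      # the string "8080", not an integer
count: !!int "42"     # the integer 42, not a string
flag: !!bool "true"   # the boolean true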

why

[gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)

The main differences between [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) and [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) include enhancements in functionality, bug fixes, and improvements in performance. Here’s a summary of key distinctions:

1. Better Conformance to YAML 1.2 Specification:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) offers improved support for the YAML 1.2 specification. This includes better handling of complex YAML features such as the core schema, block styles, and anchors.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) is more aligned with YAML 1.1, meaning it might not fully support some of the YAML 1.2 features.

2. Node API Changes:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) introduced a new Node API, which provides more control and flexibility over parsing and encoding YAML documents. This API is more comprehensive and allows for detailed inspection and manipulation of YAML content (see the Go sketch after this list).
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has a simpler node structure and API, which might be easier to use for simple use cases but less powerful for advanced needs.

3. Error Handling:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) offers improved error messages and better context for where an error occurs during parsing. This makes it easier to debug and correct YAML syntax errors.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has less detailed error reporting, which can make debugging more challenging.

4. Support for Line and Column Numbers:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) includes support for tracking the line and column numbers of nodes (also illustrated in the Go sketch after this list), which can be useful when dealing with large or complex YAML files.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) does not provide this level of detail in terms of tracking where nodes are located within the YAML document.

5. Performance:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) has various performance improvements, particularly in the encoding and decoding process. However, these improvements might not be significant in all scenarios.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) might be slightly faster in certain cases, particularly when dealing with very simple YAML documents, due to its simpler feature set.

6. Deprecation of Legacy Functions:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) deprecates some older functions that were available in v2, encouraging developers to use more modern and efficient alternatives.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) retains these older functions, which may be preferred for backward compatibility in some projects.

7. Anchors and Aliases:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) has better handling of YAML anchors and aliases, making it more robust in scenarios where these features are heavily used.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) supports anchors and aliases but with less robustness and flexibility.

8. API Changes and Compatibility:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) introduces some API changes that are not backward-compatible with v2. This means that upgrading from v2 to v3 might require some code changes.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has been widely used and is stable, so it may be preferable for projects where stability and long-term support are critical.
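
To make the Node API and the line/column tracking concrete, here is a minimal Go sketch against [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) (the document content is made up):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	src := []byte("name: atmos\nversion: 1.88.0\n")

	// yaml.v3 can decode into a yaml.Node, which preserves the document
	// structure along with the line/column of every node
	var doc yaml.Node
	if err := yaml.Unmarshal(src, &doc); err != nil {
		panic(err)
	}

	// doc is the document node; Content[0] is the top-level mapping,
	// whose Content holds alternating key/value nodes
	mapping := doc.Content[0]
	for i := 0; i+1 < len(mapping.Content); i += 2 {
		k, v := mapping.Content[i], mapping.Content[i+1]
		fmt.Printf("%s = %s (line %d, col %d)\n", k.Value, v.Value, v.Line, v.Column)
	}
}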

YAML v1.2

YAML v1.1 and YAML v1.2 differ in several key aspects, particularly in terms of specification, syntax, and data type handling. Here’s a breakdown of the most significant differences:

1. Specification and Goals:

• YAML 1.1 was designed with some flexibility in its interpretation of certain constructs, aiming to be a human-readable data serialization format that could also be easily understood by machines.
• YAML 1.2 was aligned more closely with the JSON specification, aiming for better interoperability with JSON and standardization. YAML 1.2 is effectively a superset of JSON.

2. Boolean Values:

• YAML 1.1 has a wide range of boolean literals, including y, n, yes, no, on, off, true, and false. This flexibility could sometimes lead to unexpected interpretations.
• YAML 1.2 standardizes boolean values to true and false, aligning with JSON. This reduces ambiguity and ensures consistency (see the example after item 3).

3. Integers with Leading Zeros:

• YAML 1.1 interprets integers with leading zeros as octal (base-8) numbers. For example, 012 would be interpreted as 10 in decimal.
• YAML 1.2 no longer interprets numbers with leading zeros as octal. Instead, they are treated as standard decimal numbers, which aligns with JSON. This change helps avoid confusion.
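
A small example of the boolean and leading-zero differences (how a YAML 1.1 parser vs. a YAML 1.2 core-schema parser resolves the same scalars; exact behavior varies by implementation):

answer: yes    # YAML 1.1: boolean true      YAML 1.2: the string "yes"
switch: off    # YAML 1.1: boolean false     YAML 1.2: the string "off"
mode: 0777     # YAML 1.1: octal, i.e. 511   YAML 1.2: decimal 777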

4. Null Values:

• YAML 1.1 allows a variety of null values, including null, ~, and empty values (e.g., an empty string).
• YAML 1.2 standardizes the null value to null (or an empty value), aligning with JSON’s null representation.

5. Tag Handling:

• YAML 1.1 uses an unquoted !! syntax for tags (e.g., !!str for a string). The tag system is more complex and includes non-standard tags that can be confusing.
• YAML 1.2 simplifies tag handling and uses a more JSON-compatible syntax, with less emphasis on non-standard tags. Tags are optional and less intrusive in most use cases.

6. Floating-Point Numbers:

• YAML 1.1 supports special floating-point values like .inf, -.inf, and .nan with a dot notation.
• YAML 1.2’s core schema keeps the same .inf, -.inf, and .nan notations. Note that JSON itself has no representation for infinities or NaN, so these values fall outside the JSON-compatible subset.

7. Direct JSON Compatibility:

YAML 1.2 is designed to be a strict superset of JSON, meaning any valid JSON document is also a valid YAML 1.2 document. This was not the case in YAML 1.1, where certain JSON documents could be interpreted differently.

8. Indentation and Line Breaks:

• YAML 1.1 was flexible, which sometimes led to ambiguities in how line breaks and whitespace were interpreted.
• YAML 1.2 introduced clearer, more consistent rules for handling line breaks and indentation, reducing the potential for misinterpretation.

9. Miscellaneous Syntax Changes:

YAML 1.2 introduced some syntax changes for better clarity and alignment with JSON. For instance, YAML 1.2 dropped YAML 1.1’s sexagesimal (base-60) integer notation (e.g., 3:25:45), which was a frequent source of surprise.

10. Core Schema vs. JSON Schema:

• YAML 1.2 introduced the concept of schemas, particularly the Core schema, which aims to be YAML’s native schema, and the JSON schema, which strictly follows JSON’s data types and structures.
• YAML 1.1 did not have this formal schema distinction, leading to more flexible but sometimes less predictable data handling.

Summary:

• YAML 1.2 is more standardized, consistent, and aligned with JSON, making it more predictable and easier to interoperate with JSON-based systems.
• YAML 1.1 offers more flexibility and a wider range of literal values, but this flexibility can sometimes lead to ambiguities and unexpected behavior.

references

• https://yaml.org/spec/1.2.2
• https://yaml.org/spec/1.1
• <https://pkg.go.dev/gopkg.in/yaml…

2

2024-09-04

Miguel Zablah avatar
Miguel Zablah

Hey, I’m having an issue when using the GitHub Actions to apply Atmos Terraform, where I get this error:

Error: invalid character '\x1b' looking for beginning of value
Error: Process completed with exit code 1.

the tf actually applies correctly, but this error always comes up. any idea why this is happening?

this is the action I use: https://github.com/cloudposse/github-action-atmos-terraform-apply/tree/main

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah could you pls send me the full GitHub Actions log?

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov I will send it via DM

Igor Rodionov avatar
Igor Rodionov

could you try to run this locally:

 atmos terraform output iam --stack data-gbl-dev --skip-init -- -json
Igor Rodionov avatar
Igor Rodionov

Is there anything suspicious in the output

Igor Rodionov avatar
Igor Rodionov

?

Miguel Zablah avatar
Miguel Zablah

1 sec

Miguel Zablah avatar
Miguel Zablah

nope, it looks good

Igor Rodionov avatar
Igor Rodionov

is it correct json ?

Miguel Zablah avatar
Miguel Zablah

yes, it looks like correct json. I will run jq to see if it fails there, but it looks good

Miguel Zablah avatar
Miguel Zablah

I see the error I think

Miguel Zablah avatar
Miguel Zablah

jq type fails

Miguel Zablah avatar
Miguel Zablah

it’s bc that cmd will still echo the workspace switch message

Miguel Zablah avatar
Miguel Zablah

and then it will give the json

Miguel Zablah avatar
Miguel Zablah

for example:

Switched to workspace "data-gbl-dev-iam".
{
...
}
Igor Rodionov avatar
Igor Rodionov

Hm.. what version of atmos are you using?

Miguel Zablah avatar
Miguel Zablah

1.88.0

Miguel Zablah avatar
Miguel Zablah

let me update to latest

Igor Rodionov avatar
Igor Rodionov

Yea. Try pls

Igor Rodionov avatar
Igor Rodionov

I have a meeting now

Igor Rodionov avatar
Igor Rodionov

Will be back in 1 hour

Miguel Zablah avatar
Miguel Zablah

no worries I will put the update here in a bit

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah in your atmos.yaml, update the logs section to this

logs:
  # Can also be set using 'ATMOS_LOGS_FILE' ENV var, or '--logs-file' command-line argument
  # File or standard file descriptor to write logs to
  # Logs can be written to any file or any standard file descriptor, including `/dev/stdout`, `/dev/stderr` and `/dev/null`
  file: "/dev/stderr"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  # Can also be set using 'ATMOS_LOGS_LEVEL' ENV var, or '--logs-level' command-line argument
  level: Info
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

/dev/stderr tells Atmos to log messages to /dev/stderr instead of /dev/stdout (which pollutes the output from terraform)

Miguel Zablah avatar
Miguel Zablah

same issue with the latest version of atmos

Miguel Zablah avatar
Miguel Zablah

and I have /dev/stderr

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not related to Atmos version, please check the logs section

Miguel Zablah avatar
Miguel Zablah

I see okay let me change this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you already have stderr, then it’s not related. This output is from TF itself

Switched to workspace "data-gbl-dev-iam"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(not controlled by Atmos)

Miguel Zablah avatar
Miguel Zablah

I see, so what about doing a sed '1d' before saving it to the file? maybe that will fix it

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah I suppose 1d is a color special char that is part of the “switch” output

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah can you try full command

Igor Rodionov avatar
Igor Rodionov

atmos terraform output iam --stack data-gbl-dev --skip-init -- -json 1> output_values.json

Miguel Zablah avatar
Miguel Zablah

1d just means delete the first line haha

Igor Rodionov avatar
Igor Rodionov

what do you have in json file?

Miguel Zablah avatar
Miguel Zablah

yeah, that will save all the output including the first line: Switched to workspace "data-gbl-dev-iam".

Miguel Zablah avatar
Miguel Zablah

I think we can maybe validate it instead

Miguel Zablah avatar
Miguel Zablah

so you have this: https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/main/action.yml#L332

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> output_values.json    

we can add some steps here like this:

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> tmb_output_values.json

cat tmb_output_values.json | (head -n 1 | grep -q '^[{[]' && cat tmb_output_values.json || tail -n +2 tmb_output_values.json) 1> output_values.json
Miguel Zablah avatar
Miguel Zablah

this will ensure that if the value does not start with { or [ it will delete the first line

Igor Rodionov avatar
Igor Rodionov

the case is that we do not have the problem in our tests

Miguel Zablah avatar
Miguel Zablah

are your tests running the latest atmos version?

Igor Rodionov avatar
Igor Rodionov

so I want to find the reason instead of quick fixes

Igor Rodionov avatar
Igor Rodionov

1.81.0

Miguel Zablah avatar
Miguel Zablah

let me downgrade to that version to see if this is new

Miguel Zablah avatar
Miguel Zablah

and yeah I think you’re right, let’s find that out

Igor Rodionov avatar
Igor Rodionov

these are our tests

Miguel Zablah avatar
Miguel Zablah

same issue hmm

Miguel Zablah avatar
Miguel Zablah

let me check your test

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov this test is using OpenTofu, right?

Igor Rodionov avatar
Igor Rodionov

we have both

Miguel Zablah avatar
Miguel Zablah

oh let me check the tf I think that one is opentofu

Igor Rodionov avatar
Igor Rodionov

terraform and opentofu

Miguel Zablah avatar
Miguel Zablah

ah I see them sorry

Miguel Zablah avatar
Miguel Zablah

haha

Miguel Zablah avatar
Miguel Zablah

okay, I’m using a different tf version; you’re using 1.5.2

Miguel Zablah avatar
Miguel Zablah

the strange thing is that when you do the plan on this test you get the Switched to workspace "plat-ue2-sandbox". message, but on apply I don’t see that

Igor Rodionov avatar
Igor Rodionov

that’s fine

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov @Andriy Knysh (Cloud Posse) I have copied the action to my repo and added these 2 lines, and this makes it work:

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> tmb_output_values.json

cat tmb_output_values.json | (head -n 1 | grep -q '^[{[]' && cat tmb_output_values.json || tail -n +2 tmb_output_values.json) 1> output_values.json

can we maybe add this fix to the action? idk why for me it’s giving the msg Switched to workspace "plat-ue2-sandbox". have you guys configured /dev/stderr to write to a file? bc by default it will also write to the terminal, or maybe if you’re running it in Docker it treats this differently?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov bumping this up

Michal Tomaszek avatar
Michal Tomaszek

hi, have you ever considered adding an Atmos plugin for asdf/mise? I think it would be useful, as other tools like Terragrunt, Terramate, etc. are available as asdf plugins.

RB avatar
#170 Switching between atmos versions (asdf plugin)


Describe the Feature

Sometimes I need to switch between atmos versions in order to debug something or perhaps a client uses version X and another client uses version Y. I usually have to run another go install of a specific atmos release or use apt install atmos="<version>-*" from geodesic.

It would be nice to use a tool like asdf with an atmos plugin

https://asdf-vm.com/
https://github.com/asdf-vm/asdf
https://github.com/asdf-vm/asdf-plugins

Another advantage of supporting asdf is that we can pin versions of atmos using the asdf .tool-versions file for users that want to use atmos within geodesic and from outside.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As you may know, we predominantly use geodesic. Anyone open to working with us to implement it? I can create the asdf-atmos repo

Michal Tomaszek avatar
Michal Tomaszek

yep, I even use it myself. I use geodesic as the base image of my custom Docker image. AFAIK, there are some packages already installed in it, like OpenTofu. the others can be installed using the cloudposse/packages repository - that’s probably the easiest way. however, that’s somewhat limited to the set of packages that this repository offers. I started to explore the idea of installing only mise (a tool like asdf) and then copying a .mise.toml config file from my repository into it. that way mise can install all the dependencies via plugins. Renovate supports mise too, which makes upgrades collected within the .mise.toml file smooth and very transparent.

Michal Tomaszek avatar
Michal Tomaszek

unless I’m not aware of something and trying to reinvent the wheel here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Understood, it can definitely be layered on. @Michal Tomaszek do you have experience writing plugins for asdf?

RB avatar

I do! I can help with this if Michael doesn’t have the time

Michal Tomaszek avatar
Michal Tomaszek

sadly, I have no experience with plugins for asdf. however, I’m happy to contribute under someone’s guidance.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB i created this from their template, and modified it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/asdf-atmos

Asdf plugin for atmos

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I can’t seem to get it working.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Michal Tomaszek

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nevermind, it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just had to run asdf global atmos latest after installing it

RB avatar

Cool! I’ll check this out tomorrow. Thanks Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#1042 feat: Add Atmos by Cloud Posse

Summary

Description:

• Tool repo URL: https://github.com/cloudposse/atmos
• Plugin repo URL: https://github.com/cloudposse/asdf-atmos

Checklist

• Format with scripts/format.bash
• Test locally with scripts/test_plugin.bash --file plugins/<your_new_plugin_name>
• Your plugin CI tests are green (see check)

Tip: use the plugintest action from asdf-actions in your plugin CI

1
Michal Tomaszek avatar
Michal Tomaszek

for anyone interested in installing Atmos via Mise: I raised a PR that was eventually released in Mise 2024.9.6 (https://github.com/jdx/mise). it’s based on the asdf plugin created by @Erik Osterman (Cloud Posse)

jdx/mise

dev tools, env vars, task runner

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nice! We need to update the docs to add that and asdf

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would you be willing to open a PR against this page? https://atmos.tools/install @Michal Tomaszek

Install Atmos | atmos

There are many ways to install Atmos. Choose the method that works best for you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For reference, PR adding atmos to mise https://github.com/jdx/mise/pull/2577/files

Michal Tomaszek avatar
Michal Tomaszek

sure, I can do this

Michal Tomaszek avatar
Michal Tomaszek

@Erik Osterman (Cloud Posse), I guess that website/docs/quick-start/install-atmos.mdx in atmos repo should be updated, right?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes I think it would be a new tab

1
Michal Tomaszek avatar
Michal Tomaszek

@Erik Osterman (Cloud Posse), related PR: https://github.com/cloudposse/atmos/pull/699

#699 docs: add installation guides for asdf and Mise

what

Docs for installing Atmos via asdf or Mise

why

Recently, Atmos gained support for installation via asdf and Mise. Installation guides are not yet included on the website. This PR aims to fill that gap.

references

Plugin repo

Michael Rosenfeld avatar
Michael Rosenfeld

Does CloudPosse provide any Atmos pre-commit hooks? I was curious if there were any community-maintained hooks for atmos validate and such

Michael Rosenfeld avatar
Michael Rosenfeld

Or even a GitHub Action that runs atmos validate stacks before other actions run?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s the setup atmos action, and after that you can just run any atmos command like validate stacks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The pre-commit hooks would be smart, but I don’t believe we have any documentation on that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we should have a page on setting up pre-commit hooks here https://atmos.tools/core-concepts/projects/

Setup Projects for Atmos | atmos

Atmos is a framework, so we suggest some conventions for organizing your infrastructure using folders to separate configuration from components. This separation is key to making your components highly reusable.

1
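
For anyone looking for a starting point, a minimal local hook could look like this sketch (a hypothetical .pre-commit-config.yaml, not an official Cloud Posse hook; the hook id and file filter are illustrative):

# .pre-commit-config.yaml (sketch)
repos:
  - repo: local
    hooks:
      - id: atmos-validate-stacks       # hypothetical hook id
        name: atmos validate stacks
        entry: atmos validate stacks    # assumes atmos is on the PATH
        language: system
        pass_filenames: false
        files: ^(stacks|components)/    # only run when stacks or components change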
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we need a task for that?

Michael Rosenfeld avatar
Michael Rosenfeld

Feel free to vendor this action we created for implementing this functionality if you think it’ll benefit the community:

name: "Atmos Validate Stacks"

on:
  pull_request:
    paths:
      - "stacks/**"
      - "components/**"
    branches:
      - main

env:
  ATMOS_CLI_CONFIG_PATH: ./rootfs/usr/local/etc/atmos

jobs:
  validate_stacks:
    runs-on: [Ubuntu]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Extract ATMOS_VERSION from Dockerfile
        run: |
          version=$(grep 'ARG ATMOS_VERSION=' Dockerfile | cut -d'=' -f2)
          echo "atmos_version=$version" >> "$GITHUB_ENV"

      - name: Setup Atmos
        uses: cloudposse/github-action-setup-atmos@v2
        with:
          atmos-version: ${{ env.atmos_version }}
          install-wrapper: false

      - name: Validate stacks
        run: atmos validate stacks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Michael Rosenfeld

github3 avatar
github3
07:09:08 PM

Enhancements

BUG: fix error with affected stack uploads @mcalhoun (#691)

what

• Fix an error with affected stack uploads to atmos pro
• Fix an error with URL extraction for locally cloned repos
• Add better JSON formatting for affected stacks
• Add additional debugging info

Andrew Ochsner avatar
Andrew Ochsner

Hi. My team is leveraging github-action-terraform-plan-storage. Should it also store the .terraform.lock.hcl file that gets generated as part of the plan?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe so

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

Yes, as without that you won’t be able to do an exact “apply” of what was planned.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andrew Ochsner are you not seeing this behavior?

Andrew Ochsner avatar
Andrew Ochsner

well i assumed it didn’t because our pipeline team built a pipeline that saves the plan then saves the hcl lockfile separately (via save plan… so they override the path)… any chance you can point me to the code that does it? or i can go back and validate w/ the team

Andrew Ochsner avatar
Andrew Ochsner

So… it appears we leveraged mostly from github-action-atmos-terraform-plan, which saves the lock files as a separate step (implying the storePlan doesn’t by default, just the plan file that you give it). So i’m back to my conundrum of the getPlan not overwriting the lock file if it exists (because it’s committed to my repo): https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L276-L299

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun is OoO and won’t have a chance to look at it until next week

Andrew Ochsner avatar
Andrew Ochsner

ok

Andrew Ochsner avatar
Andrew Ochsner

just bumping this to see if @Matt Calhoun has a chance to look at it

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Gabriela Campana (Cloud Posse)

1
Andrew Ochsner avatar
Andrew Ochsner

bump?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

~Hey @Andrew Ochsner, reviewing your conundrum now. Why do you need to commit your TF lock file? The lock file used when Terraform is planned must be the same as the lock file used to apply that planfile. That’s why we store both (separately)~ @Igor Rodionov’s got this covered

1
Igor Rodionov avatar
Igor Rodionov

@Andrew Ochsner You are right that the lock file is very important for the terraform plan / apply workflow. Our GitHub action cloudposse/github-action-atmos-terraform-plan stores it for consistency reasons. But as far as I know, some companies prefer to commit the lock file and do not want to regenerate it on each run. Actually, I was thinking of adding a GitHub Action input to skip storing lock files. Does the problem block you or cause any functionality failure?

1
Andrew Ochsner avatar
Andrew Ochsner

well we definitely want to commit the lock files… that’s terraform’s recommendation: https://go.microsoft.com/fwlink/?linkid=2139369 “Terraform automatically creates or updates the dependency lock file each time you run the terraform init command. You should include this file in your version control repository so that you can discuss potential changes to your external dependencies via code review, just as you would discuss potential changes to your configuration itself.”

Andrew Ochsner avatar
Andrew Ochsner

and we definitely want to store the lock file that was used when we store the plan

Andrew Ochsner avatar
Andrew Ochsner

but it appears (just deducing from the atmos terraform plan action) that it takes two “store-plan”s to store both the plan and the lockfile… https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L276-L299

Igor Rodionov avatar
Igor Rodionov

sure, but that does not break anything

Igor Rodionov avatar
Igor Rodionov

only +1 record per run

Andrew Ochsner avatar
Andrew Ochsner

but … because i’ve committed my lockfile… the retrieve fails because it already exists locally

Igor Rodionov avatar
Igor Rodionov

yea, you will see the error message, but as far as I know it would not break the apply operation

Andrew Ochsner avatar
Andrew Ochsner

i’m good w/ all of it except the fact that it fails because it checks if the file exists

Andrew Ochsner avatar
Andrew Ochsner

it does

Igor Rodionov avatar
Igor Rodionov

ok. I’ll fix that tomorrow

1
Andrew Ochsner avatar
Andrew Ochsner

thanks!

Andrew Ochsner avatar
Andrew Ochsner

ultimately i want it to overwrite the existing lockfile (from SCM) w/ the one stored w/ the plan

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that the lockfile’s contents can vary depending on the platform used to generate it. Unless the local runtime is identical to the GHA runners, it won’t work the right way. I think that’s why we generate the lock file as part of the plan process, store it, then use it as part of the apply process.

Andrew Ochsner avatar
Andrew Ochsner

that’d be new to me… all will be on the same version of terraform, but what other differences would impact that?

Andrew Ochsner avatar
Andrew Ochsner

haven’t experienced that yet thus far TBH

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

providers are OS/CPU specific, and each one has a different hash. It used to be, at least, that the lock file was generated based on the providers from terraform init and not a hash of each version for each arch.

e.g.
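
For illustration (a sketch, not from the thread): Terraform’s providers lock command can pre-record provider hashes for several platforms in the lock file, so the same lock file works across OS/CPU combinations; the platform list is illustrative:

terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_amd64 \
  -platform=darwin_arm64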

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If it’s not affecting you, maybe a) it was fixed, or b) it ignores the lock file when no matching entry exists for that arch

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov any updates here?

Igor Rodionov avatar
Igor Rodionov

no. actually, reading the conversation I have doubts about what the right path should be. @Erik Osterman (Cloud Posse) made the right point about a lock file created on a different OS breaking workflows. But if we want to override the lock file, why @Andrew Ochsner do you want to commit the lock file then?

Andrew Ochsner avatar
Andrew Ochsner

because it’s what terraform recommends (https://developer.hashicorp.com/terraform/language/files/dependency-lock#lock-file-location) I haven’t experienced what Erik describes. And the reason to do it is to ensure reproducibility at a given point in time… to know the exact versions that were used at a point in time

Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform | HashiCorp Developerattachment image

Terraform uses the dependency lock file .terraform.lock.hcl to track and select provider versions. Learn about dependency installation and lock file changes.

Andrew Ochsner avatar
Andrew Ochsner

much like we commit lockfiles for go, and npm, and python and….

Igor Rodionov avatar
Igor Rodionov

then why do you want to override it?

Andrew Ochsner avatar
Andrew Ochsner

because i want the lockfile captured when the planfile was stored to definitely be the one that is used on the apply

Andrew Ochsner avatar
Andrew Ochsner

Also, related to github-action-terraform-plan-storage why such an explicit check when getting the plan if a planfile already exists? I kinda expected default behavior to overwrite or at least be able to set an input var to overwrite… but maybe i’m missing something https://github.com/cloudposse/github-action-terraform-plan-storage/blob/main/src/useCases/getPlan/useCase.ts#L39-L42

  const planFileExists = existsSync(pathToPlan);
  if (planFileExists) {
    return left(new GetTerraformPlanErrors.PlanAlreadyExistsError(pathToPlan));
  }
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

  const planFileExists = existsSync(pathToPlan);
  if (planFileExists) {
    return left(new GetTerraformPlanErrors.PlanAlreadyExistsError(pathToPlan));
  }
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

The reason we are checking that is that we don’t expect a plan file with the same name as the one we are retrieving to exist on disk at that point, so if it does, we consider that an error condition. We could consider adding a flag for overwrite, but could you provide the use case for this so we can think it through a little bit more? Thanks!

Andrew Ochsner avatar
Andrew Ochsner

sure.. it’s a little bit related to the question above…. we have a savePlan that saves the plan and then another savePlan that saves the lockfile. I didn’t write it so I don’t know if the first savePlan actually saves the lock file.. if so this is all moot.

it was the 2nd getPlan, for the lock file, where we ran into issues… and i definitely want it to overwrite whatever lock file is there and use the one that was generated or existed when the plan was built

Andrew Ochsner avatar
Andrew Ochsner

so i think i don’t care about this as much so long as the save plan saves the lock file; then we can get rid of the extra step someone put in there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Andrew Ochsner as a side note, just wanted to make sure you knew that the github-action-terraform-plan-storage action was written to support multiple clouds, with AWS and Azure presently implemented. It’s a pluggable implementation, so if you’re doing this for GCP, it would be ideal to contribute it back to this action)

Andrew Ochsner avatar
Andrew Ochsner

oh 100% would… we’re leveraging azure plans but storing in s3 buckets lol

Matt Calhoun avatar
Matt Calhoun

Save Plan is probably a bad name since the action only saves one file at a time (the plan or the lockfile), which is why it is being called twice (once for the planfile and once for the lockfile).

Andrew Ochsner avatar
Andrew Ochsner

agree… the problem i run into is i want the lockfile to be overwritten on the get/retrieve plan for the lockfile… because we commit our lockfiles per terraform recommendations, and right now it fails if a file exists (it assumes a plan, but in this case it’s the lockfile), which it does because it’s in SCM

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, we’ll get this fixed soon. It’s on @Igor Rodionov’s plate now.

1
Andrew Ochsner avatar
Andrew Ochsner

appreciate it btw… not a burning thing. we’re just deleting lockfiles but eventually would like to commit them back in

1

2024-09-05

Ryan avatar

Running into a fun issue this AM where I’m getting invalid stack manifest errors. I haven’t touched anything aside from trying to run one plan to test some work for next week. It started with me getting back an error that a var was defined twice in an older piece of work, and at first I’m like, were these defined this way when committed? and yep. The really strange thing for me now is that it’s telling me there’s a variable that’s already defined, but I removed it, saved it, and tried to run my plan again, but it says it’s still there.

1
Ryan avatar

Example error -

╷
│ Error: invalid stack manifest 'mgmt/demo/root.yaml'
│ yaml: unmarshal errors:
│   line 83: mapping key "s3_bucket_name" already defined at line 77
│
│   with module.iam_roles.module.account_map.data.utils_component_config.config,
│   on .terraform\modules\iam_roles.account_map\modules\remote-state\main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│
╵
exit status 1
Ryan avatar

VSCode’s even pointing at the line and I’m like it’s not there lol.

Ryan avatar

Ok I got it back to functioning again. Essentially I had many old template files where team members had put some duplicate variables in the template. In the past it didn’t seem to care, but today it was finding every place one of those was and forcing me to correct it. Not a huge issue, other than I wish I knew why it decided to do that now.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Ryan can you please confirm you don’t need any assistance here?

Ryan avatar

I am good now, apologies. Really strange coming back from a few weeks OOO. Thanks for checking.

1
Yangci Ou avatar
Yangci Ou

Hey all, does anyone have experience with renaming an Atmos stack without recreating the resources? As far as I know, an Atmos stack corresponds to a Terraform workspace, so is it similar to renaming a workspace in Terraform and updating the Atmos stack name in the yaml file? I have the state in S3, so move the state/push the old workspace state to the new workspace state?

1
RB avatar

You’d need to go into the base component directory, switch to the correct workspace, dump the state, create a new workspace, switch to the new workspace, and push the state (see the sketch below).
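
A sketch of those steps with plain Terraform commands (the component path and workspace names are illustrative):

cd components/terraform/<component>           # base component directory
terraform workspace select old-stack-name     # switch to the correct workspace
terraform state pull > old.tfstate            # dump the state
terraform workspace new new-stack-name        # create and switch to the new workspace
terraform state push old.tfstate              # push the state into it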

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also override the workspace name in the stack configurations to maintain the original workspace
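
A sketch of that override in a stack manifest, assuming the metadata.terraform_workspace setting (the component and workspace names are illustrative):

components:
  terraform:
    vpc:
      metadata:
        # keep pointing at the original Terraform workspace after renaming the stack
        terraform_workspace: old-workspace-name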

2
Yangci Ou avatar
Yangci Ou

Great info, thanks!

2024-09-06

Roman Orlovskiy avatar
Roman Orlovskiy

Hi all. I am considering using atmos and Cloudposse root components on my new project with an existing AWS org that was created using ControlTower before I came in. So I am wondering if it is still possible to leverage the root components and atmos in this case, and are there any examples of this setup? As I understand it, the main issue will be due to the usage of the account/account-map/sso components. Do I need to import the existing AWS org setup into them, or implement some replacement root components instead of account/account-map to provide account-based data to other components?

Igor M avatar

I recently found myself in the same position, and I advocated to start with a fresh org/set of accounts. There is documentation in Atmos on Brownfield but if there is an option to rebuild/migrate, it might be easier to go down this road.

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

1
RB avatar

Take a look at the following prs. This is how I’ve personally gotten this to work.

• https://github.com/cloudposse/terraform-aws-components/pull/945 - account map
• https://github.com/cloudposse/terraform-aws-components/pull/943 - account

These PRs weren’t merged based on Nuru’s comment. I haven’t had the time to address the needs in a separate data source component. On my eventual plate. But if you feel ambitious, please feel free to address it so we can have a maintained upstream component for this.

For the account component, i used the yaml to define my accounts, which gets stored in its outputs. Usually this component will create the accounts too. (The original reason i didn’t want to create a new component is that the logic of this component is fairly complex, and i didn’t want to copy that logic and have it break between the existing and new flavors of the account components.) This is a root-level component, meaning only a superuser can apply it.

the account-map is also a root level component and it reads the output of the account component and has its own set of outputs. These are read by all the downstream components and can be read by any role.

1
1
RB avatar

These flavors above are as-is and unofficial from me, so if you decide to use them, please use them with caution

2024-09-09

Patrick avatar
Patrick

Hi CloudPosse Team, I got an issue when I enabled vpc_flow_logs_enabled: true in the quick-start-advanced lab. I also created the vpc-flow-logs-bucket before. I spent around 2 days on this issue but I can’t fix it. I don’t know why. Can you help me check this? Thanks. Here is my code:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Using Remote State | atmos

Terraform natively supports the concept of remote state and there’s a very easy way to access the outputs of one Terraform component in another component. We simplify this using the remote-state module, which is stack-aware and can be used to access the remote state of a component in the same or a different Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when using remote-state, your atmos.yaml will not work if placed in the root of the repo (since Terraform executes the providers from the component directory)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to place atmos.yaml into one of the known paths

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
System dir (/usr/local/etc/atmos/atmos.yaml on Linux, %LOCALAPPDATA%/atmos/atmos.yaml on Windows)

Home dir (~/.atmos/atmos.yaml)

Current directory

ENV variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH
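
For example, the ENV var route looks like this (the paths, component, and stack are illustrative):

export ATMOS_CLI_CONFIG_PATH=/path/to/repo/rootfs/usr/local/etc/atmos   # directory containing atmos.yaml
export ATMOS_BASE_PATH=/path/to/repo
atmos terraform plan vpc -s dev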
Patrick avatar
Patrick

noted & thanks @Andriy Knysh (Cloud Posse) Let me check that.

Patrick avatar
Patrick

Hi @Andriy Knysh (Cloud Posse) I fixed this issue, thank you!

Miguel Zablah avatar
Miguel Zablah

Hey, I have a question about these docs: https://atmos.tools/integrations/github-actions/atmos-terraform-drift-detection

it mentions a ./.github/workflows/atmos-terraform-plan-matrix.yaml but I don’t see it anywhere. does it exist?

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dan Miller (Cloud Posse) can you please take a look at this ?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Yes, that should’ve been included on that page, but it’s missing. Here’s a link to our Cloud Posse reference architecture that has the same workflows

https://docs.cloudposse.com/layers/gitops/example-workflows/#atmos-terraform-plan-matrix-reusable

Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

Miguel Zablah avatar
Miguel Zablah

ah thanks! but I think this is outdated; the atmos-terraform-drift-detection

does not work for me

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

Miguel Zablah avatar
Miguel Zablah

I have solved the issue with this action:

cloudposse/github-action-atmos-terraform-select-components

that is the same one that @Igor Rodionov and @Yonatan Koren helped solve for plan and apply here: https://sweetops.slack.com/archives/C031919U8A0/p1724845173352589

Hey guys I found a potential bug on the atmos plan workflow for github actions,

You guys have this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L110

It will run atmos describe <component> -s <stack>, but this requires authentication to have happened, and that is something that happens after this, on this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L268

I have fixed this by doing the authentication before the action runs in the same job, which looks to do the trick, but maybe it’s something that should be documented or fixed in the action

Miguel Zablah avatar
Miguel Zablah

then the matrix for the plan that that job outputs is not valid when doing this plan

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

The latest versions of all workflows are on that same page I linked above. That page is automatically updated with the latest version, whereas atmos.tools is manually updated with a point-in-time version. I recommend using the docs.cloudposse version for now, and I will update atmos.tools

https://docs.cloudposse.com/layers/gitops/example-workflows/

Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

Miguel Zablah avatar
Miguel Zablah

yeah, I think this needs some updates; this workflow is not working

Patrick avatar
Patrick

hi @Dan Miller (Cloud Posse) Do we have any documentation for GitLab CI/CD?

Miguel Zablah avatar
Miguel Zablah

@Dan Miller (Cloud Posse) with the docs on the workflow that you provided I was able to make the drift detection work, thanks!

2
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

hi @Dan Miller (Cloud Posse) Do we have any documentation for GitLab CI/CD? unfortunately no, we only use GitHub at this time

1
Patrick avatar
Patrick

Thank you!

2024-09-10

Patrick avatar
Patrick

Hi Team, I’m using the ec2-instance component for provisioning an EC2 instance on AWS. But I want to change several variables that don’t appear in the ec2-instance component at the root, like root_volume_type. How can I invoke these variables from .terraform/modules/ec2-instance/variables.tf in the catalog config? Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Some components do not have the variables defined. Not by design, but just because we hadn’t needed them before. Feel free to open a PR against the components repo to add the missing variables and pass them through to the underlying module(s)

1
Patrick avatar
Patrick

@Erik Osterman (Cloud Posse) You mean I have to customize the component to suit my needs, right?

RB avatar

You’d need to modify the component to expose the inputs of the module with copied and pasted variables.

E.g. the ec2 instance module has a variable for ami, so the ami variable needs to be set in 2 additional places: once at the module level within the component, and it would also need a variable definition

In component main.tf

# in component
module "ec2" {
  # ...
  ami = var.ami
  # ...
}

And in component variables.tf

variable "ami" {
  default = null
}

Now the input can be exposed to the Atmos yaml interface

components:
  terraform:
    ec2-instance/my-fav-singleton:
      # ...
      vars:
        ami: ami-0123456789
RB avatar

For one of the components, rds, i recall copying all the variables from the upstream module directly into the component (with the convention {component}-variables.tf) and exposing each input in the module definition within the component, which worked fairly well

https://github.com/cloudposse/terraform-aws-rds/blob/main/variables.tf

https://github.com/cloudposse/terraform-aws-components/blob/main/modules/rds/rds-variables.tf

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


You mean I have to customize the component to suit my needs, right?
Yes, although if it’s just adding some variables I think it’s suitable to upstream for everyone

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the method @RB mentions is also a good technique, similar to monkeypatching or swizzling.

https://developer.hashicorp.com/terraform/language/files/override

Override Files - Configuration Language | Terraform | HashiCorp Developerattachment image

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

1
Patrick avatar
Patrick

Thanks @RB, @Erik Osterman (Cloud Posse). If I modify config in components like main.tf and variables.tf, it works for me, but if I run the command atmos vendor pull in the future, it will overwrite my change.

RB avatar

Yes, that’s true. You could modify the component locally, make sure it works, and if it does, upstream it

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


It works for me but If I run the command atmos vendor pull in the future, It will overwrite my change.
Unless you use the terraform overrides pattern

1
RB avatar

That is pretty cool how the override will just merge the definitions together. So basically you can do something like this, is that right?

# components/terraform/ec2-instance/main_override.tf
variable "root_volume_type" {}

module "ec2_instance" {
  root_volume_type = var.root_volume_type
}

Strange that that page mentions almost every block except module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it works with module blocks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But even ChatGPT says I’m wrong, so

RB avatar

Let’s assume it does work. For convenience, would you folks be open to PRs for all components to follow a convention like {upstream-module}-variables.tf?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not following. From my POV, the whole point of overrides is you have full control and we don’t need a strict convention per se.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or in other words, the point is not to upstream override files

RB avatar

So let’s use the component ec2-instance as an example. We know that root_volume_type is missing from the component variables.tf file but exists in the upstream module terraform-aws-ec2-instance.

In the rds component example, we use rds-variables.tf, which is a copy of its upstream module’s variables.tf file. After implementing that change, the rds component’s variables.tf file only contains the region and other component-specific variables.

We can do the same for ec2-instance and other components too. This way the override method could be used as a stop-gap until all components are updated.

One more benefit of this approach is that cloudposse can also have a github action that monitors the upstream module so when its inputs are modified, it can auto copy those inputs directly into the component itself.

Difficulties here

• adding the input directly to the module definition, however, this can also be remediated using something like hcledit.

• If multiple upstream modules are used in a component, such as eks/cluster, then a copied file would also need to have each input var prefixed with the module name to avoid collisions

RB avatar

friendly ping ^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure we will invest in automating those kinds of updates. That said, we plan to support JIT vendoring of modules, and using the existing support for provider and backend generation, you will be able to provision modules directly as components with state.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not exactly the same thing, but means fewer custom components are required

RB avatar

Oh wow so a lot of the components can be deleted if all they do is wrap an upstream module.

But there are some components that wrap a single module but also have some component logic under the hood. I imagine those components would still be retained unless their logic somehow gets moved into their upstream module

RB avatar

I think there might still need to be a stop gap in place for components that use multiple modules, or for components whose logic can’t be easily moved into the modules they wrap, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We will continue to manage/develop components, but adopt a hybrid model

1
RB avatar

For the modules that still need to be developed, would y’all consider the {upstream-module}-variables.tf convention?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I still don’t follow why such a convention is necessary. Can you elaborate? What problem is it addressing?

RB avatar

Ah sorry, let me think of what i can add to https://sweetops.slack.com/archives/C031919U8A0/p1726067246212129?thread_ts=1725969302.837589&cid=C031919U8A0

It’s just all about scalability. Anything that’s not DRY (in this case, copying vars from the module to the component) requires a system to keep these vars in line. So if we maintain separate files for component vars and module vars, as opposed to a single variables.tf, then we can simplify upgrading to a newly exposed var by downloading the module’s vars into the spec {module}-variables.tf

RB avatar

So instead of one component variables.tf file that contains

# e.g. component vars
variable region {}

# e.g. module vars
variable "engine_version" {}

So for example, we put var region in variables.tf and put var engine_version in rds-variables.tf

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha i see what you mean now. Let me think on it

1
Igor M avatar

Is there a way to hide abstract components from the atmos UI / component list?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor M do you mean the terminal UI when you execute the atmos command?

Igor M avatar

That’s right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we def should not show the abstract components in the UI (since we don’t provision them). I’ll check it, the fix will be in the next release

2
Igor M avatar

another minor observation for the UI - the up/down/left/right letter keys don’t work when within filter (/) mode

2024-09-11

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:wave: Hi everyone, I’m looking into the Atmos project and I want to say it’s awesome, it should make our lives easier. Can you please tell me if it is possible to specify a single component or a group of components and exclude one or more components during the deployment? That is, we need to be able to have multiple engineers working on the same stack, and if someone makes a change, they can roll back the changes of another colleague. Of course, we can look at the plan, but is there a possibility to just specify what should be deployed? There are commands in terragrunt: --terragrunt-include-dir, --terragrunt-exclude-dir, --terragrunt-strict-include, --terragrunt-ignore-external-dependencies, etc. We need the same in Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In Atmos, when you break your infrastructure into components, you’re essentially working with one component at a time. The behavior you’re describing would, therefore, work out of the box.

When you run Atmos commands like atmos terraform apply $component_name --stack foo, you only target that specific component. This is somewhat analogous to the Terragrunt --include-dir behavior, but with Atmos, you’re naturally working with one component at a time. Atmos does not support a DAG at this time for automatic dependency order applies. For that we use workflows instead.

:wave: Hi everyone, I’m looking into the Atmos project and I want to say it’s awesome, it should make our lives easier. Can you please tell me if it is possible to specify a single component or a group of components and exclude one or more components during the deployment? That is, we need to be able to have multiple engineers working on the same stack, and if someone makes a change, they can roll back the changes of another colleague. Of course, we can look at the plan, but is there a possibility to just specify what should be deployed? There are commands in terragrunt: --terragrunt-include-dir, --terragrunt-exclude-dir, --terragrunt-strict-include, --terragrunt-ignore-external-dependencies, etc. We need the same in Atmos.

Petr Dondukov avatar
Petr Dondukov

Thank you so much for your reply! What do you mean by workflows?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
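
For reference, workflows are defined in workflow manifests; a minimal sketch (the file path, workflow name, components, and stack are illustrative):

# stacks/workflows/deploy.yaml (sketch)
workflows:
  deploy-infra:
    description: Apply components in dependency order
    steps:
      - command: terraform apply vpc -s plat-ue2-dev
      - command: terraform apply eks -s plat-ue2-dev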

2024-09-12

Dennis DeMarco avatar
Dennis DeMarco

Hi. I’m trying to use private_subnets: ‘{{ (atmos.Component “vpc” .stack).outputs.private_subnets }}’

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please see some related threads in this channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

these two calls will execute terraform output just once for the same stack

vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are working on improvements for this, using YAML explicit types

Dennis DeMarco avatar
Dennis DeMarco

Alright. I think this is closer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key part is toRawJson

Dennis DeMarco avatar
Dennis DeMarco
"private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
Dennis DeMarco avatar
Dennis DeMarco

Almost, going to grab some food. I’m getting closer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Dennis DeMarco avatar
Dennis DeMarco

hmm, do I have a setting wrong? Does output need to do json? I put it on trace and saw Executing ‘terraform output vpc -s lab’

Dennis DeMarco avatar
Dennis DeMarco
Executing template function 'atmos.Component(vpc, lab)'
Found the result of the template function 'atmos.Component(vpc, lab)' in the cache
'outputs' section:
azs:
    - us-east-1c
    - us-east-1d
nat_public_ips:
    - 52.22.211.159
private_subnets:
    - subnet-00ec468ff0c78bf7c
    - subnet-029a6827865e98783
public_subnets:
    - subnet-02ab8a96c39abf71a
    - subnet-0b58b0578f1ec373f
stage: lab
vpc_cidr_block: 10.0.0.0/16
vpc_default_security_group_id: sg-0b124eb183c6b52ba
vpc_id: vpc-09ead1bd292122e33

ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 2
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'

Variables for the component 'test' in the stack 'lab':
private_subnets: '["subnet-00ec468ff0c78bf7c","subnet-029a6827865e98783"]'
stage: lab
vpc_id: vpc-09ead1bd292122e33

Writing the variables to file:
components/terraform/test/lab-test.terraform.tfvars.json

Using ENV vars:
TF_IN_AUTOMATION=true

Executing command:
/opt/homebrew/bin/terraform init -reconfigure
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of cloudposse/utils from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed cloudposse/utils v1.26.0
- Using previously-installed hashicorp/aws v5.66.0

Terraform has been successfully initialized!

Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: test
Stack: lab
Working dir: components/terraform/test

Executing command:
/opt/homebrew/bin/terraform workspace select lab

Executing command:
/opt/homebrew/bin/terraform plan -var-file lab-test.terraform.tfvars.json -out lab-test.planfile

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Invalid value for input variable
│
│   on lab-test.terraform.tfvars.json line 2:
│    2:    "private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
│
│ The given value is not suitable for var.private_subnets declared at variables.tf:13,1-27: list of string required.
╵
exit status 1
Dennis DeMarco avatar
Dennis DeMarco

interesting, the tfvars look correct in the vpc, yet in the component they don’t

{
   "private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
   "stage": "lab",
   "vpc_id": "vpc-09ead1bd292122e33"
}
Dennis DeMarco avatar
Dennis DeMarco

bug I guess

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dennis DeMarco this is a limitation that we have in Atmos using YAML and Go templates in the same files.

this is invalid YAML, because { ... } is a map in YAML (same as in JSON), but a double {{ breaks the YAML parser:

a: {{ ... }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we have to quote it

a: "{{ ... }}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when we quote it, it’s a string in YAML. And the string is sent to the TF component input

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I personally have not found a good way to deal with that. There is a thread with the same issue, and one person said they found a solution (I can try to find it)

Dennis DeMarco avatar
Dennis DeMarco

I tried it, but did not get good results today

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another way to deal with that is to change the TF input type from list(string) to string and then do jsondecode on it (but that will require changing the TF components)
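A minimal sketch of that workaround on the Terraform side (variable and local names are hypothetical):

variable "private_subnets" {
  # was list(string); accept the JSON-encoded string coming from Atmos
  type = string
}

locals {
  # decode the string back into a list(string)
  private_subnets = jsondecode(var.private_subnets)
}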

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, we are working on a new way to define atmos.Component function - not as a Go template, but as YAML custom type

a: !terraform.output <component> <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this should be done in the next week or so, we’ll let you know

Dennis DeMarco avatar
Dennis DeMarco

Ah alrighty. Thank you. I did a workaround by just using data aws_subnets with the vpc id that came over.
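A sketch of that workaround, assuming the VPC ID is passed in as a plain string:

data "aws_subnets" "private" {
  # look up the subnets by VPC ID instead of passing the list through Atmos
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}

# the subnet IDs are then available as data.aws_subnets.private.ids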

Dennis DeMarco avatar
Dennis DeMarco

But I was like a dog with a bone trying to make it work today hah

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, and as I mentioned, you can change the var type to string and then do jsondecode on it

Dennis DeMarco avatar
Dennis DeMarco

Thank you very much

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF will get this as input

"[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which is a string, and after jsondecode you will get list(string)

Brian avatar
Brian

I was able to work around issues with passing more complex objects between components by changing the default delimiter for templating

Instead of the default delimiters: ["{{", "}}"]

Use delimiters: ["'{{", "}}'"].

It restricts doing some things like having a string prefix or suffix around the template, but there are other ways to handle this.

      vars:
        vpc_config: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_attrs) }}'

works with an object like

output "vpc_attrs" {
  value = {
    vpc_id          = "test-vpc-id"
    subnet_ids      = ["subnet-01", "subnet-02"]
    azs             = ["us-east-1a", "us-east-1b"]
    private_subnets = ["private-subnet-01", "private-subnet-02"]
    intra_subnets   = ["intra-subnet-01", "intra-subnet-02"]
  }
}
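For reference, the delimiters are configured in atmos.yaml under the template settings (a minimal sketch):

templates:
  settings:
    enabled: true
    # default is ["{{", "}}"]; this makes the surrounding quotes part of the delimiters
    delimiters: ["'{{", "}}'"]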
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow, what a great hack @Brian

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is super nice and even looks like real quoted templates

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Brian

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dennis DeMarco ^

Liam McTague avatar
Liam McTague

I’m trying the exact same thing using

{'{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
Liam McTague avatar
Liam McTague

but I’m getting an error: template: all-atmos-sections:120: function "toRawJson" not defined

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Default Functions

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you probably did not enable templating in atmos.yaml, in particular the Sprig functions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
templates:
  settings:
    # Enable `Go` templates in Atmos stack manifests
    enabled: true
    sprig:
      # Enable Sprig functions in `Go` templates in Atmos stack manifests
      enabled: true
    # <https://docs.gomplate.ca>
    # <https://docs.gomplate.ca/functions>
    gomplate:
      # Enable Gomplate functions and data sources in `Go` templates in Atmos stack manifests
      enabled: true
Liam McTague avatar
Liam McTague

Thanks @Andriy Knysh (Cloud Posse), I managed to get that working; I had templating disabled.

However, I still can’t seem to get the list of private subnets as a list of strings. Is there something I’m still missing?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share what you have?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Liam McTague please wait until next week, we are working on a better way to get a list (e.g. a list of subnets) in the atmos stack manifests

Liam McTague avatar
Liam McTague

You can ignore me, it was my fault… this is what happens when you try doing IaC at 03:00 AM

Liam McTague avatar
Liam McTague

Loving Atmos though!

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) what is the better way to get a list?

Miguel Zablah avatar
Miguel Zablah

@Erik Osterman (Cloud Posse) ^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) did you try that !template trick we discussed?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, I tried it, it still complains about invalid YAML if templates are used w/o quoting. It will be the same solution as for YAML explicit type functions

Miguel Zablah avatar
Miguel Zablah

So for now, what is the way to go?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos.Component | atmos

Read the remote state or configuration of any Atmos component

Miguel Zablah avatar
Miguel Zablah

I’m testing this but I have some issues: when I set delimiters: ["'{{", "}}'"] in the atmos.yaml it breaks, but idk where. Is there maybe a better way to debug this? bc it does not tell me where the problem is, I just get this error:

template: all-atmos-sections:38: unexpected "}" in operand
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah it’s not easy to figure out the issue with the templates without seeing and running the code. Having said that, in the next few days we will release a new way of getting terraform outputs in atmos stack manifests, which will work with lists and other complex types

Miguel Zablah avatar
Miguel Zablah

oh that is great news!! perfect I might wait for that then, thanks @Andriy Knysh (Cloud Posse)!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Miguel Zablah that error you are getting indicates your YAML is not using quotes around the template values. Can you share the configuration of the stack? I can point it out

Miguel Zablah avatar
Miguel Zablah

@Erik Osterman (Cloud Posse) no worries, I just found my issue: it was bc I did string interpolation in some parts, so I changed that to use the printf function instead and it works
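For example, with these delimiters a literal prefix or suffix around the template breaks the parsing, so the concatenation can move inside the template (a hypothetical sketch):

vars:
  # instead of something like prefix-'{{ .vars.stage }}', build the whole string in the template
  name_prefix: '{{ printf "%s-%s" .vars.namespace .vars.stage }}'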

Dennis DeMarco avatar
Dennis DeMarco

And it’s returning this

Dennis DeMarco avatar
Dennis DeMarco

"private_subnets": "[subnet-00ec468ff0c78bf7c subnet-029a6827865e98783]",

Dennis DeMarco avatar
Dennis DeMarco

expected something like

Dennis DeMarco avatar
Dennis DeMarco

"private_subnets": [ "subnet-00ec468ff0c78bf7c", "subnet-029a6827865e98783" ],

Dennis DeMarco avatar
Dennis DeMarco

I suspect the template component is doing something with string lists, but I can’t figure out how to stop it

2024-09-13

jose.amengual avatar
jose.amengual

I have a stack that has some core components in it, and the stack names are based on environment: us2 namespace: dev. But now we need to move this core stuff to another stack file that we will call core. The problem is that the components are already deployed and using the context (so all the names are using namespace and environment), so I guess the only way for me to be able to move these components without having to redeploy (due to the name change) is to override the namespace or environment variable at the component level, but I know that is not a recommended practice. Is there a better way to do this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


but now we need to move this core stuff to another stack file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in Atmos, a stack and a stack file (stack manifest) are different things

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use any file names, in any folder, and move the components between them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stack files are for people to organize the stack hierarchy (folder structure)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stack is a logical construct, and it can be defined in any stack manifest (or in more than one)

jose.amengual avatar
jose.amengual

yes, it can be pepe.yaml, but in this case we need to move it to another stack (not file) to run the workflows at different times

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you want to move some components into another Atmos stack (like from plat-ue2-dev to core-ue2-dev)?

jose.amengual avatar
jose.amengual

correct, because at some point they might be owned by someone else

jose.amengual avatar
jose.amengual

I remember you told me to avoid overriding the variables used to name stacks since it is a bad idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Import the components into the new stack manifest file
  2. (remove them from the old manifest file)
  3. Override terraform workspace for the components
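A minimal sketch of step 3 in the new stack manifest (the component name is hypothetical; metadata.terraform_workspace pins the workspace so the existing state keeps being used):

components:
  terraform:
    vpc:
      metadata:
        # keep the workspace from the old stack to avoid re-provisioning
        terraform_workspace: plat-ue2-dev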
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I remember you told me to avoid overriding the variables used to name stacks since it is a bad idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a bad pattern, but can be used in cases like that (and documented)

jose.amengual avatar
jose.amengual

documented, can you point me to the doc?

jose.amengual avatar
jose.amengual

when you say import, you are talking about moving it to the new .yaml file, not using import: on the yaml file, right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also override the workspace name in the stack configurations to maintain the original workspace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos component migration in YAML config | atmos

Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.

jose.amengual avatar
jose.amengual

thanks guys

2024-09-14

2024-09-16

Roman Orlovskiy avatar
Roman Orlovskiy

Hi all. I need some help to understand the difference between the AWS account name and the stage, and their relation. How do those two relate? Should they be identical or not? If not, then how to deal with account-map containing the AWS account names as keys in account_info_map but using the stage value to locate the necessary AWS account instead?

Roman Orlovskiy avatar
Roman Orlovskiy

For example, I configured the following AWS OU and account based on this example here:

- name: labs
  accounts:
  - name: labs-sandbox
    tenant: labs
    stage: sandbox
    tags:
      eks: true

As result, account-map creates account_info_map output with

account_info_map = {
  ...
  "labs-sandbox" = {
    ....
    "ou" = "labs"
    "parent_ou" = "none"
    "stage" = "sandbox"
    "tenant" = "labs"
  }
}

I prepared the necessary stacks for it:

├── mixins
    ...
│   ├── stage
│   │   ├── dev.yaml
│   │   ├── prod.yaml
│   │   ├── sandbox.yaml        # stage = "sandbox"
│   └── tenant
        ...
│       ├── labs.yaml
├── orgs
│   └── acme
        ...
│       ├── labs
│       │   ├── _defaults.yaml
│       │   └── labs-sandbox
│       │       ├── _defaults.yaml
│       │       ├── global-region.yaml
│       │       └── us-east-1.yaml

However, because the stage is set to sandbox (and not labs-sandbox like the AWS account name) I am getting this:

atmos terraform apply aws-team-roles --stack labs-gbl-sandbox

╷
│ Error: Invalid index
│
│   on ../account-map/modules/iam-roles/main.tf line 49, in locals:
│   49:   is_target_user = local.current_identity_account == local.account_map.full_account_map[local.account_name]
│     ├────────────────
│     │ local.account_map.full_account_map is object with 8 attributes
│     │ local.account_name is "sandbox"

If I set stage: labs-sandbox and run atmos terraform apply aws-team-roles --stack labs-gbl-labs-sandbox instead, then it works.

Am I missing something here?

Roman Orlovskiy avatar
Roman Orlovskiy

However, the recommendation here is not to use dashes in the tenant/stage/env, so I am not sure how to make this work properly

Roman Orlovskiy avatar
Roman Orlovskiy

I found this explanation and this example of the descriptor_formats usage, which resolved my issue
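For reference, a minimal sketch of that fix in the stack config (the exact labels are an assumption based on this setup):

terraform:
  vars:
    descriptor_formats:
      account_name:
        # build the account name as <tenant>-<stage>, e.g. labs-sandbox
        format: "%v-%v"
        labels: ["tenant", "stage"]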

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


AWS account name and the stage and their relation.
By convention, in our refarch we generally provision a dedicated AWS account per stage. That keeps IAM level boundaries between stages.


2024-09-18

2024-09-19

Michal Tomaszek avatar
Michal Tomaszek

hey, are there any examples of using Geodesic as a container in Atmos related GitHub Actions workflows (e.g. github-action-atmos-affected-stacks)? since Geodesic (or customized image based on it) contains Atmos, Terraform, etc. I guess it could be used this way. do you see any cons with this approach?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) I think we can publish this as a snippet on the docs site

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

to clarify here, we don’t use a unique container in GitHub Actions, but we do have an example of building Geodesic with Atmos, Terraform, etc for local dev usage

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

The main con of that approach would be the time to pull that image. Historically, Geodesic has been a larger image that we use interactively. @Jeremy G (Cloud Posse) has made significant strides in reducing that image size, but generally we still use smaller, minimal images with actions

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Geodesic (after you customize it) is meant to be a complete toolbox for interactive use. We generally consider it to be too big for GitHub Actions, although we do use it as the basis for automated Terraform actions, since the customized version includes all the configuration information (via Atmos Stacks).

Michal Tomaszek avatar
Michal Tomaszek

I use an image based on Geodesic v3 myself for local development, so that part is not an issue. As was already mentioned, v3 reduced the size substantially, hence the shenanigans to keep remote usage consistent with the local environment. Recently, I was exploring the idea of using it in GitHub Actions workflows as it contains everything necessary. From my experience, Actions that install terraform, Atmos, linters, etc. (whatever is needed in a specific workflow) take a similar amount of time as pulling the image from Docker Hub. I tried to pull the same image from GHCR and the time it took was slightly longer. I was reading somewhere that the data centers used for GHCR are not the same as the ones for GitHub runners, so the benefit is lost. The last thing I checked was caching the image in GitHub Cache. For my size of image (Geodesic + a couple of hundred MBs) it was slower than pulling the image from Docker Hub. However, I read somewhere that for larger images (like 4 GB I think), the time needed to load it from cache was almost 2 times shorter.

jose.amengual avatar
jose.amengual

how do you guys run this:

atmos-affected:
    runs-on: ubuntu-latest
    environment: XXXX
    permissions:
      id-token: write
      contents: read
    steps:
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3
        with:
          atmos-config-path: "./config"
          nested-matrices-count: 1
          atmos-version: 1.88.0

to use per environment credentials?

jose.amengual avatar
jose.amengual

it looks like describe affected will run against all the environments, so if I run this it will fail to authenticate to all environments except for the one the job is running in

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Depends on how you architect the IAM roles for workflows. We generally have a role assumed via GH OIDC that can plan across all environments
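For example, a sketch of that pattern (the role ARN is hypothetical):

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          # one role assumed via GitHub OIDC, trusted to plan across all accounts
          role-to-assume: arn:aws:iam::111111111111:role/gitops-plan
          aws-region: us-east-2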

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share some more about how you designed your roles?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, I think we publish our workflows in the new docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

jose.amengual avatar
jose.amengual

we are in Azure…

jose.amengual avatar
jose.amengual

so the subscription-ID will have to be passed to the provider, similar to the AWS roles you are talking about

jose.amengual avatar
jose.amengual

but is it correct to say that describe-affected runs on all stacks at once?

jose.amengual avatar
jose.amengual

it would be nice if you could pass a param -stack to it so it runs on that stack only

jose.amengual avatar
jose.amengual

that way, you could matrix against the github environments in the repo, grab the credentials and aggregate the affected-stacks.json or have one per each

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using the atmos.Component template function?

jose.amengual avatar
jose.amengual

we are

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Got it. So with how templating works, it will be a problem. But the good news is that we are fixing this in a more elegant way using YAML explicit types. With that we can defer evaluation and do what you need.

jose.amengual avatar
jose.amengual

ohhh interesting

jose.amengual avatar
jose.amengual

actually, for this use case, having a -stacks flag on describe affected could be a good solution

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) keep this use-case in mind when implementing !terraform.outputs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It might be a few weeks @jose.amengual, but we are implementing this regardless, due to performance issues with templating and component outputs

jose.amengual avatar
jose.amengual

no worries for now I will make a hack solution of deleting the stacks directories that I do not want atmos to process

jose.amengual avatar
jose.amengual

I just need to figure out where atmos checks out the ref locally to do that

jose.amengual avatar
jose.amengual

ufff that hack is not going to work, atmos is actually doing git checkout main….

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

keep this use-case in mind when implementing !terraform.outputs

ok, we’ll keep that in mind, thanks @jose.amengual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual let me know how we can help (note that we can’t currently change how the template function atmos.Component works b/c it depends on how Go templating engine works)

jose.amengual avatar
jose.amengual

my problem right now is that describe affected does all the stacks at once, and since I’m using github repo environments to get the secrets as ENV variables to pass to the provider for authentication, I need to somehow tell atmos to just use one stack

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea not technically possible with templating. With templating the document is treated as text and we don’t have control over when certain template functions are evaluated.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

With explicit types we have full control

jose.amengual avatar
jose.amengual

but wait, if my template is localized to the stack, then if describe affected only parses one stack, it will work

jose.amengual avatar
jose.amengual

we do not have template functions that use data from other stacks

jose.amengual avatar
jose.amengual

in this case

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But that’s not how files are loaded right now. We first build a complete in-memory data model of all configs, fully deep-merged. I think @Andriy Knysh (Cloud Posse) is writing up something that does a better job explaining how everything is loaded.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s probably possible to add a --stack param to describe affected

jose.amengual avatar
jose.amengual

PLEASE PLEASE add this, filtering the output of the matrix to do this was a painful exercise and not very fast, this can improve the speed of planning quite a lot

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) FYI

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual we’ll add this in the next few days

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jose.amengual avatar
jose.amengual

I saw it

2024-09-20

jose.amengual avatar
jose.amengual

How do you gate what gets deployed, and how do stacks get deployed? For example, I created PR1 for a component on dev, and after testing it is all good; now we want to deploy to prod, so I created PR2 to deploy to prod. How do I gate prod with rules? GitHub environments? General repo rules? I ask because in the past I only enforced reviews, and depending on the folder, using CODEOWNERS, we would require different reviewers, but I was wondering if there is a better way these days

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I recommend looking into environment protection rules on GitHub

jose.amengual avatar
jose.amengual

when you say environment, do you mean GitHub environments?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Environment protection rules can be based on any action, so they can be as elaborate as you need, provided you stick the logic into the action
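For example, a minimal sketch of gating a job on an environment (the job and environment names are hypothetical):

jobs:
  apply-prod:
    runs-on: ubuntu-latest
    # the job waits here until the `prod` environment's protection rules pass
    environment: prod
    steps:
      - run: echo "applying to prod"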

jose.amengual avatar
jose.amengual

ok, I will keep this in mind

2024-09-21

2024-09-23

jose.amengual avatar
jose.amengual

in the case of deploying things like lambdas/function apps, etc. with Atmos, how do you approach the code deployment? Do you add all the app code to the mono repo and let TF deploy it, or run the app pipelines separately from the IaC repo?

jose.amengual avatar
jose.amengual

I have done the latter and it works somewhat when the communication between repo owners is good, but we would like to give more independence to app teams

jose.amengual avatar
jose.amengual

we were thinking of vendoring the IaC repo into the app repos and then triggering a pipeline

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Igor Rodionov @Dan Miller (Cloud Posse)

Igor Rodionov avatar
Igor Rodionov

Actually, I have yet to gain experience with lambda-based applications. We use Lambdas for some of our DevOps operations, like cleaning old records in Elasticsearch. That type of lambda has a different release engineering workflow.

But if I had to implement lambda-based application deployment, I would follow one of the 2 strategies we used with ECS.

Strategy 1.

  1. Decouple lambda configs into 2 parts: a. an IaC repo component with the consistent resources used by the lambda (IAM roles, S3 buckets, LB rules, etc.); b. application-related lambda configs and lambda source code stored in the app repo.
  2. The application repo would have a pipeline that deploys lambdas with some lambda deployment tool (for ECS, we have ecspresso). The plus of Strategy 1 is speed. Usually, special deployment tools are faster than running Terraform.

Strategy 2.

  1. Decouple lambda configs into 2 parts: a. an IaC repo component with the consistent resources used by the lambda (IAM roles, S3 buckets, LB rules, etc.); b. application-related lambda configs and lambda source code stored in the app repo.
  2. The application repo would have to prepare some artifacts (docker image, pip packages, commit configs to some specific repo, etc.), then it triggers the terraform apply pipeline that deploys the application with terraform. We used Spacelift, but GHA will work also. Pluses of that strategy: a single deployment actor, consistent deployment history, and easy recovery with Terraform. Cons: deployment would be slow because Terraform has to check the state of all resources, and deployment concurrency means deployments wait for each other in a queue, so you should decide what to do with pending deployments in the middle.
Igor Rodionov avatar
Igor Rodionov

Hope you will find something relevant in my experience, @jose.amengual

jose.amengual avatar
jose.amengual

Thanks Igor. Why do you say the deployment will be slow? When using Atmos, the workspace backend will only be related to the resource being deployed and not the whole infra, so it should not take that long, right?

Igor Rodionov avatar
Igor Rodionov

for ECS there is a concept of a task definition that actually contains the docker image and related configs. So in 99% of cases a deployment only changes the docker image version in the task configuration.

Igor Rodionov avatar
Igor Rodionov

but the terraform module for an ECS service also provisions the ECS service, task, IAM role, LB rules, and internal DNS

Igor Rodionov avatar
Igor Rodionov

so terraform checks the state of all of those resources, which is much slower than just applying the task configuration

Igor Rodionov avatar
Igor Rodionov

applying just the task configuration is what the special ECS deployment tools actually do

Igor Rodionov avatar
Igor Rodionov

I think lambda will have something similar.

jose.amengual avatar
jose.amengual

Yes, lambda will be similar; I have used SAM to deploy the lambdas and do something similar to what you suggest. I think the problem is that we want teams to learn how to stand up their infra with the shared components, but at the same time we do not want them to have to go to another repo to do so, and hopefully we can have everything in one repo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Lambda with GitHub Workflows | The Cloud Posse Reference Architecture

Deploy a Lambda functions using GitHub Workflows.

jose.amengual avatar
jose.amengual

ohhh how did I not see that

Igor Rodionov avatar
Igor Rodionov

@Erik Osterman (Cloud Posse) even I have not seen that

Paweł Rein avatar
Paweł Rein

I’m also looking into this topic, thanks for sharing! The approach with a GHA WF on the app repo end would be DRY-er if made using an org-wide reusable WF, if possible. Is there any operator way to do it? Crossplane? So that no CI/CD logic is in the app repo, only code and configs.

Igor Rodionov avatar
Igor Rodionov

Here is how we use it for our terraform modules

Paweł Rein avatar
Paweł Rein

yes, I reuse WFs, just wasn’t sure how much this one is reusable without taking a closer look

Igor Rodionov avatar
Igor Rodionov

you can have up to 4 nested levels of reusable WF


2024-09-24

RB avatar

Regarding the atmos actions, what do you folks think of an atmos-version-file input similar to the setup-python’s python-version-file input?

https://github.com/cloudposse/github-action-setup-atmos/issues/97

  - name: Setup atmos
    uses: cloudposse/github-action-setup-atmos@v2
    with:
      atmos-version-file: .atmos-version

maybe an .atmos-version file? or perhaps being able to read the asdf .tool-versions file?
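Until such an input exists, a workaround sketch is to read the file in an earlier step (assuming the .atmos-version file from the proposal above):

  - id: atmos
    run: echo "version=$(cat .atmos-version)" >> "$GITHUB_OUTPUT"

  - name: Setup atmos
    uses: cloudposse/github-action-setup-atmos@v2
    with:
      atmos-version: ${{ steps.atmos.outputs.version }}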

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, we need a better way of defining this. Very error prone if you don’t update all the places the atmos version is set

jose.amengual avatar
jose.amengual

for deploying after the merge, I think I’m doing something wrong because my action is always saying that there are no changes, but the plan on the PR says otherwise, so now I wonder, when using describe affected, do I need to find the SHA of the commit?

atmos-affected:
    if: ${{ !contains( github.event.pull_request.labels.*.name, 'no-plan') }}
    name: Determine Affected Stacks
    runs-on: ["self-hosted", "terraform"]
    steps:
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v4
        with:
          atmos-version: ${{ vars.ATMOS_VERSION }}
          atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}
          base-ref: ${{ github.event.pull_request.base.sha }}
          head-ref: ${{ github.event.pull_request.head.sha }}
jose.amengual avatar
jose.amengual
Atmos Terraform Apply | atmos

Run a terraform apply to provision changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov

Miguel Zablah avatar
Miguel Zablah

@jose.amengual if the plan works on the PR, it is probably an issue with the base-ref or head-ref. For me, what worked was to point to the outputs of the PR, so something like this:

uses: cloudposse/github-action-atmos-affected-stacks@v4
with:
  base-ref: ${{ needs.pr.outputs.base }}
  head-ref: ${{ needs.pr.outputs.head }}
Miguel Zablah avatar
Miguel Zablah
Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

Miguel Zablah avatar
Miguel Zablah

although it still has that bug

Igor Rodionov avatar
Igor Rodionov
name: 👽 Atmos Terraform Apply
run-name: 👽 Atmos Terraform Apply

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read
  issues: write
  pull-requests: write

jobs:
  pr:
    name: PR Context
    runs-on:
      - "self-hosted"
      - "amd64"
      - "common"
    steps:
      - uses: cloudposse-github-actions/get-pr@v1
        id: pr

    outputs:
      base: ${{ fromJSON(steps.pr.outputs.json).base.sha }}
      head: ${{ fromJSON(steps.pr.outputs.json).head.sha }}
      auto-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'auto-apply') }}
      no-apply: ${{ contains( fromJSON(steps.pr.outputs.json).labels.*.name, 'no-apply') }}

  atmos-affected:
    name: Determine Affected Stacks
    if: needs.pr.outputs.no-apply == 'false'
    needs: ["pr"]
    runs-on:
      - "self-hosted"
      - "amd64"
      - "common"
    steps:
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3
        with:
          base-ref: ${{ needs.pr.outputs.base }}
          head-ref: ${{ needs.pr.outputs.head }}
          atmos-version: ${{ vars.ATMOS_VERSION }}
          atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}

    outputs:
      stacks: ${{ steps.affected.outputs.matrix }}
      has-affected-stacks: ${{ steps.affected.outputs.has-affected-stacks }}
Igor Rodionov avatar
Igor Rodionov

This is an example for running on push

Igor Rodionov avatar
Igor Rodionov

This one was on PR close

Igor Rodionov avatar
Igor Rodionov
name: 👽 Atmos Terraform Apply
run-name: 👽 Atmos Terraform Apply

on:
  pull_request:
    types:
      - closed
    branches:
      - main

permissions:
  id-token: write
  contents: read
  issues: write
  pull-requests: write

jobs:
  atmos-affected:
    if: ${{ github.event.pull_request.merged && !contains( github.event.pull_request.labels.*.name, 'no-apply') }}
    name: Determine Affected Stacks
    runs-on: ["self-hosted"]
    steps:
      - id: affected
        uses: cloudposse/github-action-atmos-affected-stacks@v3
        with:
          base-ref: ${{ github.event.pull_request.base.sha }}
          atmos-version: ${{ vars.ATMOS_VERSION }}
          atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}
    outputs:
      stacks: ${{ steps.affected.outputs.matrix }}
      has-affected-stacks: ${{ steps.affected.outputs.has-affected-stacks }}

  plan-atmos-components:
    needs: ["atmos-affected"]
    if: |
      needs.atmos-affected.outputs.has-affected-stacks == 'true' && !contains(github.event.pull_request.labels.*.name, 'auto-apply')
    name: Validate plan (${{ matrix.name }})
    uses: ./.github/workflows/atmos-terraform-plan-matrix.yaml
    strategy:
      matrix: ${{ fromJson(needs.atmos-affected.outputs.stacks) }}
      max-parallel: 1 # This is important to avoid ddos GHA API
      fail-fast: false # Don't fail fast to avoid locking TF State
    with:
      stacks: ${{ matrix.items }}
      drift-detection-mode-enabled: "true"
      continue-on-error: 'true'
      atmos-version: ${{ vars.ATMOS_VERSION }}
      atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}
    secrets: inherit
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov infra-test is a private repo

jose.amengual avatar
jose.amengual

I’m doing basically the same thing using the atmos-plan matrix reusable workflow, but I’m getting a very weird error:

Binaries available at /opt/hostedtoolcache/terraform-docs/terraform-docs/v0.18.0/linux-x64
Run cloudposse/github-action-atmos-get-setting@v2
  with:
    settings: - component: loganalytics
    stack: dev
    settingsPath: settings.github.actions_enabled
    outputPath: enabled
  - component: loganalytics
    stack: dev
    settingsPath: component_info.component_path
    outputPath: component-path
  - component: loganalytics
    stack: dev
    settingsPath: atmos_cli_config.base_path
    outputPath: base-path
  - component: loganalytics
    stack: dev
    settingsPath: command
    outputPath: command
  
    process-templates: true
  env:
    ATMOS_CLI_CONFIG_PATH: /home/runner/work/repoiac/repoiac/config
  
Error: SyntaxError: Unexpected token 'a', "
atmos.yaml"... is not valid JSON
Error: SyntaxError: Unexpected token 'a', "
atmos.yaml"... is not valid JSON
    at JSON.parse (<anonymous>)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/lib/settings.ts:28:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:40:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at processMultipleSettings (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:37:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/main.ts:32:1
jose.amengual avatar
jose.amengual

keep in mind this is on push to main; the plan worked totally fine with those stack files and uses the same github-action-atmos-get-setting action version

jose.amengual avatar
jose.amengual

I have no idea where the error is and how to find it

jose.amengual avatar
jose.amengual

I switched to main to make sure atmos validate works etc and all seems to be fine

jose.amengual avatar
jose.amengual

I enabled debug on the action, etc.; it is a mystery to me so far

Igor Rodionov avatar
Igor Rodionov

@jose.amengual could you pls show me atmos.yaml config?

Igor Rodionov avatar
Igor Rodionov

is it the right path to the config, /home/runner/work/repoiac/repoiac/config?

jose.amengual avatar
jose.amengual
# CLI config is loaded from the following locations (from lowest to highest priority):
# system dir (`/usr/local/etc/atmos` on Linux, `%LOCALAPPDATA%/atmos` on Windows)
# home dir (~/.atmos)
# current directory
# ENV vars
# Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star `**` is supported)
# <https://en.wikipedia.org/wiki/Glob_(programming)>

# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: "../"

components:
  terraform:
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_BASE_PATH` ENV var, or `--terraform-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE` ENV var
    apply_auto_approve: false
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT` ENV var, or `--deploy-run-init` command-line argument
    deploy_run_init: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE` ENV var, or `--init-run-reconfigure` command-line argument
    init_run_reconfigure: true
    # Can also be set using `ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE` ENV var, or `--auto-generate-backend-file` command-line argument
    auto_generate_backend_file: true
  helmfile:
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_BASE_PATH` ENV var, or `--helmfile-dir` command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH` ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN` ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using `ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN` ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
    - "**/**"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
    - "catalog/**/*"
    - "**/globals.yaml"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{namespace}-{environment}"

workflows:
  # Can also be set using `ATMOS_WORKFLOWS_BASE_PATH` ENV var, or `--workflows-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  verbose: false
  colors: true

# Custom CLI commands
commands:
  - name: tf
    description: Execute terraform commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # `valueCommand` is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute terraform commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: az
    description: Executes Azure CLI commands (not implemented)
    steps:
      - echo az...
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Saying Hello world...
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        steps:
          - echo Playing ping-pong...
          - echo pong
  - name: setup
    description: Atmos setup commands
    # subcommands
    commands:
      - name: check
        description: This command checks whether atmos is setup correctly
        steps:
          - echo "Looks good."
  - name: clean
    description: Removes any .terraform, .terraform.lock.hcl, .planfile, .tfvars.json files left over from an atmos terraform command
    steps: |
      patterns=(".terraform" ".terraform.lock.hcl" "*.planfile" "*.tfvars.json" "backend.tf.json")
      for pattern in "${patterns[@]}"; do
        find . -name "$pattern" -exec rm -rf {} +
      done

# Integrations
integrations:
  # Atlantis integration
  # <https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html>
  atlantis:
    # Path and name of the Atlantis config file `atlantis.yaml`
    # Supports absolute and relative paths
    # All the intermediate folders will be created automatically (e.g. `path: /config/atlantis/atlantis.yaml`)
    # Can be overridden on the command line by using `--output-path` command-line argument in `atmos atlantis generate repo-config` command
    # If not specified (set to an empty string/omitted here, and set to an empty string on the command line), the content of the file will be dumped to `stdout`
    # On Linux/macOS, you can also use `--output-path=/dev/stdout` to dump the content to `stdout` without setting it to an empty string in `atlantis.path`
    path: "atlantis.yaml"

    # Config templates
    # Select a template by using the `--config-template <config_template>` command-line argument in `atmos atlantis generate repo-config` command
    config_templates:
      config-1:
        version: 3
        automerge: true
        delete_source_branch_on_merge: true
        parallel_plan: true
        parallel_apply: true
        allowed_regexp_prefixes:
          - automation/
          - dev/
          - qa/
          - staging/
          - prod/

    # Project templates
    # Select a template by using the `--project-template <project_template>` command-line argument in `atmos atlantis generate repo-config` command
    project_templates:
      project-1:
        # generate a project entry for each component in every stack
        name: "{namespace}-{environment}-{component}"
        workspace: "{workspace}"
        dir: "{component-path}"
        terraform_version: v1.2.9
        delete_source_branch_on_merge: true
        apply_requirements:
          - "approved"
        autoplan:
          enabled: false
          when_modified:
            - "**/*.tf"
            - "$PROJECT_NAME.tfvars.json"

    # Workflow templates
    # <https://www.runatlantis.io/docs/custom-workflows.html#custom-init-plan-apply-commands>
    # <https://www.runatlantis.io/docs/custom-workflows.html#custom-run-command>
    # Select a template by using the `--workflow-template <workflow_template>` command-line argument in `atmos atlantis generate repo-config` command
    workflow_templates:
      workflow-1:
        plan:
          steps:
            - run: cp backends/backend.tf .
            - run: terraform init --reconfigure -input=false -backend-config=backends/$PROJECT_NAME.backend
            # When using workspaces, you need to select the workspace using the $WORKSPACE environment variable
            - run: terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
            # You must output the plan using `-out $PLANFILE` because Atlantis expects plans to be in a specific location
            - run: terraform plan -input=false -refresh -out $PLANFILE -var-file varfiles/$PROJECT_NAME.tfvars.json
        apply:
          steps:
            - run: terraform apply $PLANFILE

# `Go` templates in Atmos manifests
# <https://atmos.tools/core-concepts/stacks/templates>
# <https://pkg.go.dev/text/template>
templates:
  settings:
    enabled: true
    evaluations: 1
    # <https://masterminds.github.io/sprig>
    sprig:
      enabled: true
    # <https://docs.gomplate.ca>
    gomplate:
      enabled: true
      timeout: 5
      # <https://docs.gomplate.ca/datasources>
      datasources: {}
jose.amengual avatar
jose.amengual
apply-atmos-components:
    needs: ["atmos-affected", "pr"]
    if: |
      needs.atmos-affected.outputs.has-affected-stacks == 'true' && needs.pr.outputs.auto-apply != 'true'
    name: Validate plan (${{ matrix.stack }})
    uses: ./.github/workflows/atmos-terraform-apply-matrix.yaml
    strategy:
      matrix: ${{ fromJson(needs.atmos-affected.outputs.stacks) }}
      max-parallel: 1 # This is important to avoid ddos GHA API
      fail-fast: false # Don't fail fast to avoid locking TF State
    with:
      stacks: ${{ matrix.items }}
      atmos-version: 1.88.0
      atmos-config-path: "./config"
      sha: ${{ needs.pr.outputs.head }}
    secrets: inherit
jose.amengual avatar
jose.amengual

pr and atmos-affected run fine

jose.amengual avatar
jose.amengual

my apply matrix

name: Atmos Terraform Apply Matrix (Reusable)
run-name: Atmos Terraform Apply Matrix (Reusable)

on:
  workflow_call:
    inputs:
      stacks:
        description: "Stacks"
        required: true
        type: string
      sha:
        description: "Commit SHA to apply. Default: github.sha"
        type: string
        required: false
        default: "${{ github.event.pull_request.head.sha }}"
      atmos-version:
        description: The version of atmos to install
        required: false
        default: ">= 1.63.0"
        type: string
      atmos-config-path:
        description: The path to the atmos.yaml file
        required: true
        type: string

permissions:
  id-token: write # This is required for requesting the JWT
  contents: read  # This is required for actions/checkout

jobs:
  atmos-apply:
    if: ${{ inputs.stacks != '{include:[]}' }}
    name: ${{ matrix.stack_slug }}
    runs-on: ubuntu-latest
    environment: ${{ matrix.stack }}
    strategy:
      max-parallel: 10
      fail-fast: false # Don't fail fast to avoid locking TF State
      matrix: ${{ fromJson(inputs.stacks) }}
    ## Avoid running the same stack in parallel mode (from different workflows)
    concurrency:
      group: ${{ matrix.stack_slug }}
      cancel-in-progress: false
    steps:
      - id: azure-login
        name: Azure Login
        uses: Azure/login@v2
        with:
            client-id: ${{ secrets.ARM_CLIENT_ID }}
            tenant-id: ${{ secrets.ARM_TENANT_ID }}
            subscription-id: ${{ secrets.ARM_SUBSCRIPTION_ID }}
            enable-AzPSSession: true

      - name: Checkout code
        uses: actions/checkout@v4

      - name: Apply Atmos Component
        uses: ./.github/workflows/github-action-atmos-terraform-apply
        with:
          component: ${{ matrix.component }}
          stack: ${{ matrix.stack }}
          sha: ${{ inputs.sha }}
          atmos-version: ${{ inputs.atmos-version }}
          atmos-config-path: ${{ inputs.atmos-config-path }}
jose.amengual avatar
jose.amengual

I have a local copy of github-action-atmos-terraform-apply where I added support for azureblob, but the PlanGet part is way down the line on the composite action

Igor Rodionov avatar
Igor Rodionov

@jose.amengual why do I not see the gitops integration section in your atmos.yaml? https://github.com/cloudposse/github-action-atmos-terraform-apply?tab=readme-ov-file#config

jose.amengual avatar
jose.amengual

do I need that?

Igor Rodionov avatar
Igor Rodionov

absolutely

jose.amengual avatar
jose.amengual

for planning this is not needed, right?

Igor Rodionov avatar
Igor Rodionov

actually it does

Igor Rodionov avatar
Igor Rodionov

I do not know why plan works

jose.amengual avatar
jose.amengual

because for plan it works just fine

Igor Rodionov avatar
Igor Rodionov

weird

jose.amengual avatar
jose.amengual

so the atmos configs are passed to the actions?

Igor Rodionov avatar
Igor Rodionov

yea.

Igor Rodionov avatar
Igor Rodionov

probably you are using an old plan action

jose.amengual avatar
jose.amengual

so I guess this works because I’m passing all the values

jose.amengual avatar
jose.amengual

I’m using the latest version

Igor Rodionov avatar
Igor Rodionov

Please note! This GitHub Action only works with atmos >= 1.63.0. If you are using atmos < 1.63.0 please use v1 version of this action.

Igor Rodionov avatar
Igor Rodionov

that’s for plan

jose.amengual avatar
jose.amengual

I’m using 1.88.0

Igor Rodionov avatar
Igor Rodionov

v1 did not use config

Igor Rodionov avatar
Igor Rodionov

what version of plan action do you use?

jose.amengual avatar
jose.amengual

I cloned the action last week and modified it to add azure blob

jose.amengual avatar
jose.amengual

I will create a PR to add it after this is solved

Igor Rodionov avatar
Igor Rodionov

@jose.amengual have you seen these PRs?

jose.amengual avatar
jose.amengual

I’m basically doing this:

    - name: Store New Plan
      if: ${{ steps.atmos-plan.outputs.error == 'false' }}
      uses: cloudposse/github-action-terraform-plan-storage@v1
      id: store-plan
      with:
        action: storePlan
        commitSHA: ${{ inputs.sha }}
        planPath: ${{ steps.vars.outputs.plan_file }}
        component: ${{ inputs.component }}
        stack: ${{ inputs.stack }}
        tableName: ${{ steps.config.outputs.terraform-state-table }}
        bucketName: ${{ steps.config.outputs.terraform-state-bucket }}
        planRepositoryType: ${{ inputs.planRepositoryType }}
        metadataRepositoryType: ${{ inputs.metadataRepositoryType }}
        blobAccountName: ${{ inputs.blobAccountName }}
        blobContainerName: ${{ inputs.blobContainerName }}
        cosmosContainerName: ${{ inputs.cosmosContainerName }}
        cosmosDatabaseName: ${{ inputs.cosmosDatabaseName }}
        cosmosEndpoint: ${{ inputs.cosmosEndpoint }}
jose.amengual avatar
jose.amengual

in my case I will not be able to use this section:

artifact-storage:
        region: us-east-2
        bucket: cptest-core-ue2-auto-gitops
        table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
Igor Rodionov avatar
Igor Rodionov

the PRs I sent are exactly what you want.

jose.amengual avatar
jose.amengual

ohhh I see

jose.amengual avatar
jose.amengual

ok

jose.amengual avatar
jose.amengual

I see it now

Igor Rodionov avatar
Igor Rodionov

with the way you use plan storage, I guess it would try to use AWS, as far as I know

jose.amengual avatar
jose.amengual

ok, let me add that to atmos.yaml and see if this works

jose.amengual avatar
jose.amengual

so cloudposse/github-action-atmos-get-setting reads the atmos gitops integrations section?

jose.amengual avatar
jose.amengual

the other problem with this integration section :

integrations:
  github:
    gitops:
      opentofu-version: 1.7.3  
      terraform-version: 1.5.2
      infracost-enabled: false
      artifact-storage:
        region: us-east-2
        bucket: cptest-core-ue2-auto-gitops
        table: cptest-core-ue2-auto-gitops-plan-storage
        role: arn:aws:iam::xxxxxxxxxxxx:role/cptest-core-ue2-auto-gitops-gha
      role:
        plan: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
        apply: arn:aws:iam::yyyyyyyyyyyy:role/cptest-core-gbl-identity-gitops
      matrix:
        sort-by: .stack_slug
        group-by: .stack_slug | split("-") | [.[0], .[2]] | join("-")

is that it assumes the same integration for each stack

jose.amengual avatar
jose.amengual

which we do not want to do

Igor Rodionov avatar
Igor Rodionov

yea

Igor Rodionov avatar
Igor Rodionov

you have

integrations:
  atlantis:
Igor Rodionov avatar
Igor Rodionov

just add gitops to the same integrations section

jose.amengual avatar
jose.amengual

so I could leave the storage settings out and grab those from the github environment secrets

Igor Rodionov avatar
Igor Rodionov

• github.gitops

Igor Rodionov avatar
Igor Rodionov

probably. depends on what you changed in your actions

jose.amengual avatar
jose.amengual

when are your PRs going to be merged?

Igor Rodionov avatar
Igor Rodionov

Waiting for @Erik Osterman (Cloud Posse) to approve and @Matt Calhoun to review

jose.amengual avatar
jose.amengual

I added the section for now and the pipeline is running, so we will see how it goes

jose.amengual avatar
jose.amengual

I was reading your code, the reason why my action works is because I’m doing this:

        planRepositoryType: ${{ inputs.planRepositoryType }}
        metadataRepositoryType: ${{ inputs.metadataRepositoryType }}
        blobAccountName: ${{ inputs.blobAccountName }}
        blobContainerName: ${{ inputs.blobContainerName }}
        cosmosContainerName: ${{ inputs.cosmosContainerName }}
        cosmosDatabaseName: ${{ inputs.cosmosDatabaseName }}
        cosmosEndpoint: ${{ inputs.cosmosEndpoint }}
jose.amengual avatar
jose.amengual

so I’m using the inputs and not the integrations.github config

jose.amengual avatar
jose.amengual

that is why it works on plan

jose.amengual avatar
jose.amengual

and I did the same in apply so it should work the same

jose.amengual avatar
jose.amengual

same thing:

Received 4189464 of 4189464 (100.0%), 4.0 MBs/sec
Received 6189241 of 6189241 (100.0%), 5.9 MBs/sec
Run cloudposse/github-action-atmos-get-setting@v2
  
Error: SyntaxError: Unexpected token 'a', "
atmos.yaml"... is not valid JSON
Error: SyntaxError: Unexpected token 'a', "
atmos.yaml"... is not valid JSON
    at JSON.parse (<anonymous>)
    at getSetting (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/lib/settings.ts:28:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:40:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:38:1
    at processMultipleSettings (/home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/useCase/process-multiple-settings.ts:37:1)
    at /home/runner/work/_actions/cloudposse/github-action-atmos-get-setting/v2/src/main.ts:32:1
Igor Rodionov avatar
Igor Rodionov

could you run the settings command locally?

Igor Rodionov avatar
Igor Rodionov

in the GitHub logs there should be something like

jose.amengual avatar
jose.amengual

how do I do that?

Igor Rodionov avatar
Igor Rodionov

atmos describe component

Igor Rodionov avatar
Igor Rodionov
atmos describe component ${component} -s ${stack} --format=json
Igor Rodionov avatar
Igor Rodionov

put component and stack name here
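
For example (the component and stack names here are hypothetical):

atmos describe component vpc -s sandbox --format=json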

jose.amengual avatar
jose.amengual

one sec

jose.amengual avatar
jose.amengual

no errors, and I got json output

Igor Rodionov avatar
Igor Rodionov

could you send it here?

jose.amengual avatar
jose.amengual

PM

jose.amengual avatar
jose.amengual

for those with similar issues, the problem was that ATMOS_BASE_PATH: "./" was not defined in the reusable action

jose.amengual avatar
jose.amengual

It was in all the other actions, but since reusable actions do not inherit the ENV from the action calling them, it was not set
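
A minimal sketch of the fix, assuming the reusable workflow is invoked via workflow_call (the file name, job, and steps are illustrative):

# .github/workflows/atmos-plan-reusable.yml (hypothetical)
name: atmos-plan-reusable
on:
  workflow_call: {}

# reusable workflows do not inherit the caller's env, so set it here
env:
  ATMOS_BASE_PATH: "./"

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...atmos plan steps...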

jose.amengual avatar
jose.amengual

I still think @Matt Calhoun @Erik Osterman (Cloud Posse) that <https://github.com/cloudposse/github-action-atmos-get-setting> could potentially say something along the lines of "Could not parse atmos.yaml. Make sure base_path is correctly set and the atmos.yaml file is accessible" instead of `SyntaxError: Unexpected token 'a', "atmos.yaml"... is not valid JSON`, because the latter implies that it was able to find the file but that it is somehow malformed

jose.amengual avatar
jose.amengual

and please review @Igor Rodionov’s PRs, since that will allow support for Azure Blob Storage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual TL;DR do we need to update anything in our documentation?

jose.amengual avatar
jose.amengual

I believe so, but I would like to test Igor’s PRs once they are merged, and I can make a PR if there are doc changes needed (the apply action needs the same)

2024-09-25

2024-09-26

Christof Bruyland avatar
Christof Bruyland

I’m currently testing the GitHub actions for Atmos, and they look and work very nicely. One thing I noticed is that they work on the apply-after-merge strategy. Is there a way to use them as apply-before-merge?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

technically, nothing should be prohibiting you from doing that as the workflows are entirely in your control. just change the triggers and modify the workflows to accommodate that pattern.
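
A sketch of the trigger change, assuming the apply workflow currently runs on push to the default branch (the label-based condition is just one option):

on:
  pull_request:
    types: [opened, synchronize, labeled]   # e.g. apply when a label is added
    branches: [main]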

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it would be interesting if you shared whatever outcome you have. If you get stuck, let us know.

Igor Rodionov avatar
Igor Rodionov

@Christof Bruyland, I do not see any hidden pitfalls in that approach. The GHA for Atmos should work smoothly, but I have never tested that pattern.

Christof Bruyland avatar
Christof Bruyland

Thanks, I will try it.

Csenge Papp avatar
Csenge Papp

Hello! I am trying out atmos for the first time. I want to use terraform-aws-components but I encountered an issue I can’t solve. I wanted to use the account and account-map modules, but I don’t have access to the organization’s root account. Is there any solution for that? I don’t need to create an organization or accounts, just a simple VPC. Thanks for the help!

Michal Tomaszek avatar
Michal Tomaszek

hey, I got into the same position recently. see this thread for reference: https://sweetops.slack.com/archives/CB6GHNLG0/p1724741636735349

hey, I was playing with the VPC component from the terraform-aws-components repo yesterday. it looks like account-map is a prerequisite for it. account-map on the other hand needs the account component. is there some workaround to use just the VPC component without account-map?

Csenge Papp avatar
Csenge Papp

thank you! That is exactly the info I was looking for. Did you end up using the static remote state workaround?

Michal Tomaszek avatar
Michal Tomaszek

my use case was related to a single account in Oracle Cloud, not AWS, so I removed the profile, the dynamic block, and the iam_roles module from the providers.tf file entirely.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Csenge Papp did you get unblocked?

jose.amengual avatar
jose.amengual

Now that my actions are running, I have questions about these settings

integrations:
  github:
    gitops:
      opentofu-version: 1.7.3  
      terraform-version: 1.5.2
      infracost-enabled: false
      artifact-storage:
        region: us-east-2
        bucket: cptest-core-ue2-auto-gitops
        table: cptest-core-ue2-auto-gitops-plan-storage

is it possible to override these settings per stack? Since this is at the atmos.yaml level, I do not know if they can be set per stack instead of globally

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor Rodionov bump

Igor Rodionov avatar
Igor Rodionov

@jose.amengual no. the settings are global

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual if your intent is to vary the Terraform version, that is still possible, but it will require your workflow to install the other Terraform versions, since the GitHub action supports installing only one version. The version of Terraform is configurable per stack, provided that the command called is different, meaning the terraform binary would be at a different path between versions; then specify the command for the version you want.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Configure OpenTofu | atmos

Atmos natively supports OpenTofu, similar to the way it supports Terraform. It’s compatible with every version of OpenTofu, designed to work with multiple different versions of it concurrently, and can even work alongside HashiCorp Terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See how command is overridden

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And that is possible per stack
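
A sketch of what the per-component override could look like in a stack manifest (component names, binary paths, and versions are illustrative assumptions):

components:
  terraform:
    vpc:
      # this component is pinned to one Terraform binary/version
      command: /usr/local/terraform/1.5.2/terraform
    eks:
      # this component uses a different binary (e.g. OpenTofu)
      command: /usr/local/tofu/1.7.3/tofu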

jose.amengual avatar
jose.amengual

no, the intention is to pass the settings to the GitHub action per environment, so that each stack uses its own blob/bucket and dynamo/cosmos to store the action’s plans in, instead of having one for all the environments

jose.amengual avatar
jose.amengual

I’m treating the temporary plan bucket storage just like the state; too much sensitive stuff in it

jose.amengual avatar
jose.amengual

I actually thought about running a step to delete the plan from the storage after the PR is merged

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I actually thought about running a step to delete the plan from the storage after the PR is merged
Fwiw, it was an audit requirement for our customer to preserve those

jose.amengual avatar
jose.amengual

I added a secondary step in my version of the action (I forked it) to push the summary as a PR comment using tfcmt, and I created https://github.com/suzuki-shunsuke/tfcmt/issues/1412

#1412 Output to file and comment at the same time.

Feature Overview

We write the output to a file and then pass it to the summary output and other steps that happen later, but we also need to comment on the PR. Since `--output ${{ steps.vars.outputs.summary_file }}` disables the PR comment, we had to run the plan twice, which is sometimes really slow.

It would be nice to be able to output to the file and comment on the PR at the same time, or even pipe the plan in from a file and then comment on the PR (although that’s not as nice and requires two commands).

reference https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L201 (this one only outputs to a file, so I had to duplicate the step to comment on the PR)

Why is the feature needed?

to comment to the PR and output to a file for further processing

note

nothing

jose.amengual avatar
jose.amengual

right now, I’m running two tfcmt commands, which will run atmos plan twice, so it’s not really fast
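
Roughly what the duplicated run looks like today (a sketch; the component/stack names and the summary file path are hypothetical, and the --output flag is the one quoted in the issue above):

# once to capture the summary to a file (which suppresses the PR comment)
tfcmt plan --output plan_summary.md -- atmos terraform plan vpc -s sandbox
# and again just to post the PR comment
tfcmt plan -- atmos terraform plan vpc -s sandbox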

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You saw we use GitHub Job Summaries instead? In my opinion, they are better because they have a one-meg limit and do not spam every follower of a repo the way a comment does. Sometimes those comments get added on every commit, and that can get noisy.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we discussed supporting comments. I know this is a conventional way of doing it, and it’s a matter of opinion, so if you would like to open a PR to our action so that it also supports comments, I think that might be OK. I would like to get Igor’s thoughts on it in case there’s some gotcha.

jose.amengual avatar
jose.amengual

yes, but I’m an Atlantis guy so I want comments

jose.amengual avatar
jose.amengual

and yes, it will have to be flagged and disabled by default

jose.amengual avatar
jose.amengual

tfcmt supports updating the same comment, so it will not add a new comment every time there is a push; it will just update the first one

jose.amengual avatar
jose.amengual

which makes it much cleaner

Igor M avatar

I am testing out running cloudposse/github-action-atmos-terraform-plan@v3 for the first time via a manual workflow run, and I’m getting the workflow to the following point: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L118. At this point, it seems to terminate the job with “result returned successfully”. Here is my step:

      - name: Plan Atmos Component
        uses: cloudposse/github-action-atmos-terraform-plan@v3
        with:
          component: ${{ inputs.component }}
          stack: ${{ inputs.stack }}
          sha: ${{ inputs.sha }}
          atmos-version: ${{ vars.ATMOS_VERSION }}
          atmos-config-path: ${{ vars.ATMOS_CONFIG_PATH }}

Any ideas why it stops here?

    - name: Get atmos settings
Igor M avatar

Looks like I may be missing: settings.github.actions_enabled

    - name: Get atmos settings
jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

you need this in the component:

settings:
  github:
    actions_enabled: true
jose.amengual avatar
jose.amengual

you can make it global to the whole stack too
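
A sketch of the stack-level variant (component names are illustrative; the stack-level settings block is deep-merged into every component in the stack):

settings:
  github:
    actions_enabled: true

components:
  terraform:
    vpc: {}
    dynamodb: {}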

Igor M avatar

Yup, just did that, and it’s working

Igor M avatar

Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @jose.amengual !


2024-09-27

Brett Au avatar
Brett Au

I would like to expose delete protection support in the dynamodb component: https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dynamodb

https://github.com/cloudposse/terraform-aws-dynamodb/blob/0.36.0/variables.tf#L184-L188

I have a branch locally; is there a process for submitting these ideas for review?

Michael avatar
Michael

I think this proposal is great! Feel free to review the Contributing section of the Terraform AWS Components repository (https://github.com/cloudposse/terraform-aws-components/tree/main?tab=readme-ov-file#-contributing) and let me know if you need help with a pull request or the general workflow. If you don’t feel comfortable opening the PR, feel free to reach out and I can help as needed

Brett Au avatar
Brett Au

Thank you! No argument about doing a PR; I have a branch locally but can’t push it up, so I figured there is a process. Thank you again

Brett Au avatar
Brett Au
#1118 feat: support delete protection for dynamodb

what

terraform-aws-dynamodb v0.36.0 supports delete protection on the table. This pull request exposes that upstream variable.

why

Delete-safe DynamoDB tables in the dynamodb component

references

https://github.com/cloudposse/terraform-aws-dynamodb/blob/0.36.0/variables.tf#L184-L188
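
For reference, consuming the new input from an Atmos stack could look like this (a sketch, assuming the component keeps the upstream variable name deletion_protection_enabled):

components:
  terraform:
    dynamodb:
      vars:
        deletion_protection_enabled: true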

Brett Au avatar
Brett Au

Let me know if I made any mistakes

Michael avatar
Michael

Looks good to me! Feel free to post it in the pr-reviews channel as well, since I don’t believe I have approval authority on this one

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov @Ben Smith (Cloud Posse) @Dan Miller (Cloud Posse)

jose.amengual avatar
jose.amengual

another interesting issue with the actions:

atmos terraform plan github-actions -s sandbox
Error: Failed to select workspace: EOF
 exit status 1
jose.amengual avatar
jose.amengual

if I run atmos locally, no issues

jose.amengual avatar
jose.amengual

maybe this is an atmos caveat more than the action; it is failing because of this:

Initializing the backend...

The currently selected workspace (dev-github-actions-state) does not exist.
  This is expected behavior when the selected workspace did not have an
  existing non-empty state. Please enter a number to select a workspace:
  
  1. default
  2. sandbox
  3. sandbox-github-actions-state
  Enter a value: Initializing modules...
╷
│ Error: Failed to select workspace: EOF
│ 

jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse)

jose.amengual avatar
jose.amengual

@Igor Rodionov

jose.amengual avatar
jose.amengual

the plan action does caching

    - name: Cache .terraform
      id: cache
      uses: actions/cache@v4
      if: ${{ fromJson(steps.component.outputs.settings).enabled }}
      with:
        path: |
          ./${{ steps.vars.outputs.component_path }}/.terraform
        key: ${{ steps.vars.outputs.cache-key }}
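
If the stale workspace selection really does come from the restored cache, one workaround sketch is to drop the cached workspace marker before planning (this is an assumption about the cause, not the action's actual fix; the step name is made up and the path expression mirrors the cache step above):

    - name: Clear cached workspace selection
      shell: bash
      run: rm -f "./${{ steps.vars.outputs.component_path }}/.terraform/environment"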
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@jose.amengual hm.. Nice catch. I will fix that today


2024-09-30

Dave Barrett avatar
Dave Barrett

Greetings again,

I have a question about atmos vendored terraform module versions

If I vendor a solution like this:

# -------- https://github.com/cloudposse/terraform-aws-components/tree/main/modules/lambda
  - component: "lambda"
    source: "github.com/cloudposse/terraform-aws-components.git//modules/lambda?ref={{.Version}}"
    version: "1.498.0"
    targets: ["components/terraform/lambda/{{.Version}}"]
    tags:
    - lambda

This results in the lambda component being imported to:

components\terraform\lambda\1.498.0

In order to use this new module, I’ve added the version to the component:

components:
....
    lambda:
      metadata:
        component: lambda/1.498.0

But this results in a new workspace_key_prefix for the component

{
   "terraform": {
      "backend": {
         "s3": {
...
            "workspace_key_prefix": "lambda-1.498.0"
         }
      }
   }
}

Is there a way to tell atmos to use lambda version 1.498.0 without creating a new workspace?

How do you generally approach this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set it to a particular value in:

  terraform:
    lambda:
      # Optional backend configuration for the component
      backend:
        s3:
          workspace_key_prefix: lambda
Dave Barrett avatar
Dave Barrett

Is this generally what you recommend? Or do you recommend workspace per version + migrate?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can override both workspace_key_prefix and terraform_workspace - if you set them to specific values, then even if you use another folder in components/terraform, or use a different Atmos component name for the component, it will use the same workspace and key prefix

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Is this generally what you recommend? Or do you recommend workspace per version + migrate?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it depends. If you want a clean config, you can use one workspace per version and migrate the current one

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it’s a one-time “fix”, you can override it in the YAML config (and add a comment describing why it was done)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to use the same Atmos component name and the same TF workspace, but many TF component versions, you can specify values for workspace_key_prefix and terraform_workspace in the YAML config and always use those. It will allow you to deploy the same Atmos component into the same TF workspace while using different versions of the TF component
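
A sketch combining both pins in one component definition (values are illustrative):

components:
  terraform:
    lambda:
      metadata:
        # point at the vendored version's folder
        component: lambda/1.498.0
        # pin the TF workspace so it does not change with the folder name
        terraform_workspace: lambda
      backend:
        s3:
          # pin the state key prefix for the same reason
          workspace_key_prefix: lambda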

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to be able to provision more than one Atmos component using different versions of the TF code at the same time, then you can create Atmos components with different names and point them to different versions of the TF component

Dave Barrett avatar
Dave Barrett

kk thanks for your help.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

np
