#atmos (2024-09)

2024-09-02

github3 avatar
github3
05:33:07 PM

Update the Go YAML lib from [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) to [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3). Support YAML v1.2 (latest version) @aknysh (#690)

what

• Update the Go YAML lib from [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) to [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)
• Support YAML v1.2 (latest version)
• Support YAML explicit typing (explicit typing is denoted with a tag using the exclamation point (“!”) symbol); a sketch follows below
• Improve the code, e.g. add YAML wrappers in one yaml_utils file (which imports [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)) to control all YAML marshaling and unmarshaling from one place

why

[gopkg.in/yaml.v3](http://gopkg.in/yaml.v3)

The main differences between [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) and [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) include enhancements in functionality, bug fixes, and improvements in performance. Here’s a summary of key distinctions:

1. Better Conformance to YAML 1.2 Specification:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) offers improved support for the YAML 1.2 specification. This includes better handling of complex YAML features such as core schema, block styles, and anchors.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) is more aligned with YAML 1.1, meaning it might not fully support some of the YAML 1.2 features.

2. Node API Changes:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) introduced a new Node API, which provides more control and flexibility over parsing and encoding YAML documents. This API is more comprehensive and allows for detailed inspection and manipulation of YAML content.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has a simpler node structure and API, which might be easier to use for simple use cases but less powerful for advanced needs.

3. Error Handling:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) offers improved error messages and better context for where an error occurs during parsing. This makes it easier to debug and correct YAML syntax errors.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has less detailed error reporting, which can make debugging more challenging.

4. Support for Line and Column Numbers:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) includes support for tracking line and column numbers of nodes, which can be useful when dealing with large or complex YAML files (see the sketch after this list).
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) does not provide this level of detail in terms of tracking where nodes are located within the YAML document.

5. Performance:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) has various performance improvements, particularly in the encoding and decoding process. However, these improvements might not be significant in all scenarios.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) might be slightly faster in certain cases, particularly when dealing with very simple YAML documents, due to its simpler feature set.

6. Deprecation of Legacy Functions:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) deprecates some older functions that were available in v2, encouraging developers to use more modern and efficient alternatives.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) retains these older functions, which may be preferred for backward compatibility in some projects.

7. Anchors and Aliases:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) has better handling of YAML anchors and aliases, making it more robust in scenarios where these features are heavily used.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) supports anchors and aliases but with less robustness and flexibility.

8. API Changes and Compatibility:

• [gopkg.in/yaml.v3](http://gopkg.in/yaml.v3) introduces some API changes that are not backward-compatible with v2. This means that upgrading from v2 to v3 might require some code changes.
• [gopkg.in/yaml.v2](http://gopkg.in/yaml.v2) has been widely used and is stable, so it may be preferable for projects where stability and long-term support are critical.

YAML v1.2

YAML v1.1 and YAML v1.2 differ in several key aspects, particularly in terms of specification, syntax, and data type handling. Here’s a breakdown of the most significant differences:

1. Specification and Goals:

• YAML 1.1 was designed with some flexibility in its interpretation of certain constructs, aiming to be a human-readable data serialization format that could also be easily understood by machines.
• YAML 1.2 was aligned more closely with the JSON specification, aiming for better interoperability with JSON and standardization. YAML 1.2 is effectively a superset of JSON.

2. Boolean Values:

• YAML 1.1 has a wide range of boolean literals, including y, n, yes, no, on, off, true, and false. This flexibility could sometimes lead to unexpected interpretations.
• YAML 1.2 standardizes boolean values to true and false, aligning with JSON. This reduces ambiguity and ensures consistency. (A sketch of this difference in the two Go libs follows the summary below.)

3. Integers with Leading Zeros:

• YAML 1.1 interprets integers with leading zeros as octal (base-8) numbers. For example, 012 would be interpreted as 10 in decimal.
• YAML 1.2 no longer interprets numbers with leading zeros as octal. Instead, they are treated as standard decimal numbers, which aligns with JSON. This change helps avoid confusion.

4. Null Values:

• YAML 1.1 allows a variety of null values, including null, ~, and empty values (e.g., an empty string).
• YAML 1.2 standardizes the null value to null (or an empty value), aligning with JSON’s null representation.

5. Tag Handling:

• YAML 1.1 uses an unquoted !! syntax for tags (e.g., !!str for a string). The tag system is more complex and includes non-standard tags that can be confusing.
• YAML 1.2 simplifies tag handling and uses a more JSON-compatible syntax, with less emphasis on non-standard tags. Tags are optional and less intrusive in most use cases.

6. Floating-Point Numbers:

• YAML 1.1 supports special floating-point values like .inf, -.inf, and .nan with a dot notation.
• YAML 1.2 keeps these dot-notation representations in its core schema. JSON itself has no standard representation for infinities or NaN, so this is one place where YAML 1.2 remains a superset of JSON rather than aligning with it.

7. Direct JSON Compatibility:

YAML 1.2 is designed to be a strict superset of JSON, meaning any valid JSON document is also a valid YAML 1.2 document. This was not the case in YAML 1.1, where certain JSON documents could be interpreted differently.

8. Indentation and Line Breaks:

• YAML 1.1 was flexible about line breaks and indentation, which sometimes led to ambiguities in how line breaks and whitespace were interpreted.
• YAML 1.2 introduced more consistent handling, with clearer rules that reduce the potential for misinterpretation of line breaks and indentation.

9. Miscellaneous Syntax Changes:

YAML 1.2 also introduced some syntax changes for better clarity and alignment with JSON. For instance, YAML 1.2 dropped support for sexagesimal (base-60) integers such as 3:25:45, which YAML 1.1 interpreted as a number.

10. Core Schema vs. JSON Schema:

• YAML 1.2 introduced the concept of schemas, particularly the Core schema, which aims to be YAML’s native schema, and the JSON schema, which strictly follows JSON’s data types and structures.
• YAML 1.1 did not have this formal schema distinction, leading to more flexible but sometimes less predictable data handling.

Summary:

• YAML 1.2 is more standardized, consistent, and aligned with JSON, making it more predictable and easier to interoperate with JSON-based systems.
• YAML 1.1 offers more flexibility and a wider range of literal values, but this flexibility can sometimes lead to ambiguities and unexpected behavior.

references

• https://yaml.org/spec/1.2.2
• https://yaml.org/spec/1.1
• <https://pkg.go.dev/gopkg.in/yaml…


2024-09-04

Miguel Zablah avatar
Miguel Zablah

Hey I’m having an issue when using the github actions to apply atmos terraform where I get this error:

Error: invalid character '\x1b' looking for beginning of value
Error: Process completed with exit code 1.

the tf actually applies correctly but this error always comes up, any idea why this is happening?

this is the action I use: https://github.com/cloudposse/github-action-atmos-terraform-apply/tree/main

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Igor Rodionov

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah could you pls send me full github actions log?

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov I will send it via DM

Igor Rodionov avatar
Igor Rodionov

could you try to run locally

 atmos terraform output iam --stack data-gbl-dev --skip-init -- -json
Igor Rodionov avatar
Igor Rodionov

Is there anything suspicious in the output

Igor Rodionov avatar
Igor Rodionov

?

Miguel Zablah avatar
Miguel Zablah

1 sec

Miguel Zablah avatar
Miguel Zablah

nop it looks good

Igor Rodionov avatar
Igor Rodionov

is it correct json?

Miguel Zablah avatar
Miguel Zablah

yes it looks like correct json, I will run jq to see if it fails there but it looks good

Miguel Zablah avatar
Miguel Zablah

I see the error I think

Miguel Zablah avatar
Miguel Zablah

jq type fails

Miguel Zablah avatar
Miguel Zablah

it’s bc that cmd will still echo the switch context

Miguel Zablah avatar
Miguel Zablah

and then it will give the json

Miguel Zablah avatar
Miguel Zablah

for example:

Switched to workspace "data-gbl-dev-iam".
{
...
}
Igor Rodionov avatar
Igor Rodionov

Hm.. what version of atmos are you using?

Miguel Zablah avatar
Miguel Zablah

1.88.0

Miguel Zablah avatar
Miguel Zablah

let me update to latest

Igor Rodionov avatar
Igor Rodionov

Yea. Try pls

Igor Rodionov avatar
Igor Rodionov

I have a meeting now

Igor Rodionov avatar
Igor Rodionov

Will be back in 1 hour

Miguel Zablah avatar
Miguel Zablah

no worries I will put the update here in a bit

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Miguel Zablah in your atmos.yaml, update the logs section to this

logs:
  # Can also be set using 'ATMOS_LOGS_FILE' ENV var, or '--logs-file' command-line argument
  # File or standard file descriptor to write logs to
  # Logs can be written to any file or any standard file descriptor, including `/dev/stdout`, `/dev/stderr` and `/dev/null`
  file: "/dev/stderr"
  # Supported log levels: Trace, Debug, Info, Warning, Off
  # Can also be set using 'ATMOS_LOGS_LEVEL' ENV var, or '--logs-level' command-line argument
  level: Info
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

/dev/stderr tells Atmos to log messages to /dev/stderr instead of /dev/stdout (which pollutes the output from terraform)

Miguel Zablah avatar
Miguel Zablah

same issue with the latest version of atmos

Miguel Zablah avatar
Miguel Zablah

and I have /dev/stderr

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not related to Atmos version, please check the logs section

Miguel Zablah avatar
Miguel Zablah

I see okay let me change this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you already have stderr, then it’s not related. This output is from TF itself

Switched to workspace "data-gbl-dev-iam"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(not controlled by Atmos)

Miguel Zablah avatar
Miguel Zablah

I see, so what about doing a sed '1d' before saving it to the file, maybe that will fix it

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah I suppose 1d is a color special char that is part of the “switch” output

Igor Rodionov avatar
Igor Rodionov

@Miguel Zablah can you try full command

Igor Rodionov avatar
Igor Rodionov

atmos terraform output iam --stack data-gbl-dev --skip-init -- -json 1> output_values.json

Miguel Zablah avatar
Miguel Zablah

1d just means delete the first line haha

Igor Rodionov avatar
Igor Rodionov

what do you have in json file?

Miguel Zablah avatar
Miguel Zablah

yeah that will save all the output including the first line: Switched to workspace "data-gbl-dev-iam".

Miguel Zablah avatar
Miguel Zablah

I think we can maybe validate it instead

Miguel Zablah avatar
Miguel Zablah

so you have this: https://github.com/cloudposse/github-action-atmos-terraform-apply/blob/main/action.yml#L332

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> output_values.json    

we can add some steps here like this:

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> tmb_output_values.json

cat tmb_output_values.json | (head -n 1 | grep -q '^[{[]' && cat tmb_output_values.json || tail -n +2 tmb_output_values.json) 1> output_values.json
Miguel Zablah avatar
Miguel Zablah

this will ensure that if the output does not start with { or [ it will delete the first line

Igor Rodionov avatar
Igor Rodionov

the case is that we do not have the problem in our tests

Miguel Zablah avatar
Miguel Zablah

are your tests running the latest atmos version?

Igor Rodionov avatar
Igor Rodionov

so I want to find the reason instead of quick fixes

Igor Rodionov avatar
Igor Rodionov

1.81.0

Miguel Zablah avatar
Miguel Zablah

let me downgrade to that version to see if this is new

Miguel Zablah avatar
Miguel Zablah

and yeah I think you’re right, let’s find that out

Igor Rodionov avatar
Igor Rodionov

this is our tests

Miguel Zablah avatar
Miguel Zablah

same issue hmm

Miguel Zablah avatar
Miguel Zablah

let me check your test

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov this test is using openTofu right?

Igor Rodionov avatar
Igor Rodionov

we have both

Miguel Zablah avatar
Miguel Zablah

oh let me check the tf I think that one is opentofu

Igor Rodionov avatar
Igor Rodionov

terraform and opentofu

Miguel Zablah avatar
Miguel Zablah

ah I see them sorry

Miguel Zablah avatar
Miguel Zablah

haha

Miguel Zablah avatar
Miguel Zablah

okay I’m using a different tf version, you’re using: 1.5.2

Miguel Zablah avatar
Miguel Zablah

the strange thing is that when you do the plan on this test you have the: Switched to workspace "plat-ue2-sandbox". but on apply I don’t see that

Igor Rodionov avatar
Igor Rodionov

that’s fine

Miguel Zablah avatar
Miguel Zablah

@Igor Rodionov @Andriy Knysh (Cloud Posse) I have copied the action to my repo and added these 2 lines and this makes it work:

atmos terraform output ${{ inputs.component }} --stack ${{ inputs.stack }} --skip-init -- -json 1> tmb_output_values.json

cat tmb_output_values.json | (head -n 1 | grep -q '^[{[]' && cat tmb_output_values.json || tail -n +2 tmb_output_values.json) 1> output_values.json

can we maybe add this fix to the action? idk why for me it’s giving the msg Switched to workspace "plat-ue2-sandbox". have you guys configured /dev/stderr to write to a file? bc by default it will also write to the terminal. or maybe if you’re running it in docker it treats this differently?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov bumping this up

Michal Tomaszek avatar
Michal Tomaszek

hi, have you ever considered adding an Atmos plugin for asdf/mise? I think it would be useful as other tools like Terragrunt, Terramate, etc. are available within asdf plugins.

RB avatar
#170 Switching between atmos versions (asdf plugin)

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Sometimes I need to switch between atmos versions in order to debug something or perhaps a client uses version X and another client uses version Y. I usually have to run another go install of a specific atmos release or use apt install atmos="<version>-*" from geodesic.

It would be nice to use a tool like asdf with an atmos plugin

https://asdf-vm.com/
https://github.com/asdf-vm/asdf
https://github.com/asdf-vm/asdf-plugins

Another advantage of supporting asdf is that we can pin versions of atmos using the asdf .tool-versions file for users that want to use atmos within geodesic and from outside.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As you may know, we predominantly use geodesic. Anyone open to working with us to implement it? I can create the asdf-atmos repo

Michal Tomaszek avatar
Michal Tomaszek

yep, I even use it myself. I use geodesic as a base image of my custom Docker image. AFAIK, there are some packages installed already in it, like OpenTofu. the others can be installed using cloudposse/packages repository - that’s probably the easiest way. however, that’s somehow limited to the set of packages that this repository offers. I started to explore the idea of installing only mise (tool like asdf) and then, copying .mise.toml config file from my repository into it. that way mise can install all the dependencies via plugins. Renovate supports mise, too, which makes upgrades collected within .mise.toml file smooth and very transparent.

Michal Tomaszek avatar
Michal Tomaszek

unless I’m not aware of something and trying to reinvent the wheel here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Understood, it can definitely be layered on. @Michal Tomaszek do you have experience writing plugins for asdf?

RB avatar

I do! I can help with this if Michael doesn’t have the time

Michal Tomaszek avatar
Michal Tomaszek

sadly, I have no experience with plugins for asdf. however, I’m happy to contribute under someone’s guidance.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB i created this from their template, and modified it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/asdf-atmos

Asdf plugin for atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I can’t seem to get it working.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Michal Tomaszek

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nevermind, it works

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just had to run asdf global atmos latest after installing it

RB avatar

Cool! I’ll check this out tomorrow. Thanks Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
#1042 feat: Add Atmos by Cloud Posse

Summary

Description:

• Tool repo URL: https://github.com/cloudposse/atmos
• Plugin repo URL: https://github.com/cloudposse/asdf-atmos

Checklist

• Format with scripts/format.bash
• Test locally with scripts/test_plugin.bash --file plugins/<your_new_plugin_name>
• Your plugin CI tests are green (see check)
Tip: use the plugintest action from asdf-actions in your plugin CI

Michael Rosenfeld avatar
Michael Rosenfeld

Does CloudPosse provide any Atmos pre-commit hooks? I was curious if there were any community maintained hooks for atmos validate and such

Michael Rosenfeld avatar
Michael Rosenfeld

Or even a GitHub Action that runs atmos validate stacks before other actions run?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s the setup atmos action, and after that you can just run any atmos command like validate stacks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The precommits would be smart, but I don’t believe we have any documentation on that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we should have a page on setting up precommit hooks here https://atmos.tools/core-concepts/projects/

Setup Projects for Atmos | atmos

Atmos is a framework, so we suggest some conventions for organizing your infrastructure using folders to separate configuration from components. This separation is key to making your components highly reusable.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we need a task for that?

Michael Rosenfeld avatar
Michael Rosenfeld

Feel free to vendor this action we created for implementing this functionality if you think it’ll benefit the community:

name: "Atmos Validate Stacks"

on:
  pull_request:
    paths:
      - "stacks/**"
      - "components/**"
    branches:
      - main

env:
  ATMOS_CLI_CONFIG_PATH: ./rootfs/usr/local/etc/atmos

jobs:
  validate_stacks:
    runs-on: [Ubuntu]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Extract ATMOS_VERSION from Dockerfile
        run: |
          version=$(grep 'ARG ATMOS_VERSION=' Dockerfile | cut -d'=' -f2)
          echo "atmos_version=$version" >> "$GITHUB_ENV"

      - name: Setup Atmos
        uses: cloudposse/github-action-setup-atmos@v2
        with:
          atmos-version: ${{ env.atmos_version }}
          install-wrapper: false

      - name: Validate stacks
        run: atmos validate stacks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Michael Rosenfeld

github3 avatar
github3
07:09:08 PM

Enhancements

BUG: fix error with affected stack uploads @mcalhoun (#691)

what

• Fix an error with affected stack uploads to atmos pro
• Fix an error with URL extraction for locally cloned repos
• Add better JSON formatting for affected stacks
• Add additional debugging info

Andrew Ochsner avatar
Andrew Ochsner

Hi. My team is leveraging github-action-terraform-plan-storage. Should it also store the .terraform.lock.hcl file that gets generated as part of the plan?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe so

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

Yes, as without that you won’t be able to do an exact “apply” of what was planned.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andrew Ochsner are you not seeing this behavior?

Andrew Ochsner avatar
Andrew Ochsner

well i assumed it didn’t because our pipeline team built a pipeline that saves the plan then saves the hcl lockfile separately (via save plan… so they override the path)… any chance you can point me to the code that does it? or i can go back and validate w/ the team

Andrew Ochsner avatar
Andrew Ochsner

So… it appears we leveraged mostly from github-action-atmos-terraform-plan which saves the lock files as a separate step (implying the storePlan doesn’t by default, just the plan file that you give it). So I’m back to my conundrum of the getPlan not overwriting the lock file if it exists (because it’s committed to my repo): https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L276-L299

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun is OoO and won’t have a chance to look at it until next week

Andrew Ochsner avatar
Andrew Ochsner

ok

Andrew Ochsner avatar
Andrew Ochsner

Also, related to github-action-terraform-plan-storage: why such an explicit check when getting the plan if a planfile already exists? I kinda expected the default behavior to overwrite, or at least to be able to set an input var to overwrite… but maybe i’m missing something: https://github.com/cloudposse/github-action-terraform-plan-storage/blob/main/src/useCases/getPlan/useCase.ts#L39-L42

  const planFileExists = existsSync(pathToPlan);
  if (planFileExists) {
    return left(new GetTerraformPlanErrors.PlanAlreadyExistsError(pathToPlan));
  }
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

The reason we are checking that is that we don’t expect a plan file with the same name as the one we are retrieving to exist on disk at that point, so if it does, we’re considering that an error condition. We could consider adding a flag for overwrite, but could you provide the use case for this so we can think it through a little bit more? Thanks!

Andrew Ochsner avatar
Andrew Ochsner

sure.. it’s a little bit related to the question above…. we have a savePlan that saves the plan and then another savePlan that saves the lockfile. I didn’t write it so I don’t know if the first savePlan actually saves the lock file.. if so this is all moot.

it was the 2nd get plan for the lock file that we run into issues… and i definitely want it to overwrite whatever lock file is there and use the one that was generated or existed when the plan was built

Andrew Ochsner avatar
Andrew Ochsner

so i think i don’t care about this as much so long as the save plan saves the lock file, then we can get rid of the extra step someone put in there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Andrew Ochsner as a side note, just wanted to make sure you knew that the github-action-terraform-plan-storage action was written to support multiple clouds, with AWS and Azure presently implemented. It’s a pluggable implementation, so if you’re doing this for GCP, it would be ideal to contribute it back to this action)

Andrew Ochsner avatar
Andrew Ochsner

oh 100% would… we’re leveraging azure plans but storing in s3 buckets lol

2024-09-05

Ryan avatar

Running into a fun issue this AM where I’m getting invalid stack manifest errors. I haven’t touched anything aside from trying to run one plan to test some work for next week. It started with me getting back an error that a var was defined twice in an older piece of work, which at first I’m like, were these defined this way when committed? and yep. The really strange thing for me now is that it’s telling me there’s a variable that’s already defined, but I removed it, saved it, tried to run my plan again and it says it’s still there.

Ryan avatar

Example error -

╷
│ Error: invalid stack manifest 'mgmt/demo/root.yaml'
│ yaml: unmarshal errors:
│   line 83: mapping key "s3_bucket_name" already defined at line 77
│
│   with module.iam_roles.module.account_map.data.utils_component_config.config,
│   on .terraform\modules\iam_roles.account_map\modules\remote-state\main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│
╵
exit status 1
Ryan avatar

VSCode’s even pointing at the line and I’m like it’s not there lol.

Ryan avatar

Ok I got it back to functioning again. Essentially I had many old template files where team members had put some double variables in the template. In the past it didn’t seem to care, but today it was finding everywhere one of those was and forcing me to correct. Not a huge issue other than I wish I knew why it decided to do that now.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Ryan can you please confirm you don’t need any assistance here?

Ryan avatar

I am good now, apologies. Really strange coming back from a few weeks OOO. Thanks for checking.

Yangci Ou avatar
Yangci Ou

Hey all, does anyone have experience with renaming an Atmos stack without recreating the resources? As far as I know, an Atmos stack corresponds to a Terraform workspace, so is it similar to renaming a workspace in Terraform and updating the Atmos stack name in the yaml file? I have the state in S3, so move the state/push the old workspace state to the new workspace state?

RB avatar

You’d need to:
• go into the base component directory
• switch to the correct workspace
• dump the state
• create a new workspace
• switch to the new workspace
• push the state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also override the workspace name in the stack configurations to maintain the original workspace

Yangci Ou avatar
Yangci Ou

Great info, thanks!

2024-09-06

Roman Orlovskiy avatar
Roman Orlovskiy

Hi all. I am considering using atmos and Cloudposse root components on my new project with an existing AWS org that was created using ControlTower before I came in. So I am wondering if it is still possible to leverage the root components and atmos in this case, and whether there are any examples of this setup? As I understand, the main issue will be due to usage of the account/account-map/sso components. Do I need to import the existing AWS org setup into them, or implement some replacement root components instead of account/account-map to provide account-based data to other components?

Igor M avatar

I recently found myself in the same position, and I advocated to start with a fresh org/set of accounts. There is documentation in Atmos on Brownfield but if there is an option to rebuild/migrate, it might be easier to go down this road.

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

RB avatar

Take a look at the following prs. This is how I’ve personally gotten this to work.

• https://github.com/cloudposse/terraform-aws-components/pull/945 - account map
• https://github.com/cloudposse/terraform-aws-components/pull/943 - account

These PRs weren’t merged in based on Nuru’s comment. I haven’t had the time to address the needs in a separate data source component. On my eventual plate. But if you feel ambitious, please feel free to address it so we can have a maintained upstream component for this.

For the account component, I used the yaml to define my accounts, which get stored in its outputs. Usually this component will create the accounts too. (The original reason I didn’t want to create a new component is that the logic of this component is fairly complex and I didn’t want to copy that logic and have it break between the existing and new flavors of the account component.) This is a root level component, meaning only a superuser can apply it.

the account-map is also a root level component; it reads the output of the account component and has its own set of outputs. These are read by all the downstream components and can be read by any role.

RB avatar

These flavors above are as-is and unofficial from me, so if you decide to use them, please use with caution

2024-09-09

Patrick avatar
Patrick

Hi CloudPosse Team, I got an issue when I enabled vpc_flow_logs_enabled: true in the quick-start-advanced lab. I also created vpc-flow-logs-bucket before. I spent around 2 days on this issue but I can’t fix it. I don’t know why. Can you help me check this? Thanks. Here is my code:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Using Remote State | atmos

Terraform natively supports the concept of remote state and there’s a very easy way to access the outputs of one Terraform component in another component. We simplify this using the remote-state module, which is stack-aware and can be used to access the remote state of a component in the same or a different Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when using remote-state, your atmos.yaml will not work if placed in the root of the repo (since Terraform executes the providers from the component directory)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to place atmos.yaml into one of the known paths

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
System dir (/usr/local/etc/atmos/atmos.yaml on Linux, %LOCALAPPDATA%/atmos/atmos.yaml on Windows)

Home dir (~/.atmos/atmos.yaml)

Current directory

ENV variables ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH
Patrick avatar
Patrick

noted & thanks @Andriy Knysh (Cloud Posse) Let me check that.

Patrick avatar
Patrick

Hi @Andriy Knysh (Cloud Posse) I fixed this issue, thank you!

Miguel Zablah avatar
Miguel Zablah

Hey I have a question about this docs: https://atmos.tools/integrations/github-actions/atmos-terraform-drift-detection

it mentions a ./.github/workflows/atmos-terraform-plan-matrix.yaml but I don’t see that anywhere, does it exist?

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dan Miller (Cloud Posse) can you please take a look at this ?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Yes, that should’ve been included on that page, but it’s missing. Here’s a link to our Cloud Posse reference architecture that has the same workflows

https://docs.cloudposse.com/layers/gitops/example-workflows/#atmos-terraform-plan-matrix-reusable

Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

Miguel Zablah avatar
Miguel Zablah

ah thanks! but I think this is outdated, the atmos-terraform-drift-detection

does not work for me

Atmos Terraform Drift Detection | atmos

Identify drift and create GitHub Issues for remediation

Miguel Zablah avatar
Miguel Zablah

I have solved the issue with this action:

cloudposse/github-action-atmos-terraform-select-components

that is the same that @Igor Rodionov and @Yonatan Koren helped solve for plan and apply here: https://sweetops.slack.com/archives/C031919U8A0/p1724845173352589

Hey guys I found a potential bug on the atmos plan workflow for github actions,

You guys have this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L110

It will run atmos describe <component> -s <stack> .. but this requires the authentication to happen, and that is something that happens later, on this step: https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L268

I have fixed this by doing the authentication before the action runs in the same job, which looks to do the trick, but maybe it is something that should be documented or fixed in the action

Miguel Zablah avatar
Miguel Zablah

then the matrix for the plan that the job outputs is not valid when doing this plan

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

The latest versions of all workflows are on that same page I linked above. This page is automatically updated with the latest version, whereas atmos.tools is manually updated with a point-in-time version. I recommend using the docs.cloudposse version for now, and I will update atmos.tools

https://docs.cloudposse.com/layers/gitops/example-workflows/

Example Workflows | The Cloud Posse Reference Architecture

Using GitHub Actions with Atmos and Terraform is fantastic because it gives you full control over the workflow. While we offer some opinionated implementations below, you are free to customize them entirely to suit your needs.

Miguel Zablah avatar
Miguel Zablah

yeah I think this needs some updates, this workflow is not working

Patrick avatar
Patrick

hi @Dan Miller (Cloud Posse) Do we have any document for Gitlab CICD?

Miguel Zablah avatar
Miguel Zablah

@Dan Miller (Cloud Posse) with the docs on the workflow that you provided I was able to make the drift detection work, thanks!

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

hi @Dan Miller (Cloud Posse) Do we have any document for Gitlab CICD? unfortunately no, we only use GitHub at this time

Patrick avatar
Patrick

Thank you!

2024-09-10

Patrick avatar
Patrick

Hi Team, I’m using the ec2-instance component for provisioning an EC2 instance on AWS. But I want to change several variables that don’t appear in the ec2-instance component at root, like root_volume_type. How can I invoke these variables in .terraform/modules/ec2-instance/variables.tf from the catalogs config? Thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Some components do not have the variables defined. Not by design; we just hadn’t needed them before. Feel free to open a PR against the components repo to add the missing variables and pass them through to the underlying module(s)

Patrick avatar
Patrick

@Erik Osterman (Cloud Posse) You mean I have to customize the component to suit my needs. Right?

RB avatar

You’d need to modify the component to expose the inputs of the module with copied and pasted variables.

E.g. the ec2 instance module has a variable for ami, so the ami variable needs to be set in 2 additional places: once at the module level within the component, and it would also need a variable definition

In component main.tf

# in component
module "ec2" {
  # ...
  ami = var.ami
  # ...
}

And in component variables.tf

variable "ami" {
  default = null
}

Now the input can be exposed to the Atmos yaml interface

components:
  terraform:
    ec2-instance/my-fav-singleton:
      # ...
      vars:
        ami: ami-0123456789
RB avatar

For one of the components, rds, I recall copying all the variables from the upstream module directly into the component (with the convention {component}-variables.tf) and exposing each input in the module definition within the component, which worked fairly well

https://github.com/cloudposse/terraform-aws-rds/blob/main/variables.tf

https://github.com/cloudposse/terraform-aws-components/blob/main/modules/rds/rds-variables.tf

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


You mean I have to customize the component to suit my needs. Right?
Yes, although if it’s just adding some variables I think it’s suitable to upstream for everyone

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think the method @RB describes is also a good technique, similar to monkeypatching or swizzling.

https://developer.hashicorp.com/terraform/language/files/override

Override Files - Configuration Language | Terraform | HashiCorp Developerattachment image

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

Patrick avatar
Patrick

Thanks @RB, @Erik Osterman (Cloud Posse). If I modify config in components like main.tf and variables.tf, it works for me, but if I run the command atmos vendor pull in the future, it will overwrite my change.

RB avatar

Yes that’s true. You could modify the component locally, make sure it works, and if it does, upstream it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


It works for me but if I run the command atmos vendor pull in the future, it will overwrite my change.
Unless you use terraform overrides pattern

RB avatar

That is pretty cool how the override will just merge the definitions together. So basically you can do something like this, is that right?

# components/terraform/ec2-instance/main_override.tf
variable "root_volume_type" {}

module "ec2_instance" {
  root_volume_type = var.root_volume_type
}

Strange that that page mentions almost every block except module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it works with module blocks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But even ChatGPT says I’m wrong, so

RB avatar

Let’s assume it does work. For convenience, would you folks be open to PRs for all components to follow a convention like <upstream-module>-variables.tf ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not following. From my POV, the whole point of overrides is you have full control and we don’t need a strict convention per se.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or in other words, the point is not to upstream override files

RB avatar

So let’s use the component ec2-instance as an example. We know that root_volume_type is missing from the component variables.tf file but exists in the upstream module terraform-aws-ec2-instance.

In the rds component example, we use rds-variables.tf, which is a copy of its upstream module’s variables.tf file. After implementing that change, the rds component’s variables.tf file only contains the region and other component-specific variables.

We can do the same for ec2-instance and other components too. This way the override method could be used as a stop-gap until all components are updated.

One more benefit of this approach is that cloudposse can also have a github action that monitors the upstream module so when its inputs are modified, it can auto copy those inputs directly into the component itself.

Difficulties here

• adding the input directly to the module definition; however, this can also be remediated using something like hcledit.

• If multiple upstream modules are used in a component, such as eks/cluster, then a copied file would also need to have each input var prefixed with the module name to avoid collisions

RB avatar

friendly ping ^

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure we will invest in automating those kinds of updates. That said, we plan to support JIT vendoring of modules, and using the existing support for provider and backend generation, you will be able to provision modules directly as components with state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not exactly the same thing, but means fewer custom components are required

RB avatar

Oh wow so a lot of the components can be deleted if all they do is wrap an upstream module.

But there are some components that wrap a single module but also have some component logic under the hood. I imagine those components would still be retained unless their logic somehow gets moved into their upstream module

RB avatar

I think there might still need to be a stop gap in place for components that use multiple modules or for components that don’t have logic that can be easily moved into the modules they wrap, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We will continue to mange/develop components, but adopt a hybrid model

RB avatar

For the modules that still need to be developed, would y’all consider the {upstream-module}-variables.tf convention?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I still don’t follow why such a convention is necessary. Can you elaborate? What problem is it addressing

RB avatar

Ah sorry, let me think of what i can add to https://sweetops.slack.com/archives/C031919U8A0/p1726067246212129?thread_ts=1725969302.837589&cid=C031919U8A0

It’s just all about scalability. Anything that’s not DRY (in this case, copying vars from module to component) requires a system to keep these vars in line. So if we maintain separate files for component vars and module vars, as opposed to a single variables.tf, then we can simplify upgrading to a newly exposed var by downloading the module’s vars into the spec {module}-variables.tf

RB avatar

So instead of one component variables.tf file that contains

# e.g. component vars
variable region {}

# e.g. module vars
variable "engine_version" {}

So for example, we put var region in variables.tf and put var engine_version in rds-variables.tf

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha i see what you mean now. Let me think on it

Igor M avatar

Is there a way to hide abstract components from the atmos UI / component list?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Igor M do you mean the terminal UI when you execute the atmos command?

Igor M avatar

That’s right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we def should not show the abstract components in the UI (since we don’t provision them). I’ll check it, the fix will be in the next release

Igor M avatar

another minor observation for the UI - the up/down/left/right letter keys don’t work when within filter (/) mode

2024-09-11

Petr Dondukov avatar
Petr Dondukov

:wave: Hi everyone, I’m looking into the Atmos project and I want to say it’s awesome, it should make our lives easier. Can you please tell me if it is possible to specify a single component or a group of components and exclude one or more components during the deployment? That is, we need to be able to have multiple engineers working on the same stack, and if someone makes a change, they can roll back the changes of another colleague. Of course, we can look at the plan, but is there a possibility to just specify what should be deployed? There are commands in terragrunt: --terragrunt-include-dir, --terragrunt-exclude-dir, --terragrunt-strict-include, --terragrunt-ignore-external-dependencies, etc. We need the same in Atmos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In Atmos, when you break your infrastructure into components, you’re essentially working with one component at a time. The behavior you’re describing would, therefore, work out of the box.

When you run Atmos commands like atmos terraform apply $component_name --stack foo, you only target that specific component. This is somewhat analogous to the Terragrunt --include-dir behavior, but with Atmos, you’re naturally working with one component at a time. Atmos does not support a DAG at this time for automatic dependency order applies. For that we use workflows instead.


Petr Dondukov avatar
Petr Dondukov

Thank you so much for your reply! What do you mean by workflows?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

2024-09-12

Dennis DeMarco avatar
Dennis DeMarco

Hi. I’m trying to use private_subnets: '{{ (atmos.Component "vpc" .stack).outputs.private_subnets }}'

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please see some related threads in this channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

these two calls will execute terraform output just once for the same stack

vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
vpc_private_subnets: '{{ toRawJson ((atmos.Component "vpc" .stack).outputs.vpc_private_subnets) }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are working on improvements for this, using YAML explicit types

Dennis DeMarco avatar
Dennis DeMarco

Alright. I think this is closer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The key part is toRawJson

Dennis DeMarco avatar
Dennis DeMarco
"private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
Dennis DeMarco avatar
Dennis DeMarco

Almost, going to grab some food. I’m getting closer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Dennis DeMarco avatar
Dennis DeMarco

hmm Do I have a setting wrong? Does output need to do json? I put it on trace and saw Executing 'terraform output vpc -s lab'

Dennis DeMarco avatar
Dennis DeMarco
Executing template function 'atmos.Component(vpc, lab)'
Found the result of the template function 'atmos.Component(vpc, lab)' in the cache
'outputs' section:
azs:
    - us-east-1c
    - us-east-1d
nat_public_ips:
    - 52.22.211.159
private_subnets:
    - subnet-00ec468ff0c78bf7c
    - subnet-029a6827865e98783
public_subnets:
    - subnet-02ab8a96c39abf71a
    - subnet-0b58b0578f1ec373f
stage: lab
vpc_cidr_block: 10.0.0.0/16
vpc_default_security_group_id: sg-0b124eb183c6b52ba
vpc_id: vpc-09ead1bd292122e33

ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 2
ProcessTmplWithDatasources(): processed template 'all-atmos-sections'

Variables for the component 'test' in the stack 'lab':
private_subnets: '["subnet-00ec468ff0c78bf7c","subnet-029a6827865e98783"]'
stage: lab
vpc_id: vpc-09ead1bd292122e33

Writing the variables to file:
components/terraform/test/lab-test.terraform.tfvars.json

Using ENV vars:
TF_IN_AUTOMATION=true

Executing command:
/opt/homebrew/bin/terraform init -reconfigure
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of cloudposse/utils from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed cloudposse/utils v1.26.0
- Using previously-installed hashicorp/aws v5.66.0

Terraform has been successfully initialized!

Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: test
Stack: lab
Working dir: components/terraform/test

Executing command:
/opt/homebrew/bin/terraform workspace select lab

Executing command:
/opt/homebrew/bin/terraform plan -var-file lab-test.terraform.tfvars.json -out lab-test.planfile

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Invalid value for input variable
│
│   on lab-test.terraform.tfvars.json line 2:
│    2:    "private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
│
│ The given value is not suitable for var.private_subnets declared at variables.tf:13,1-27: list of string required.
╵
exit status 1
Dennis DeMarco avatar
Dennis DeMarco

interesting, the tfvars look correct in the vpc, yet in the component not so much

{
   "private_subnets": "[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]",
   "stage": "lab",
   "vpc_id": "vpc-09ead1bd292122e33"
}
Dennis DeMarco avatar
Dennis DeMarco

bug I guess

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dennis DeMarco this is a limitation that we have in Atmos using YAML and Go templates in the same files.

this is invalid YAML, because { ... } is a map in YAML (same as in JSON), and the double {{ breaks the YAML parser:

a: {{ ... }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we have to quote it

a: "{{ ... }}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when we quote it, it’s a string in YAML. And the string is sent to the TF component input

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I personally have not found a good way to deal with that. There is a thread with the same issue, and one person said they found a solution (I can try to find it)

Dennis DeMarco avatar
Dennis DeMarco

I tried it, but did not get good results today

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

another way to deal with that is to change the TF input type from list(string) to string and then do jsondecode on it (but that will require changing the TF components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, we are working on a new way to define the atmos.Component function - not as a Go template, but as a YAML custom type

a: !terraform.output <component> <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this should be done in the next week or so, we’ll let you know

Dennis DeMarco avatar
Dennis DeMarco

Ah alrighty. Thank you. I did a workaround by just using data aws_subnets with the vpc id that came over.

Dennis DeMarco avatar
Dennis DeMarco

But I was like a dog with a bone trying to make it work today hah

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, and as I mentioned, you can change the var type to string and then do jsondecode on it

Dennis DeMarco avatar
Dennis DeMarco

Thank you very much

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF will get this as input

"[\"subnet-00ec468ff0c78bf7c\",\"subnet-029a6827865e98783\"]"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which is a string, and after jsondecode you will get list(string)

Dennis DeMarco avatar
Dennis DeMarco

And it’s returning this

Dennis DeMarco avatar
Dennis DeMarco

"private_subnets": "[subnet-00ec468ff0c78bf7c subnet-029a6827865e98783]",

Dennis DeMarco avatar
Dennis DeMarco

expected something like

Dennis DeMarco avatar
Dennis DeMarco

"private_subnets": [ "subnet-00ec468ff0c78bf7c", "subnet-029a6827865e98783" ],

Dennis DeMarco avatar
Dennis DeMarco

I suspect the template component is doing something with string lists, but I can’t figure out how to stop it

2024-09-13

jose.amengual avatar
jose.amengual

I have a stack that has some core components in it, and the stack names are based on environment: us2, namespace: dev. But now we need to move this core stuff to another stack file that we will call core. The problem is that the components are already deployed and using the context (so all the names are using namespace and environment), so I guess the only way for me to be able to move these components without having to redeploy (due to the name change) is to override the namespace or environment variable at the component level, but I know that is not a recommended practice. Is there a better way to do this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


but now we need to move this core stuff to another stack file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in Atmos, stack and stack file (stack manifest) are diff things

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use any file names, in any folder, and move the components b/w them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stack files are for people to organize the stack hierarchy (folder structure)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stack is a logical construct, and it can be defined in any stack manifest (or in more than one)

jose.amengual avatar
jose.amengual

yes, it can be pepe.yaml, but in this case we need to move it to another stack (not file) to run the workflows at different times

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you want to move some components into another Atmos stack (like from plat-ue2-dev to core-ue2-dev)?

jose.amengual avatar
jose.amengual

correct, because at some point they might be owned by someone else

jose.amengual avatar
jose.amengual

I remember you told me to avoid overriding the variables used to name stacks since it is a bad idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Import the components into the new stack manifest file
  2. (remove them from the old manifest file)
  3. Override terraform workspace for the components
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I remember you told me to avoid overriding the variables used to name stacks since it is a bad idea

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a bad pattern, but can be used in cases like that (and documented)

jose.amengual avatar
jose.amengual

documented, can you point me to the doc?

jose.amengual avatar
jose.amengual

when you say import, you are talking about moving it to the new .yaml file, not using import: on the yaml file, right?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also override the workspace name in the stack configurations to maintain the original workspace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos component migration in YAML config | atmos

Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.

jose.amengual avatar
jose.amengual

thanks guys

2024-09-14
