#atmos (2024-11)

2024-11-01

jose.amengual avatar
jose.amengual

With the Atmos GitHub Actions I can plan/apply no problem, but now I want to destroy and I do not have a `count = module.this.enabled ? 1 : 0`. How do you guys destroy? I would like to be able to find the deleted YAML from the stack and destroy that one if possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Interesting. So we are working on supporting an enabled flag for components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The scope is not currently to destroy. However, it might be worth considering that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the current implementation, enabled would make the component “invisible”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

an alternative to commenting it out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can see a component as having 3 states

• enabled

• disabled

• destroyed

jose.amengual avatar
jose.amengual

I think if a component is commented out or deleted from the stack file, describe affected should know what to do with it, or output a destroyed flag or something like that, so another job can use that matrix to destroy it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, that makes sense.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So enabled = true/false affects the visibility, but removal from the configuration is visible via atmos describe affected

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) what happens today in describe affected if the component is removed? is that surfaced?

jose.amengual avatar
jose.amengual

I renamed a component from pepetest to pepetest1, describe affected saw the new pepetest1 component got deployed, but the old one is still there in the cloud environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the key is do we have the information in describe affected JSON output

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The right behavior might not be implemented, but maybe we have the data there to act on

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if a component is removed, Atmos does not see it (it will not consider it affected) - this is the current implementation. This is b/c Atmos compares the current branch with a remote branch/tag/sha - if the current branch does not have the component, then it’s “not affected”.

Setting enabled: false is not “removal”, so describe affected sees that
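
For illustration, a minimal stack-manifest sketch of that distinction, reusing the pepetest component from this thread (the exact vars are hypothetical):

components:
  terraform:
    pepetest:
      vars:
        enabled: false   # the component stays in the stack file, so `describe affected` still sees the change
    # deleting the `pepetest` block entirely is "removal" and, per the explanation above,
    # is currently not surfaced by `atmos describe affected`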

jose.amengual avatar
jose.amengual

but with that, you need a two-step approach: one PR to set enabled: false and then another PR to remove the yaml from the stack file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would say we didn’t consider a complete component removal with describe affected - we need to revisit this

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Added a task to the backlog

2
jose.amengual avatar
jose.amengual

and this one is becoming really important, more than the vendoring issue

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is it that you want to destroy a component in a stack if it is removed (deleted or no longer inherited)?

jose.amengual avatar
jose.amengual

Yes

jose.amengual avatar
jose.amengual

Ideally I can run describe affected and maybe have two matrices, one with affected and one with deletions

jose.amengual avatar
jose.amengual

So that I can do something about it in my workflow

Andrew Chemis avatar
Andrew Chemis

Also interested in this feature. @Erik Osterman (Cloud Posse) I checked the backlog but I don’t see a task - is this the right place? https://github.com/orgs/cloudposse/projects/34/views/1

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, our internal tasks are not published to GitHub issues

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The public roadmap shows tasks that originated from Issues opened by users

Andrew Chemis avatar
Andrew Chemis

Thanks!

I know docs are in a bit of flux right now - Do you know where the best practice docs ended up?

This URL does not resolve to anything useful https://atmos.tools/best-practices/terraform/#use-feature-flags-list-or-map-inputs-for-optional-functionality and https://docs.cloudposse.com/reference/best-practices/terraform-best-practices/ is dead

I'm trying to find guidance on vars.enabled. I didn't implement it properly in my components/modules and now I can't delete components using my CI/CD workflow. No worries, I can delete locally for now while I convince the team we need to go through our components to implement this.

2024-11-02

github3 avatar
github3
10:28:40 PM

Improve terraform and helmfile help. Enable Go templating in the command field. Clean Terraform workspace before executing terraform init @aknysh (#759)

what

• Improve terraform and helmfile help
• Enable Go templating in the command field of stack config
• Clean Terraform workspace before executing terraform init

why

• Improve the help messages. When a user executes atmos terraform --help or atmos helmfile --help (or help for a subcommand), print a message describing the command and how to execute the terraform and helmfile help command
atmos terraform --help

image

• Enable Go templating in the command field of stack config in addition to the already supported sections.
You can now use Go templates in the following Atmos sections to refer to values in the same or other sections:
vars
settings
env
providers
overrides
backend
backend_type
component
metadata.component
command
Enabling Go templates in the command section allows specifying different Terraform/OpenTofu/Helmfile versions per component/stack, and getting the value from different Atmos sections or from external data sources.

• Clean Terraform workspace before executing terraform init. When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:

  1. default
  2. the previously used workspace

The prompt forces the user to always make a selection (which is error-prone), and also makes it complicated when running on CI/CD. The PR adds the logic that deletes the `.terraform/environment` file from the component directory before executing `terraform init`. The `.terraform/environment` file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure. We delete the file before executing `terraform init` to prevent the Terraform prompt asking to select the default or the previously used workspace.
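
As a rough sketch of the new templating in the command field (the component and setting names are hypothetical, and this assumes the settings section is available in the template context as the list above suggests):

components:
  terraform:
    my-component:
      settings:
        terraform_command: tofu          # value referenced by the template below
      command: '{{ .settings.terraform_command }}'
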
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual another one for you

jose.amengual avatar
jose.amengual

this feels like Xmas

1

2024-11-03

Kalman Speier avatar
Kalman Speier

hey folks, for some reason atmos generates the wrong terraform workspace name. i have a component named nats and a stack named dev, and instead of dev-nats the workspace is simply dev. what could cause that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Something is wrong with your name_pattern or name_template
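
For reference, both knobs live under stacks in atmos.yaml; a minimal sketch (values are illustrative) might look like:

stacks:
  # token-based pattern
  name_pattern: "{stage}"
  # or, alternatively, a Go template over the stack's vars
  # name_template: "{{ .vars.stage }}"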

Kalman Speier avatar
Kalman Speier

i didn’t change those.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Share your atmos config, if you can

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Customize Stack Behavior | atmos

Use the atmos.yaml to configure where Atmos will discover stack configurations.

Kalman Speier avatar
Kalman Speier
base_path: .

components:
  terraform:
    command: tofu
    base_path: components/terraform
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true

stacks:
  base_path: stacks
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"

workflows:
  base_path: stacks/workflows

templates:
  settings:
    enabled: true
    sprig:
      enabled: true

logs:
  file: /dev/stderr
  level: Info
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, I think I initially misunderstood.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you setting workspace key prefix anywhere?

Kalman Speier avatar
Kalman Speier

nope

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) any ideas

Kalman Speier avatar
Kalman Speier

the workspaces are part of the terraform state, right?

Kalman Speier avatar
Kalman Speier

i’ve tried to clean all tf files and deleted this workspace, but still got back named as dev so maybe the default workspace holds some information?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, in atmos we use one workspace for each instance of a component deployed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you updated to the latest atmos? We just fixed a problem related to workspaces and changing backends

Kalman Speier avatar
Kalman Speier

yes, i’m using the very latest

Kalman Speier avatar
Kalman Speier
Workspace "dev" doesn't exist.

You can create this workspace with the "new" subcommand
or include the "-or-create" flag with the "select" subcommand.
Created and switched to workspace "dev"!
Kalman Speier avatar
Kalman Speier

but i bet it's atmos that is switching to the workspace, so the name dev comes from atmos, not from the state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, atmos dynamically computes the workspace name and switches to it

Kalman Speier avatar
Kalman Speier

i see.

Kalman Speier avatar
Kalman Speier
vars:
  stage: dev

import:
  - deploy/_defaults
  - catalog/do/project
  - catalog/do/doks
  - catalog/nats

components:
  terraform:
    project:
      vars:
        name: "platform-{{ .stack }}"
        environment: Development
    cluster:
      vars:
        name: doks-cluster-1
        project: '{{ (atmos.Component "project" .stack).outputs.id }}'
    nats:
      vars:
        kube_host: '{{ (atmos.Component "cluster" .stack).outputs.kube_host }}'
        kube_token: '{{ (atmos.Component "cluster" .stack).outputs.kube_token }}'
        kube_cert: '{{ (atmos.Component "cluster" .stack).outputs.kube_cert }}'
Kalman Speier avatar
Kalman Speier

this is my dev stack file

Kalman Speier avatar
Kalman Speier

and interestingly for the project and the cluster the names are generated correctly

Kalman Speier avatar
Kalman Speier
❯ tofu -chdir=components/terraform/nats workspace list
  default
* dev
  dev-cluster
  dev-project
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


i have a component named nats and stack named dev and instead of dev-nats the workspace is simply dev

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the correct behavior for Atmos components that don’t inherit from other components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, the TF workspace is simply the stack name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

only if you have a derived component (inherited from a base component), then TF workspace will be <stack>+<component>
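
A minimal sketch of that rule (component names are hypothetical): per the explanation here, the derived component below would get the workspace <stack>-<component>, e.g. dev-nats, while a standalone component would just use the stack name.

components:
  terraform:
    nats/defaults:
      metadata:
        type: abstract          # base component, never provisioned directly
        component: nats
    nats:
      metadata:
        component: nats
        inherits:
          - nats/defaults       # derived component => TF workspace "dev-nats" in stack "dev"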

Kalman Speier avatar
Kalman Speier

hmm. what do you mean by “inherit”? the do cluster and project names are correct and don’t inherit from anything. or am i missing something here?

Kalman Speier avatar
Kalman Speier
components:
  terraform:
    nats:
      metadata:
        component: nats
      vars:
        ...

vs

components:
  terraform:
    cluster:
      metadata:
        component: do/doks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Inherit Configurations in Atmos Stacks | atmos

Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.

Kalman Speier avatar
Kalman Speier

ok, i read that before. but i didn’t use that in any catalog so far.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the two examples above, both nats and cluster Atmos components do not inherit from any other Atmos components, so the TF workspaces for both of them will be dev

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so what you have is 100% correct, the workspaces for these two components are just dev - the stack name

Kalman Speier avatar
Kalman Speier
❯ tofu -chdir=components/terraform/do/doks workspace list
  default
  dev
* dev-cluster
  dev-project
Kalman Speier avatar
Kalman Speier

maybe those were generated wrongly because i’ve made a lot of changes back and forth since.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, looks like it

Kalman Speier avatar
Kalman Speier

anyhow i’ fine with a single dev workspace named after the stack. as long as the states are correctly separated.

Kalman Speier avatar
Kalman Speier

i’m not fully familiar with tf workspaces, but that means if all these components are in the same workspace they are sharing the state or not ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each component has workspace_key_prefix - it’s usually generated by Atmos, but you can override it per component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

workspace_key_prefix is, if you look at the backend s3 bucket, the top-level folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so each component will have its own top-level folder in the bucket, and each stack, in a separate TF workspace, will have its own subfolder in that folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, each component state is separated from any other component state (diff folders in the state bucket)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that it’s still in the same backend (same S3 bucket). If you want to separate backends (e.g. per tenant/OU, per account, etc.), you need to create and configure multiple backends

Kalman Speier avatar
Kalman Speier

sure. but i don’t see workspace_key_prefix generated anywhere.

Kalman Speier avatar
Kalman Speier
{
  "terraform": {
    "backend": {
      "gcs": {
        "bucket": "mw-tf-state",
        "encryption_key": "...",
        "prefix": "platform/infra"
      }
    }
  }
}
Kalman Speier avatar
Kalman Speier

it’s gcs, not s3 actually.

Kalman Speier avatar
Kalman Speier

_defaults.yaml:

terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: mw-tf-state
      prefix: platform/infra
      encryption_key: '{{ env "GCS_ENCRYPTION_KEY" }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

GCP has prefix

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which is the same

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
If the prefix is not specified for a component, Atmos will use the component name (my-component in the example above) to auto-generate the prefix. In the component name, all occurrences of / (slash) will be replaced with - (dash).
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t need to hardcode it here

backend:
    gcs:
      bucket: mw-tf-state
      prefix: platform/infra
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, all components will use the same prefix

Kalman Speier avatar
Kalman Speier

hmm ok, let me check that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review the doc, if you don’t specify prefix, Atmos will auto-generate it

Kalman Speier avatar
Kalman Speier

thank you!

Kalman Speier avatar
Kalman Speier

so if i understand it correctly, without setting the prefix atmos will generate it, and because of that my component states will end up in separate folders in the gcs bucket, so even if they share a workspace the states are separated.
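
Under those assumptions, the resulting bucket layout would look roughly like this (illustrative sketch: prefixes are the component names with / replaced by -, and each workspace gets its own state object):

# mw-tf-state/
#   nats/dev.tfstate
#   do-doks/dev.tfstate
#   do-project/dev.tfstate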

1
Kalman Speier avatar
Kalman Speier

good to know that.:)

Kalman Speier avatar
Kalman Speier

the only problem is that i’d prefer to store them in some folder instead of the root of the bucket, but i can live with that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can specify the prefix per component, in which case Atmos will just use it
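
A hedged sketch of that per-component override for the GCS backend (the folder name is illustrative):

components:
  terraform:
    nats:
      backend:
        gcs:
          prefix: tfstate/nats   # stored under this folder instead of the auto-generated prefix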

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i can review your config, let me know

Kalman Speier avatar
Kalman Speier

it’s fine. this bucket is solely for tf states so it’s ok even in the root. i prefer to leave it to atmos to generate.

1
Kalman Speier avatar
Kalman Speier

on a different topic while we chat.. any chance to add support for a command like this in atmos.yaml:

components:
  terraform:
    command: xy command -- tofu
Kalman Speier avatar
Kalman Speier

so support command with double dash

Kalman Speier avatar
Kalman Speier

it would be perfect; that way i could load secrets as env vars and wouldn’t need custom commands.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t remember if command: xy command -- tofu is supported now (need to look at the code). Did you test it?

Kalman Speier avatar
Kalman Speier

yes, i just tested it; unfortunately it’s not working.

Kalman Speier avatar
Kalman Speier
atmos terraform plan cluster --stack dev

template: all-atmos-sections:100:35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exec: "op run --no-masking --env-file=.env -- tofu": executable file not found in $PATH
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have done that with asdf and it works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, we’ll create a task for this, thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try to quote it?

Kalman Speier avatar
Kalman Speier

thanks a lot!!

Kalman Speier avatar
Kalman Speier

@Erik Osterman (Cloud Posse) do you have an example maybe with asdf?

Kalman Speier avatar
Kalman Speier

i’ve just tried with quote but not working.

Kalman Speier avatar
Kalman Speier

trying with a small shell script:

command: ./optofu.sh

but still not working

Kalman Speier avatar
Kalman Speier
#762 add support to exec shell commands with args

what

Add support for shell commands.

why

To support complex commands, for example:

components:
  terraform:
    command: op run --no-masking --env-file=.env -- tofu

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks, i’ll review it today. Did you test it?

Kalman Speier avatar
Kalman Speier

roughly. are there any go tests i can run?

Kalman Speier avatar
Kalman Speier

strange but for some reason it’s not working with my atmos config. it’s working fine with atmos.yaml in the repository. i will dig into it.

Kalman Speier avatar
Kalman Speier

the problem is that when the template is executed it uses tfexec

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah, yes, tfexec doesn’t understand those commands, it needs terraform

2024-11-04

Kalman Speier avatar
Kalman Speier

is it possible to organize a few smaller components into one catalog?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Of course… this is what we frequently do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can create a catalog stack file for a solution.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. here’s how we do “EKS” and all related components

Kalman Speier avatar
Kalman Speier

ok. is there any related example in the repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here you can see we created a default cluster config that imports a bunch of other components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Those could be inline, but we chose to import them

Kalman Speier avatar
Kalman Speier

thanks!

1

2024-11-05

Kalman Speier avatar
Kalman Speier

what’s the best way to share vars between some components but not all of them? if i place them in the stack yaml vars section, i get warnings from the components which are not using them.
Warning: Value for undeclared variable

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, please don’t use globals

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are a few ways to share vars b/w components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. create a base abstract component (with the default values) and inherit it in the other components
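
A minimal sketch of that pattern (names and vars are hypothetical):

components:
  terraform:
    defaults/shared:
      metadata:
        type: abstract       # never provisioned directly, only inherited
      vars:
        region: us-east-1
    app:
      metadata:
        inherits:
          - defaults/shared  # receives the shared vars via deep-merge
    worker:
      metadata:
        inherits:
          - defaults/shared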

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Abstract Component | atmos

Abstract Component Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And multiple inheritance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Inherit Configurations in Atmos Stacks | atmos

Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.

Kalman Speier avatar
Kalman Speier

thx!

1

2024-11-06

Dennis Bernardy avatar
Dennis Bernardy

Hey, when using helmfile with atmos it requires having helm_aws_profile_pattern and cluster_name_pattern set. Is there a way to use a name_template like in the stack configuration?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s actually not required and we improved the demo and examples here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This uses k3s

Dennis Bernardy avatar
Dennis Bernardy

But if I want to use it with eks I have to set it, no? Your examples all set use_eks to false

Dennis Bernardy avatar
Dennis Bernardy

Or another question: if I use use_eks: false, how does atmos know which kubernetes to deploy to? Will it use my current context? Can I dynamically change the context in atmos in the stack configuration?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s only if you want the automatic kubeconfig creation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use_eks could probably be better named. It’s true, it’s for when you use EKS, but it’s not required to use EKS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the end, all you need is a kubeconfig. With use_eks set to false, it just means you need to manage the kubeconfig yourself.
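
Assuming the documented atmos.yaml helmfile settings, a sketch of the two modes might look like this (the patterns shown are illustrative):

components:
  helmfile:
    # use_eks: true => Atmos builds the kubeconfig from EKS using these patterns
    # helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
    use_eks: false            # manage the kubeconfig (current context) yourself
    kubeconfig_path: /dev/shm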

1
Hao Wang avatar
Hao Wang

Atmos may need a RAG application for QA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would love a bot that could auto answer with links to threads

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and TL;DR

Hao Wang avatar
Hao Wang

exactly, RAG can do that with its metadata

Hao Wang avatar
Hao Wang

and it needs a custom integration with slack; there should be an existing SaaS service on the market that does something similar

Hao Wang avatar
Hao Wang

I’m looking into RAG recently, and it is not hard to write one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And want to train it on atmos docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and examples

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you also checked out danswer?

Hao Wang avatar
Hao Wang

just heard of it, saw some similar projects, will take a look into it

Hao Wang avatar
Hao Wang

after a quick review, danswer uses langchain and FastAPI; it should be a reliable project to use

Hao Wang avatar
Hao Wang
Customer Support - Danswer Documentation

Help your customer support team instantly answer any question across your entire product.

Hao Wang avatar
Hao Wang

i dove into the project and gave it a test; it seems it is not easy to get it fully up, e.g. I tried with a public #atmos web page, but QA failed. I used a local LLM, so I guess it may work with a public LLM service. side note: the project has a big vision to be a platform, so the code is abstracted very well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In what way did the QA fail?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You had it index the atmos slack channel in the public web archive?

Hao Wang avatar
Hao Wang
api_server-1              | Traceback (most recent call last):
api_server-1              |   File "/app/danswer/chat/process_message.py", line 731, in stream_chat_message_objects
api_server-1              |     for packet in answer.processed_streamed_output:
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 280, in processed_streamed_output
api_server-1              |     for processed_packet in self._get_response([llm_call]):
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 245, in _get_response
api_server-1              |     yield from response_handler_manager.handle_llm_response(stream)
api_server-1              |   File "/app/danswer/llm/answering/llm_response_handler.py", line 69, in handle_llm_response
api_server-1              |     for message in stream:
api_server-1              |   File "/app/danswer/llm/chat_llm.py", line 386, in _stream_implementation
api_server-1              |     for part in response:
api_server-1              |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/ollama.py", line 427, in ollama_completion_stream
api_server-1              |     raise e
api_server-1              |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/ollama.py", line 403, in ollama_completion_stream
api_server-1              |     response_content = "".join(content_chunks)
api_server-1              |                        ^^^^^^^^^^^^^^^^^^^^^^^
api_server-1              | TypeError: sequence item 19: expected str instance, NoneType found
Hao Wang avatar
Hao Wang

I used one of archived page, https://archive.sweetops.com/atmos/2024/08/

SweetOps #atmos for August, 2024

SweetOps Slack archive of #atmos for August, 2024.

Hao Wang avatar
Hao Wang

should be related to ollama python lib

Hao Wang avatar
Hao Wang

litellm’s ollama lib

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, understood.. so it got stuck on even just indexing one page

Hao Wang avatar
Hao Wang

indexing is ok; the issue should be in the answer-retrieval part

Hao Wang avatar
Hao Wang

side note: found this one, https://ollama.com/blog/continue-code-assistant looks useful for code refactoring

An entirely open-source AI code assistant inside your editor · Ollama Blog

An entirely open-source AI code assistant inside your editor

Hao Wang avatar
Hao Wang

Danswer got renamed, and I found another similar project, https://r2r-docs.sciphi.ai/introduction.

Introduction — The most advanced AI retrieval system. Containerized, Retrieval-Augmented Generation (RAG) with a RESTful API.

The most advanced AI retrieval system. Containerized, Retrieval-Augmented Generation (RAG) with a RESTful API.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I find their new marketing materials confusing. I understood Danswer. If I landed on their new site, I would have bounced right away.

https://www.sciphi.ai/

SciPhi

SciPhi Cloud is powered by R2R, the Elasticsearch for RAG. Features include user auth and permissions, hybrid search, advanced RAG, observability and more.

Hao Wang avatar
Hao Wang

this morning I got an email from the R2R team about the sciphi.ai product, but the links inside are not reachable, so I hope I’m not being phished; then I found the above page and their project on GitHub

Ryan avatar

Good morning gents, just checking in here - is this the module usually used to automate remote backend stand-up - https://github.com/cloudposse/terraform-aws-tfstate-backend

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Erik Osterman (Cloud Posse)

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, but in the context of atmos, we use our component

Ryan avatar

ty thats perfect, never got a chance to walk through the process using automation. appreciate the quick response.

Ryan avatar

coming back here, this is cool. sorry, i was stuck in my head on the chicken/egg scenario of the backend, and the cold start stuff helped a lot.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Glad to hear!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Initializing the Terraform State S3 Backend | The Cloud Posse Reference Architecture

Follow these steps to configure and initialize the Terraform state backend using Atmos, ensuring proper setup of the infrastructure components and state management.

Ryan avatar

ooo thank you.

Ryan avatar

I will say I think our internal Atmos is on an older ver

Ryan avatar

I’m kind of at a design decision with regards to a second updated atmos for my new region + new backend, idk yet

Derrick Hammer avatar
Derrick Hammer

Hello, I just found this project and am trying to plan how I’m going to design things. I would like input on how Atmos requires git repos to be structured with respect to monorepo vs multirepo. I intend to create modules in 1 repo and create environments in another. Also curious about submitting to the terraform/OpenTofu registry, and I think they require 1 repo per module or something. Would appreciate input from others with experience!

Kudos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey @Derrick Hammer great to hear you’re checking it out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an example mono repo structured here: https://github.com/cloudposse-examples/infra-demo-atmos-pro/tree/main

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I intend to create modules in 1 repo and create environments in another
Perfect, checkout vendoring: https://atmos.tools/core-concepts/vendor/

Vendoring | atmos

Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: demo-vendoring
  description: Atmos vendoring manifest for Atmos demo component library
spec:
  # Import other vendor manifests, if necessary
  imports: []

  sources:
    - component: "github/stargazers"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      included_paths:
        - "**/*.tf"
        - "**/*.tfvars"
        - "**/*.md"
      tags:
        - demo
        - github

    - component: "weather"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      tags:
        - demo

    - component: "ipinfo"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      tags:
        - demo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also provide a well-maintained reference architecture for AWS (which is how Cloud Posse makes money). The docs are public here: https://docs.cloudposse.com

The Cloud Posse Reference Architecture

The turnkey architecture for AWS, Datadog & GitHub Actions to get up and running quickly using the Atmos open source framework.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, we also have some design patterns that might be helpful.

https://atmos.tools/design-patterns/organizational-structure-configuration

Organizational Structure Configuration | atmos

Organizational Structure Configuration Atmos Design Pattern

2024-11-07

jose.amengual avatar
jose.amengual

Hello, does atmos describe affected always output the components that changed, or is that a somewhat recent change?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure the question

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The whole point is to output the components and stacks that changed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(so it’s always done that)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It also conveys what triggered the change as some metadata in the JSON

jose.amengual avatar
jose.amengual

ok, for some weird reason I thought it was only yaml updates to stacks

jose.amengual avatar
jose.amengual

it has been a coincidence that I have always done stack.yaml changes together with component changes

jose.amengual avatar
jose.amengual

and since this is the first time I’m using describe affected in a pipeline, I just realized yesterday that a component change will show as a change

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command checks a lot of things: atmos stack for the component, terraform code in the component folder, and terraform modules in any other folder that the current component uses in terraform (atmos detects it by using terraform metadata about the component)

jose.amengual avatar
jose.amengual

ahhh interesting

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, it checks the depends_on attributes in the stacks: if a component depends on an external file or folder, and the file or folder changes, the component will be considered affected
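
A sketch of that settings.depends_on usage (the paths and component name are hypothetical):

components:
  terraform:
    top-level-component:
      settings:
        depends_on:
          1:
            file: "components/terraform/mixins/context.tf"   # a change to this file marks the component affected
          2:
            folder: "bin/scripts"                            # likewise for anything in this folder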

1
github3 avatar
github3
08:52:08 PM

Add .coderabbit.yaml for CodeRabbit integration configuration settings @osterman (#758)

what

• Add CodeRabbit config
• Tune the prompt
• Enable linear integration

why

• Want to work towards a config that is less noisy (although this is probably not the PR that solves that)

Enhancements

handle invalid command error @pkbhowmick (#766)

what

• Improved error handling for command arguments, providing clearer feedback when invalid commands are used • Enhanced logging to include a list of available commands when an error occurs due to invalid arguments.

why

• Better user experience

working example

Before:

Screenshot 2024-11-08 at 1 56 30 AM

After fix:

Screenshot 2024-11-08 at 1 57 12 AM

2024-11-08

github3 avatar
github3
02:06:28 PM

Skip component if metadata.enabled is set to false @pkbhowmick (#756)

what

• Skip component if metadata.enabled is set to false • Added documentation on using the metadata.enabled parameter to conditionally exclude components in deployment

why

• Allow disabling Atmos components from being processed and provisioned by setting metadata.enabled to false in the stack manifest w/o affecting/changing/disabling the Terraform components (e.g. w/o setting the enabled variable to false)

demo

image
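
A minimal sketch of the new flag (the component name is hypothetical); note it is separate from the Terraform enabled variable:

components:
  terraform:
    vpc:
      metadata:
        enabled: false   # Atmos skips processing/provisioning this component
      vars:
        enabled: true    # the Terraform variable is left untouched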

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) if we set metadata.enabled to false, will it also be ignored on CI/CD? or do we also have to set settings.github.actions_enabled to false?

jose.amengual avatar
jose.amengual

related to the question: will describe affected show it as a change?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess the second question relates to the first one

@Pulak Kanti Bhowmick can you please confirm what atmos describe affected will return if a component is disabled using metadata.enabled: false

jose.amengual avatar
jose.amengual

if I add anything to metadata, like sdfsfdsafdsdf: ff, it shows in the atmos describe affected output

jose.amengual avatar
jose.amengual

using atmos latest

Pulak Kanti Bhowmick avatar
Pulak Kanti Bhowmick

Hi @Andriy Knysh (Cloud Posse), let me check and get back here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should be affected because it’s changed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s up to the caller to decide what to do with the disabled component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual I believe you were asking for this functionality (metadata.enabled) ^

1
1

2024-11-10

github3 avatar
github3
08:23:36 PM

Wrapper for long lines in help @Cerebrovinny (#770)

what

• Implemented a new terminal-aware text wrapping system for CLI help output
• Added responsive width handling based on terminal size with fallback values
• Introduced custom usage template handling for consistent help text formatting
• Created dedicated terminal writer component for automatic text wrapping

why

• Improves readability of CLI help text by ensuring content fits within terminal width
• Provides better user experience with dynamic text wrapping based on terminal size
• Standardizes help text formatting across all commands
• Fixes potential issues with text overflow in narrow terminal windows

references

Before:
Screenshot 2024-11-09 at 16 15 26
Screenshot 2024-11-09 at 16 31 54

After:
Screenshot 2024-11-09 at 18 19 56

1
NotWillFarrell avatar
NotWillFarrell

Hi, I’ve been reading up on your documentation on Atmos and the reference architecture for the past 1 or 2 weeks and some things are not clicking in my head. I hope you can give me some pointers in the right direction because it is not easy to find this on the Internet.

For AWS, I see references in the mixins to regions, tenants and stages, but the sample info only gives me a name, and I’m not seeing how it relates to, let’s say, an Account ID. This, for example:

vars:
  stage: sandbox

# Other defaults for the `sandbox` stage/account

Am I overlooking some documentation part where I can see what can be a default for OUs/Accounts? There should be some relationship to AWS terminology right?

I hope you can give me a hint. Thanks in advance!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For additional context, do you mean these docs:

docs.cloudposse.com (cloud posse’s reference architecture)

Or https://atmos.tools?

That will help me address any confusion.

NotWillFarrell avatar
NotWillFarrell

Hi, there I also don’t see the relationship made between, let’s say, a stage and the account ID.

It’s one of those pieces that would make Atmos click in my head. Architecture-wise I can relate Catalogs/mixins/stacks etc., but somewhere down the line you can’t apply to a ‘sandbox’ unless that ‘sandbox’ has an account id…right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So just to disambiguate some things, in cloudposse’s refarch (docs.cloudposse.com), we by convention tie a stage (dev, staging, production) to an AWS Account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our refarch, we have something called the account-map https://docs.cloudposse.com/components/library/aws/account-map/

account-map | The Cloud Posse Reference Architecture

This component is responsible for provisioning information only: it simply populates Terraform state with data (account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what handles that mapping

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what allows us to refer to everything by name, instead of thinking in account IDs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But it’s also what can make using the Cloud Posse refarch harder with brownfield environments https://atmos.tools/core-concepts/components/terraform/brownfield/

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All that said, I want to point out that these are not Atmos conventions, these are Cloud Posse conventions in our reference architecture for AWS that uses Atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The account-map returns an IAM role that is used to access a given account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the IAM role will have the AWS account ID information

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(that’s the missing gap, I believe in the understanding)

NotWillFarrell avatar
NotWillFarrell

Thanks for your answers.

I saw on GH a file (see below) and it gives me some direction. But maybe it’s me, and then I think “what are all the possible defaults?”

ah..that last one.

# Global variables used for account maps, role maps and other global values
vars:
  account_map:
    dev: 222222222222
    staging: 333333333333
    automation: 111111111111
    prod: 444444444444
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, good question. I’ll answer that in a sec.

(Also, in a future release of our refarch, we intend to move away from the account-map convention to make it easier to use in brownfield environments)

NotWillFarrell avatar
NotWillFarrell

Ah…that’s where my head indeed is: How on earth would I use this in brownfield situations? We have a lot of those; Customers that tried first themselves and then start looking for a partner.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Understandable… what we have as our refarch was born from 99% of our engagements, which are what we call “cold starts”, so brownfield considerations were seldom a concern. As we now reach a much wider audience, that’s coming up more and more often.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re doing some groundwork changes first, related to https://github.com/cloudposse/terraform-aws-components/issues/1177

#1177 :loudspeaker: Upcoming Migration of Components to a New GitHub Organization (CODE FREEZE 11/12 - 11/17)

Hello, Cloud Posse Community!

We’re excited to announce that starting on November 12, 2024, we will begin migrating each component in the cloudposse/terraform-aws-components repository to individual repositories under a new GitHub organization. This change aims to improve the stability, maintainability, and usability of our components.

Why This Migration?

Our goal is to make each component easier to use, contribute to, and maintain. This migration will allow us to:

• Leverage terratest automation for better testing.
• Implement semantic versioning to clearly communicate updates and breaking changes.
• Improve PR review times and accelerate community contributions.
• Enable Dependabot automation for dependency management.
• And much more!

What to Expect Starting November 12, 2024

• Migration Timeline: The migration will begin on November 12 and is anticipated to finish by the end of the following week.
• Code Freeze: Starting on November 12, this repository will be set to read-only mode, marking the beginning of a code freeze. No new pull requests or issues will be accepted here after that date.
• New Contribution Workflow: After the migration, all contributions should be directed to the new individual component repositories.
• Updated Documentation: To support this transition, we are updating our documentation and cloudposse-component updater.
• Future Archiving: In approximately six months, we plan to archive this repository and transfer it to the cloudposse-archives organization.

Frequently Asked Questions

Does this affect Terraform modules? No, only the terraform-aws-components repository is affected. Our Terraform modules will remain where they are.


We are committed to making this transition as seamless as possible. If you have any questions or concerns, please feel free to post them in this issue. Your feedback is important to us, and we appreciate your support as we embark on this new chapter!

Thank you,
The Cloud Posse Team

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And once we have that in place, we can start making more breaking changes to our components to support brownfield - while keeping our ecosystem stable.

NotWillFarrell avatar
NotWillFarrell

Sounds really promising! I’m going to do some dishes over here (CET) and then just start with this afterwards.

Thanks for all your time!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) where do we have a full definition of the static backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t think we have a full definition anywhere. In part, that is because it is too simple. See the examples here. You just set the remote_state_backend_type to static and set the outputs as a map under remote_state_backend.static. Although the example shows setting backend_type: static, that is a bit misleading because you cannot run terraform plan etc. with a static backend.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, what about a static account-map? @NotWillFarrell is working in a brownfield

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

My recommendation for brownfield is to use a static backend for account to create the list of accounts, and use the real account-map. Account map is an information collector and processor; it does not manage any cloud resources. If account-map contains information for accounts you are not using, that should not cause problems.

The example @NotWillFarrell cited:

# Global variables used for account maps, role maps and other global values
vars:
  account_map:
    dev: 222222222222
    staging: 333333333333
    automation: 111111111111
    prod: 444444444444

is from before account-map got that information from account. Convert that to a static backend for account and I think account-map will work fine. If not, we should fix account-map so that it does.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example of the static backend for account

components:
  terraform:
    account:
      backend:
        s3:
          role_arn: null
      vars:
        enabled: false
      # Use `static` remote state to configure the attributes (outputs) for the existing organization, OUs and accounts
      remote_state_backend_type: static
      remote_state_backend:
        static:
          account_arns:
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
          account_ids:
            - "xxxxxxxxxxx"
            - "xxxxxxxxxxx"
            - "xxxxxxxxxxx"
          account_info_map: {}
          account_names_account_arns: {}
          account_names_account_ids: {}
          organization_arn: "arn:aws:organizations::xxxxxxxxxxx:organization/o-xxxxxxxxxxx"
          organization_id: "o-xxxxxxxxxxx"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all the outputs returned from the account component https://github.com/cloudposse/terraform-aws-components/blob/main/modules/account/outputs.tf. can be added to the static backend and then used in other components as if they were returned from the account remote state

output "account_arns" {
  value       = local.all_account_arns
  description = "List of account ARNs (excluding root account)"
}

output "account_ids" {
  value       = local.all_account_ids
  description = "List of account IDs (excluding root account)"
}

output "organizational_unit_arns" {
  value       = local.organizational_unit_arns
  description = "List of Organizational Unit ARNs"
}

output "organizational_unit_ids" {
  value       = local.organizational_unit_ids
  description = "List of Organizational Unit IDs"
}

output "account_info_map" {
  value       = local.account_info_map
  description = <<-EOT
    Map of account names to
      eks: boolean, account hosts at least one EKS cluster
      id: account id (number)
      stage: (optional) the account "stage"
      tenant: (optional) the account "tenant"
    EOT
}

output "account_names_account_arns" {
  value       = local.account_names_account_arns
  description = "Map of account names to account ARNs (excluding root account)"
}

output "account_names_account_ids" {
  value       = local.account_names_account_ids
  description = "Map of account names to account IDs (excluding root account)"
}

output "organizational_unit_names_organizational_unit_arns" {
  value       = local.organizational_unit_names_organizational_unit_arns
  description = "Map of Organizational Unit names to Organizational Unit ARNs"
}

output "organizational_unit_names_organizational_unit_ids" {
  value       = local.organizational_unit_names_organizational_unit_ids
  description = "Map of Organizational Unit names to Organizational Unit IDs"
}

output "organization_id" {
  value       = local.organization_id
  description = "Organization ID"
}

output "organization_arn" {
  value       = local.organization_arn
  description = "Organization ARN"
}

output "organization_master_account_id" {
  value       = local.organization_master_account_id
  description = "Organization master account ID"
}

output "organization_master_account_arn" {
  value       = local.organization_master_account_arn
  description = "Organization master account ARN"
}

output "organization_master_account_email" {
  value       = local.organization_master_account_email
  description = "Organization master account email"
}

output "eks_accounts" {
  value       = local.eks_account_names
  description = "List of EKS accounts"
}

output "non_eks_accounts" {
  value       = local.non_eks_account_names
  description = "List of non EKS accounts"
}

output "organization_scp_id" {
  value       = join("", module.organization_service_control_policies.*.organizations_policy_id)
  description = "Organization Service Control Policy ID"
}

output "organization_scp_arn" {
  value       = join("", module.organization_service_control_policies.*.organizations_policy_arn)
  description = "Organization Service Control Policy ARN"
}

output "account_names_account_scp_ids" {
  value       = local.account_names_account_scp_ids
  description = "Map of account names to SCP IDs for accounts with SCPs"
}

output "account_names_account_scp_arns" {
  value       = local.account_names_account_scp_arns
  description = "Map of account names to SCP ARNs for accounts with SCPs"
}

output "organizational_unit_names_organizational_unit_scp_ids" {
  value       = local.organizational_unit_names_organizational_unit_scp_ids
  description = "Map of OU names to SCP IDs"
}

output "organizational_unit_names_organizational_unit_scp_arns" {
  value       = local.organizational_unit_names_organizational_unit_scp_arns
  description = "Map of OU names to SCP ARNs"
}

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The only outputs of account used by account-map are:
• eks_accounts
• non_eks_accounts
• account_info_map
The first 2 are just lists of account names, indicating which accounts are expected to have EKS deployments and which are not. Every account should be in one of those lists.

account_info_map is a map of account name to various pieces of information. In the real account output, there is more information, but I think you only need to fill in:
• eks: boolean, account hosts at least one EKS cluster
• id: account id (number)
• stage: the account “stage”
• tenant: (optional) the account “tenant”, or null if you are not using tenant
Example:

account_info_map:
  artifacts:
    eks: false
    id: "123456789012"
    stage: "artifacts"
    tenant: null
  dev:
    eks: true
    id: "210987654321"
    stage: "dev"
    tenant: null

The other outputs of account are used by components that manage the accounts and organizations, but you can skip all that if you already have that set up.

cc: @Erik Osterman (Cloud Posse)

2024-11-11

github3 avatar
github3
05:25:12 PM

feat: additional atmos docs parameters for specifying width, using auto-styling and color profile, and preserving new lines @RoseSecurity (#757)

what

atmos_docs

• Add an additional atmos docs flag for specifying the width of markdown output
• Utilizing auto-styling based on light or dark mode preferences instead of hardcoding to dark
• Preserving new lines with rendered markdown

why

• Enhance the user experience for interacting with documentation. The width parameter is useful for users who prefer seeing wider output for Terraform docs-generated tables and is defined in the atmos.yaml:

settings:
  docs:
    max-width: 200

references

glow docs

1
github3 avatar
github3
05:51:44 PM

Change PS1 to show that Atmos is in the atmos terraform shell mode @pkbhowmick (#761)

what

• Change PS1 to show that Atmos is in the atmos terraform shell mode
• Customized command prompt for the interactive shell with the addition of the “atmos>” prefix
• Enhanced shell behavior by removing the unnecessary -l flag for non-Windows systems and implementing a fallback to sh if bash is unavailable
• Improved handling for the /bin/zsh shell with additional flags

why

• Improve user experience

test

image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB this closes something you asked for a long time ago

Change PS1 to show that Atmos is in the atmos terraform shell mode @pkbhowmick (#761)

what

• Change PS1 to show that Atmos is in the atmos terraform shell mode • Customized command prompt for the interactive shell with the addition of the “atmos>” prefix • Enhanced shell behavior by removing the unnecessary -l flag for non-Windows systems and implementing a fallback to sh if bash is unavailable. • Improved handling for the /bin/zsh shell with additional flags

why

• Improve user experience

test

image

RB avatar

I saw that! Thank you very much

2024-11-12

tretinha avatar
tretinha

does anybody have thoughts on how to manage ECS image tags? we are used to creating different image tags whenever something is ready to be tested or to go to production on each project’s pipeline. So let’s say I have an application that just generated a new docker tag corresponding to some new changes, and that this tag is now saved in ECR, how can I reflect this image tag in my atmos/infrastructure repository? at first I thought about the app opening a PR to atmos, changing the line that corresponds to the image, but I’m unsure if this is the best way. How do you typically deal with this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our ECS components solve this using SSM parameters

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
ECS with ecspresso | The Cloud Posse Reference Architecture

We use the ecspresso deployment tool for Amazon ECS to manage ECS services using a code-driven approach, alongside reusable GitHub Action workflows. This setup allows tasks to be defined with Terraform within the infrastructure repository, and task definitions to reside alongside the application code. Ecspresso provides extensive configuration options via YAML, JSON, and Jsonnet, and includes plugins for enhanced functionality such as Terraform state lookups.

tretinha avatar
tretinha

I’ll take a look. Thank you!

github3 avatar
github3
01:46:02 PM

Clean Terraform workspace before executing terraform init in the atmos.Component template function @aknysh (#775)

what

• Clean Terraform workspace before executing terraform init in the atmos.Component template function

why

When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:

  1. default

The prompt forces the user to always make a selection (which is error-prone), and also makes it complicated when running on CI/CD.

This PR adds the logic that deletes the .terraform/environment file from the component directory before executing terraform init when executing the atmos.Component template function. It allows executing the atmos.Component function for a component in different Terraform workspaces without Terraform asking to select a workspace. The .terraform/environment file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure.

party_parrot1
1
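
For reference, the effect described above is roughly equivalent to the following manual steps (a sketch only; the component path is illustrative and the real logic lives inside the template function):

# remove the cached workspace selection so Terraform won't prompt for one
rm -f components/terraform/<component>/.terraform/environment
terraform init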
Stephan Helas avatar
Stephan Helas

Hi,

I've found that the validate output differs between 1.98 and 1.99 (and ever since). I don't know if I am doing anything wrong or if it's a bug:

old behavior (1.98.0)

❯ atmos validate component wms-base -s wms-xe02-sandbox
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully

new behavior (1.99.0)

❯ atmos validate component wms-base -s wms-xe02-sandbox
'atmos' supports native ' wms-base' command with all the options, arguments and flags.

In addition, 'component' and 'stack' are required in order to generate variables for the component in the stack.

atmos  wms-base <component> -s <stack> [options]
atmos  wms-base <component> --stack <stack> [options]
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a regression from the many new PRs, we’ll fix it, thank you @Stephan Helas

2024-11-13

RB avatar

Is there already prior art using github actions and atmos to use a readonly role for the plan and an admin role for the apply?

RB avatar

That way if the plan was compromised without the ability to apply, no one could do any funny business with a local exec data source

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This relates to the recent work by @jose.amengual

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual where did we leave off with this?

jose.amengual avatar
jose.amengual

Waiting for @Andriy Knysh (Cloud Posse) to have some time to go over the failing tests on my PR

jose.amengual avatar
jose.amengual

in the convo we have in the other slack

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Andriy should have some time now to look

1
1
1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Quick update: @Igor Rodionov will review the GH action PR (prob tomorrow), and @Andriy Knysh (Cloud Posse) will check what @jose.amengual said about Atmos (not related to the GH action)

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
From the Terraform community on Reddit

Explore this post and more from the Terraform community

1
github3 avatar
github3
08:57:01 PM

Add support for vendor path setting in atmos.yaml @Cerebrovinny (#737)

what

• Add support for vendor path setting in atmos.yaml • Add support for vendor files under folders or multiple vendor files to be processed in lexicographic order

why

• Users should now be able to use the new vendor setting in atmos.yaml and process different vendor files at different locations

2024-11-14

toka avatar

I need to share my local submodules/child modules with every component that I will define in atmos. I'd like to move to atmos, but I have a codebase that I need to migrate to atmos with many modules. At this point I cannot afford to rewrite each small module into a component, but I'd like to move my root modules into atmos components as a starting point. I'd like to build my components out of the existing modules. Any advice on how to approach this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@toka please show the current file system layout

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use your TF modules as components (there is nothing special about components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you put your modules into components/terraform and define Atmos stacks in stacks, it should work. Atmos will generate the backend and varfile for the modules and execute Terraform
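
As a rough sketch of that layout (the directory, the myapp component name, and its variables are all hypothetical), an existing root module copied to components/terraform/myapp can be referenced from a stack manifest like this:

# stacks/deploy/dev.yaml (hypothetical path and names)
components:
  terraform:
    myapp:          # existing Terraform root module at components/terraform/myapp
      vars:
        region: us-east-1

Atmos then generates the backend config and varfile for that component and runs Terraform against it, with no changes to the module code itself.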

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, “components” are really just a philosophy. You don’t need to rewrite anything.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…it won’t be worse off than it is now, but it won’t benefit from the way of thinking about architectures as made up of components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, components typically will comprise a solution, so they shouldn’t be as small as a small module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Thinking Like Atmos | atmos

Atmos can change how you think about the Terraform code you write to build your infrastructure.

Component Best Practices | atmos

Learn the opinionated “Best Practices” for using Components with Atmos

2024-11-15

RB avatar

Hi all, if you folks have a second, i have a couple questions on the component migration. Very excited for the component testing

https://github.com/cloudposse/terraform-aws-components/issues/1177#issuecomment-2474148290

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah thanks, I will respond later today

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Comment on #1177 :loudspeaker: Migration of Components to a New GitHub Organization (CODE FREEZE 11/12 - 11/17)

@nitrocode thanks for raising these questions.

  1. Will all the same components be available after the migration?

• Yes, we’re migrating the components as is, bit-for-bit, to facilitate a switch. However, we anticipate promptly doing major releases on many of the components after that point, to introduce new functionality and improve brownfield infra (a large driver for the initiative). Those breaking changes will likely require local changes in your configurations. That won’t happen immediately, but is the reason we’re doing all this.

  2. Will components still be open source?

Yes, we have no plans to change license.

  3. What will the new organization be named?
    Looked through the source code of the component updater and saw this https://github.com/cloudposse-terraform-components

Correct, you found it!

https://github.com/cloudposse-terraform-components

As part of our transition to GitHub Enterprise (GHE), we are reorganizing our open-source projects into more purpose-built organizations. This allows us to better manage repository rulesets, GitHub Apps, and other configurations specific to each organization’s purpose. This approach enhances our security posture and improves discoverability. Additionally, keeping our components separate from less opinionated child modules avoids confusion and ensures clearer organization.

  4. Will the component updater be updated to allow overriding the above org to use different sources if needed?

The component updater uses the sources as defined in the vendor and component manifests. Thus, that's supported today.

One thing we've added to the component updater to make this switch less painful is the ability for it to rewrite the sources to their new homes. So if it sees references to components in cloudposse/terraform-aws-components, it will rewrite those to the new locations.

  5. Would you folks consider opening up the codeowners for components once they are all in their own repositories like you folks do with terraform modules ?

Yes, so we’ll be able to accept more contributions of components and delegate ownership of components with this move. Note, CODEOWNERS only works with paid GitHub seats, so I think we’ll continue to look for solutions that work better for non-org members, such as “allow lists” that we’ve implemented elsewhere.

Kalman Speier avatar
Kalman Speier

hey folks, is there a way to generate kubernetes provider blocks for different cloud providers? scenario: stack-1 is ecs, stack-2 is gke, and i’d like to use the same components for kubernetes resources.

i can output host, cert and token from the cluster components, however i’d like to configure kubernetes provider with oauth2 access token. using google_client_config data source for example.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you familiar with the atmos provider generation?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Providers | atmos

Configure and override Terraform Providers.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kalman Speier the question is, how do you handle the auth token. In terraform, you can get it from a secret storage (SSM/ASM, GCP vault, etc.), and send it as an input to the other resources/modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the token should def not be hardcoded in Atmos stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I guess you are asking about how to do it in Terraform (get the token from data sources and then use it as input to other modules, per cloud)

Kalman Speier avatar
Kalman Speier

what i’d like to achieve is simply like this: https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/gke/main.tf

so, the kubernetes provider is configured outside of the component which is responsible to deploy k8s resources.

Kalman Speier avatar
Kalman Speier

if the cluster outputs the token and I set it as a variable to my k8s component, that works fine; however these tokens expire, hence I'd like to use a provider-specific data source, but I couldn't place that in the component obviously.

Kalman Speier avatar
Kalman Speier
  1. create a cluster on gke for example

  2. run cloud specific kubernetes provider configuration:

data "google_client_config" "default" {}

provider "kubernetes" {
  token = data.google_client_config.default.access_token
  ...
}

  3. deploy my k8s resources using the configured provider, without the component knowing which cloud provider is in use

the #2 needs to run each time #3 is running, because outputs from state aren't working, as the tokens expire.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would put all those providers and the data sources in separate files per cloud (one for AWS/ECS, one for GKE). Then add a variable defining the cloud:

variable "cloud" {
   description = "Cloud provider"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in each provider "kubernetes", use count depending on the cloud variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in main.tf, use https://developer.hashicorp.com/terraform/language/functions/coalesce to read the token from all the data sources

coalesce - Functions - Configuration Language | Terraform | HashiCorp Developer

The coalesce function takes any number of arguments and returns the first one that isn’t null nor empty.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

similar to

token = coalesce(data.google_client_config.default.xxxx, data.xxxx.xxxx.xxxx)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

too bad that the provider block still does not support count and for_each (people have been asking for that for many years). So you will need to make sure your TF code has access to all clouds at once for the provider blocks to work and not throw errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://support.hashicorp.com/hc/en-us/articles/6304194229267-Using-count-or-for-each-in-Provider-Configuration#:~:text=While%20a%20longtime%20requested%20feature,provider%20configuration%20block%20in%20Terraform

Using count or for_each in Provider Configuration

  Current Status While a longtime requested feature in Terraform, it is not possible to use count or for_each in the provider configuration block in Terraform.   Background Much of the reasoning be…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would create a common TF module to deal with the clusters, then a few root modules (components) for the specific clouds

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the common module would accept the token as a variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the parent/root modules would read the token using the corresponding cloud provider and provide it in the variable to the child module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and if you are using Atmos, it’s easy to configure your components pointing to the diff root modules per cloud

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    ecs-component:
      metadata:
        component: ecs/xxxxx # Point to the Terraform component (root module)
    gke-component:
      metadata:
        component: gke/xxxxx # Point to the Terraform component (root module)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ecs/xxxxx terraform component uses a data source to read the token from SSM/ASM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

gke/xxxxx terraform component uses data "google_client_config" "default" {} to read the token from GCP

Kalman Speier avatar
Kalman Speier

ok, thanks a lot, i will think about these.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the main point is to create a common TF child module (with almost all the code except the code to read the token from the data sources), then reuse it in the root modules using diff providers
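
A minimal sketch of that shape, assuming a GKE-specific root module that configures the kubernetes provider from google_client_config and calls a shared child module holding the Kubernetes resources (all paths, module names, and variables here are illustrative):

# components/terraform/gke/k8s-resources/main.tf (illustrative)
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = var.cluster_host
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  token                  = data.google_client_config.default.access_token
}

# The shared child module contains the Kubernetes resources and is reused
# by the ECS/EKS and GKE root modules; only the token/provider wiring differs.
module "k8s_resources" {
  source = "../../../modules/k8s-resources"
  # ... module-specific inputs ...
}

An ECS/EKS-flavored root module would look the same, except it reads the token from SSM/ASM instead of google_client_config.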

Miguel Zablah avatar
Miguel Zablah

Hey guys I was looking for a way to read secrets from 1password and saw this PR: https://github.com/cloudposse/atmos/pull/762 What are the plans for this?

This would actually solve a lot of issues and simplify my work on some projects hehe

#762 Add support to load config values and secrets from external sources

what

Integrate vals as a template function.

why

Loading configuration values and secrets from external sources, supporting various backends.

Summary by CodeRabbit

New Features
• Introduced the atmos.Vals template function for loading configuration values and secrets from external sources.
• Added a logging mechanism for improved tracking of value operations.

Updates
• Updated various dependencies to newer versions, enhancing compatibility with cloud services and improving overall performance.

Documentation
• Added comprehensive documentation for the atmos.Vals template function, including usage examples and security best practices.

1
Miguel Zablah avatar
Miguel Zablah

@Erik Osterman (Cloud Posse) any updates on this?

#762 Add support to load config values and secrets from external sources

what

Integrate vals as a template function.

why

Loading configuration values and secrets from external sources, supporting various backends.

Summary by CodeRabbit

New Features
• Introduced the atmos.Vals template function for loading configuration values and secrets from external sources.
• Added a logging mechanism for improved tracking of value operations.

Updates
• Updated various dependencies to newer versions, enhancing compatibility with cloud services and improving overall performance.

Documentation
• Added comprehensive documentation for the atmos.Vals template function, including usage examples and security best practices.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our plan is leaning more towards implementing a pluggable way of storing/retrieving values from directly within Atmos, but supporting many of these same backends.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll have a more conclusive answer by the end of this week.

Miguel Zablah avatar
Miguel Zablah

oh nice thanks!

1
github3 avatar
github3
07:25:33 PM

Update gettings started, add $schema directive at the top of files @osterman (#769)

what

• Remove unimplemented commands • $schema directive at the top of files

why

• Not everyone will have $schema validation enabled by default in their editor

Enhance WriteToFileAsJSON with pretty-printing support @RoseSecurity (#783)

what

• Used the ConvertToJSON utility with json.MarshalIndent to produce formatted JSON • Indentation is set to two spaces (“ “) for consistent readability

why

• This PR improves the WriteToFileAsJSON function by introducing pretty-printing for JSON outputs. Previously, the function serialized JSON using a compact format, which could make the resulting files harder to read. With this change, all JSON written by this function will now be formatted with indentation, making it easier for developers and users to inspect and debug the generated files • This specifically addresses #778 , which previously rendered auto-generated backends as:

{ "terraform": { "backend": { "s3": { "acl": "bucket-owner-full-control", "bucket": "my-tfstate-bucket", "dynamodb_table": "some-dynamo-table", "encrypt": true, "key": "terraform.tfstate", "profile": "main", "region": "us-west-2", "workspace_key_prefix": "something" } } } }

With this addition, the output appears as:

{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "my-tfstate-bucket",
        "dynamodb_table": "some-dynamo-table",
        "encrypt": true,
        "key": "terraform.tfstate",
        "profile": "main",
        "region": "us-west-2",
        "workspace_key_prefix": "something"
      }
    }
  }
}

references

Stack Overflow • Closes #778

2024-11-16

github3 avatar
github3
09:00:39 PM

Add support for custom atmos terraform shell prompt @pkbhowmick (#786)

what

• Add support for custom atmos terraform shell prompt • Allow specifying custom prompt for atmos terraform shell command in atmos.yaml. Supports Go templates

why

• Improve user experience • Make the prompt customizable

Working demo

With custom prompt:

Screenshot 2024-11-16 at 11 20 14 PM

Without custom prompt:

image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Michael this should restore the behavior in #geodesic

Add support for custom atmos terraform shell prompt @pkbhowmick (#786)

what

• Add support for custom atmos terraform shell prompt • Allow specifying custom prompt for atmos terraform shell command in atmos.yaml. Supports Go templates

why

• Improve user experience • Make the prompt customizable

Working demo

With custom prompt:

Screenshot 2024-11-16 at 11 20 14 PM

Without custom prompt:

image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We disable the prompt formatting by default, and instead allow it to be customized in atmos.yaml

Michael avatar
Michael

Awesome stuff, thank you for such a quick turnaround on it! Excited to try it out

2024-11-18

shirkevich avatar
shirkevich

Hey guys, thanks for the awesome project! Trying to recreate a multi-workspace project in terraform.io with atmos.

My use case is provisioning pretty much the same infra for multiple tenants. Need your advice on how to properly organise variables.

Each component in tenant share a list of variables like project_id and region which I put to mixin with the same name as tenant.

Then for each component I’m passing project_number with atmos.Component (tenants are named as pokemons):

deploy/bulbasaur-stg.yaml

vars:
  tenant: bulbasaur
  stage: stg

import:
  - path: "deploy/_defaults.yaml.tmpl"
    context:
      stack: bulbasaur

components:
  terraform:
    tenant:
      vars:
        foo: bar

    db:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}' <-- this I also want to DRY somehow

    cloudrun:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'

    jobs:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'

All good for now, then for cloudrun and jobs components I need same list of ENV variables that are used to provision docker image. The problem here is that I want to use previously defined project_id and pokemon_name in templating of those envs…

I tried to create mixin and thought that it can be templated like that:

mixins/tenants/bulbasaur-stg.yaml

vars:
  region: europe-west3
  env_vars:
    TENANT: '{{ .vars.tenant }}'
    DATABASE_USER: 'user@{{ .vars.project_id }}.iam'
    BIGQUERY_PROJECTID: '{{ .vars.project_id }}'
    ...

deploy/_defaults.yaml.tmpl

import:
  - mixins/tenants/{{ .stack }}

terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "tf-state"

It is not working, giving me <no-value> for TENANT. Clearly I'm doing it wrong. Should I create a component that just outputs env_vars instead and pass it to cloudrun and jobs?

P.S. I have name_pattern: "{tenant}-{stage}" in atmos.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve not yet had a chance to read through the entire message

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, you will likely need to commit the varfiles for it to work with TFC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, dynamic backend generation will not work well, if you use multiple backends for the same component (E.g. by region)

RB avatar

Opentofu is considering deprecating workspaces

https://github.com/opentofu/opentofu/issues/2160

Is it possible to use atmos without workspaces, and use unique keys per stack instead of unique workspaces per stack?

Junk avatar

Currently, Atmos relies on the concept of workspaces for managing unique configurations and state per stack. However, with the introduction of OpenTofu’s Early Evaluation feature, there is potential to move away from workspaces and instead use unique keys per stack to manage state more flexibly.

While this functionality is not natively supported yet, I believe it could be feasible to implement. By leveraging Early Evaluation, we could dynamically configure the backend state storage, using variables to differentiate each stack’s state, rather than depending on separate workspaces. This approach would allow us to specify unique keys based on the stack name or environment and ensure proper isolation of state per stack.

In essence, although Atmos doesn’t currently support a workspace-less setup, utilizing unique keys per stack with Early Evaluation could be a similar concept and an effective alternative worth exploring.

Therefore, I don’t anticipate that the CloudPosse team will find it impossible to adapt to the deprecation of workspaces. They are an incredibly talented group, and I’m confident in their ability to develop a robust solution or an alternative approach. With their expertise, I believe they will be able to leverage features like Early Evaluation effectively to maintain or even improve the functionality of Atmos without relying on workspaces.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Actually, I think it’s already supported, depending on your backend. With S3, it’s just a different path. Since the backends are entirely configurable in Atmos, I think it’s just about configuring the right path to match the workspace path, and dropping the workspace parameter.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We might need to add a parameter in atmos to disable workspace operations, but the lift on that is trivial.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We talk about the file structure of the S3 backend here https://docs.cloudposse.com/layers/accounts/tutorials/terraform-s3-state/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So with that it should be as simple as updating the key to the fully qualified path to the workspace tfstate file

https://developer.hashicorp.com/terraform/language/backend/s3

Backend Type: s3 | Terraform | HashiCorp Developer

Terraform can store state remotely in S3 and lock that state with DynamoDB.

RB avatar

Thanks, I'll review!

Yes i think the workspace option would be needed.

You’re right, all the other stuff is there to override the backend key per component using some yaml magic (use stack name as the unique key, as you said)
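
For reference, the per-component backend override that makes this possible already exists in Atmos stack manifests; a minimal sketch (bucket and key values are hypothetical) could look like:

components:
  terraform:
    vpc:
      backend:
        s3:
          bucket: my-tfstate-bucket
          key: "plat-ue1-dev/vpc/terraform.tfstate" # unique key per stack instead of a workspace

The separate option to skip workspace operations discussed above would still be needed on top of this.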

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Yes i think the workspace option would be needed.
Just to confirm, an option to disable usage of workspace operations?

1
github3 avatar
github3
05:54:47 AM

:rocket: Enhancements

Handle empty stack YAML file configurations @haitham911 (#791)

what

• Handle empty stack YAML file configurations

why

atmos validate stacks should not error on empty stack manifest files

2024-11-19

Junk avatar

For root modules that do not use the terraform-null-label module (e.g., modules from terraform-aws-modules instead of CloudPosse), I find it challenging to maintain a consistent naming convention for resources. Specifically, I use a mix of CloudPosse-provided modules and other third-party modules as needed, but ensuring uniformity in the naming and tagging of provisioned resources (not just the stack’s name_pattern, but the actual resource names) is difficult.

I’ve tried using the Component Vendor’s Mixin feature to blend context.tf, but it proved to be inconvenient.

Does anyone have ideas or alternative methods for achieving a uniform naming and tagging convention across all resources? Any suggestions would be greatly appreciated!

Miguel Zablah avatar
Miguel Zablah

what I do is save the other third-party module in another directory and use that with CloudPosse context.tf file to create the naming.

For example:

components/vendor/aws/vpc -> AWS module
components/aws/vpc -> custom module using the vendored aws vpc with the Cloud Posse context.tf file for naming and enable ENV

and I will use the components/aws/vpc module in the catalog

1
Junk avatar

@Miguel Zablah Thanks! I understand, but to help, could you give me a simple example? I get the general picture, but the ‘custom module using the vendor aws vpc with CloudPosse context.tf file for naming and enable ENV’ part doesn't come to mind specifically.

Junk avatar

If what you mean by custom module is ‘combine the newly attached context.tf with the “aws/vpc” component in the vendor directory to create a new Root Module (Component)’, this seems like it would be complicated to configure and maintain the component every time. Am I not understanding this correctly?

Miguel Zablah avatar
Miguel Zablah

not really, bc both are going to be managed by the atmos vendor file. So using this example, I use these two modules: https://github.com/cloudposse/terraform-null-label https://github.com/terraform-aws-modules/terraform-aws-vpc

so I add both of them to the Atmos vendor file, in a different directory under components. In this example it will be something like this:

components/terraform/vendor/cloudposse/tf-null-label -> https://github.com/cloudposse/terraform-null-label
components/terraform/vendor/aws/vpc -> https://github.com/terraform-aws-modules/terraform-aws-vpc

then I will create a new root module with these modules here: components/terraform/aws/vpc

where I will have these files:

main.tf -> to call the components/terraform/vendor/aws/vpc
context.tf -> to call the components/terraform/vendor/cloudposse/tf-null-label

so essentially what I do is copy their context file but reference the module internally, kind of like what they do in this example: https://github.com/cloudposse/terraform-null-label/blob/main/examples/autoscalinggroup/context.tf

and then I can use module.this.id for naming, module.this.tags for tagging, and module.this.enabled to enable/disable the module, like Cloud Posse does on their modules. All of this I will use in main.tf when calling the VPC module

Miguel Zablah avatar
Miguel Zablah

hopefully this explains it a bit better

Junk avatar

@Miguel Zablah Thanks for the detailed explanation, I understood it perfectly. My only further question is: so what I need to do is to actually create a components/terraform/vendor/aws/vpc module block in main.tf in the components/terraform/vendor/aws/vpc directory and declare the required variables one by one inside the module block so that they are assignable (ex: azs, cidr, private_subnets, etc…)?

Miguel Zablah avatar
Miguel Zablah

so this is how the root component will look like: components/terraform/aws/vpc :

• main.tf -> here you will put the module block calling the components/terraform/vendor/aws/vpc

• context.tf -> use this example but reference your vendor module components/terraform/vendor/cloudposse/tf-null-label

• outputs.tf -> you will expose the vpc module outputs (you can use a loop for this)

• variables.tf -> this will be almost the same as the aws-vpc module but removing name and stuff that is managed by context.tf

example main.tf :

module "vpc" {
  count = module.this.enabled ? 1 : 0
  source = "../../vendor/aws/vpc"

  name = module.this.id
...
}

after this is set up correctly, you can use it as normal in your atmos catalog and whatnot

# DO NOT COPY THIS FILE
#
# This is a specially modified version of this file, since it is used to test
# the unpublished version of this module. Normally you should use a
# copy of the file as explained below.
#
# ONLY EDIT THIS FILE IN github.com/cloudposse/terraform-null-label
# All other instances of this file should be a copy of that one
#
#
# Copy this file from <https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf>
# and then place it in your Terraform module to automatically get
# Cloud Posse's standard configuration inputs suitable for passing
# to Cloud Posse modules.
#
# Modules should access the whole context as `module.this.context`
# to get the input variables with nulls for defaults,
# for example `context = module.this.context`,
# and access individual variables as `module.this.<var>`,
# with final values filled in.
#
# For example, when using defaults, `module.this.context.delimiter`
# will be null, and `module.this.delimiter` will be `-` (hyphen).
#

module "this" {
  source = "../.."

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit

  context = var.context
}

# Copy contents of cloudposse/terraform-null-label/variables.tf here

variable "context" {
  type = object({
    enabled             = bool
    namespace           = string
    environment         = string
    stage               = string
    name                = string
    delimiter           = string
    attributes          = list(string)
    tags                = map(string)
    additional_tag_map  = map(string)
    regex_replace_chars = string
    label_order         = list(string)
    id_length_limit     = number
    label_key_case      = string
    label_value_case    = string
  })
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = var.context["label_key_case"] == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }

  validation {
    condition     = var.context["label_value_case"] == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

variable "enabled" {
  type        = bool
  default     = null
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = "Solution name, e.g. 'app' or 'jenkins'"
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters.
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
}

variable "label_key_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
    Possible values: `lower`, `title`, `upper`. 
    Default value: `title`.
  EOT

  validation {
    condition     = var.label_key_case == null ? true : contains(["lower", "title", "upper"], var.label_key_case)
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }
}

variable "label_value_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of output label values (also used in `tags` and `id`).
    Possible values: `lower`, `title`, `upper` and `none` (no transformation). 
    Default value: `lower`.
  EOT

  validation {
    condition     = var.label_value_case == null ? true : contains(["lower", "title", "upper", "none"], var.label_value_case)
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

#### End of copy of cloudposse/terraform-null-label/variables.tf

Miguel Zablah avatar
Miguel Zablah

btw this just how I do it there might be a better way to do this haha

github3 avatar
github3
02:25:21 PM

Enhancements

Set Default Schema to Remote Schema @haitham911 (#777)

what

• Set Default Validation Schema to Remote Schema

why

• We should set the default schema to the remote atmos schema so that atmos validate works even if the user does not configure a validation schema

John Seekins avatar
John Seekins

:wave: We’re experimenting with Atmos and seeing a strange behavior with templating:

$ atmos describe stacks --process-templates | grep Component
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'

It seems like templates just…aren’t being processed and I’m not really sure how to debug this… The docs imply this should “just work”. I’m clearly missing something obvious, and would love some help. (Atmos 1.107.1 on darwin/arm64)

John Seekins avatar
John Seekins

Some more context:

components:
  terraform:
    vpc:
      vars:
        enabled: true
        name: "compute"
        ipv4_primary_cidr_block: "10.1.0.0/16"
        vpc_flow_logs_enabled: false
        nat_gateway_enabled: true
        public_subnets_enabled: true
    vpc-flow-logs-bucket:
      vars:
        name: "vpc-flow-logs"
    internal-domain-and-cert:
      settings:
        depends_on:
          1:
            component: vpc
      vars:
        create_wildcard_cert: true
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

By default, templating is disabled. We originally implemented this as an escape hatch, but it’s become very popular.

John Seekins avatar
John Seekins

It is (theoretically) super useful!

John Seekins avatar
John Seekins

Ooo…I probably just want inheritance, huh?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So at Cloud Posse, in our refarch, we almost never use templating, and instead use inheritance and imports 99% of the time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, we acknowledge the usefulness of the template functions. We’re also working on improvements, which involve moving towards what YAML calls “explicit types”, which are basically first-class functions in YAML.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos.Component | atmos

Read the remote state or configuration of any Atmos component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


The docs imply this should “just work”. I’m clearly missing something obvious, and would love some help.
Yes, that might be the case. We’re working on 2 things.

  1. Warning if you’re using templates and have it disabled
  2. Improving the docs to call it out
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you could share a screenshot or link to the page/chapter that you encountered it, I’ll fix it

John Seekins avatar
John Seekins

Jumps right out at you in the docs. Root page and all…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, that could definitely need some TLC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks!!

John Seekins avatar
John Seekins

I appreciate the context, but do you have any tips on how I can reference the vpc_id from vpc in internal-domain-and-cert in my example above? All the links you’ve passed along seem to talk about sharing data between stacks, and I just need to pass between components in a single stack here.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Share Data Between Components | atmos

Share data between loosely-coupled components in Atmos

John Seekins avatar
John Seekins

Yep. That is what isn’t working.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But do you have templating enabled in atmos.yaml
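
For reference, templating is toggled in atmos.yaml; a minimal sketch of the relevant settings (see the Atmos templating docs for the full set of options):

templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true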

John Seekins avatar
John Seekins

That looks like what I was missing!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the flow of the documentation, we have this

1
John Seekins avatar
John Seekins

Awesome. Thanks, Eric.

John Seekins avatar
John Seekins

Definitely just saw that root page before I read more deeply about templating.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m going to update “Share Data Between Components”, to reference back to Template Configurations, to help avoid this snafu

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and call out that templating needs to be enabled.

John Seekins avatar
John Seekins

That’s great. The docs are generally pretty robust, which may have lulled me into a false sense of “well…if they say this just works…”

1
1
karel_alfonso avatar
karel_alfonso

Hi, I’m assessing Atmos in a use case that needs to provision a set of infrastructure components that are deployed separately (their own TF root module), in different AWS accounts. I also have to use Atlantis to apply changes. To simplify if I have three components C1, C2 and C3. C2 and C3 depend on the result of applying C1. I want to orchestrate the plan/apply flow of C1, C2 and C3 in that order. With Atmos, do I need to define a Workflow and how would it be used from Atlantis?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A workflow is one way to do that. We haven’t looked into solving ordered dependencies in a way that provides first-class support for Atlantis. But to your point, you could create a custom workflow in atmos that you call via Atmos. The issue is you really want to review/approve each individual component’s plan. So for that, a more elaborate approach is necessary.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual is one of the #atlantis maintainers and might have some other ideas

karel_alfonso avatar
karel_alfonso

Thanks for the reply. I was initially confused with Atmos stacks, thinking that I could deploy the components listed in them in an ordered way. But reading the documentation I realised it's probably a workflow. Another option I've been thinking of is to do this via CI/CD and build the dependencies into each stage of the CI/CD pipeline. So, what is the atmos solution for orchestrating multiple Terraform root modules that depend on each other?

jose.amengual avatar
jose.amengual

Atlantis supports dependencies

jose.amengual avatar
jose.amengual

we could potentially ask CloudPosse to add component dependencies to the automatic workflow generation that atmos can create

karel_alfonso avatar
karel_alfonso

That would be great. We're not using Atlantis dependencies at the moment. It would be good to find a way to orchestrate the plan/apply flow of multiple components (TF root modules) that depend on each other.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh that’s interesting, I think I forgot about that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since we already represent dependencies in stack configs, it’s maybe a simple mapping

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@karel_alfonso did you see the atmos Atlantis generation?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s templatized so it might already be possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

jose.amengual avatar
jose.amengual
Repo Level atlantis.yaml Config | Atlantis

Atlantis: Terraform Pull Request Automation

karel_alfonso avatar
karel_alfonso


@karel_alfonso did you see the atmos Atlantis generation?
No, haven’t looked into that yet. Will the generation take into account the dependencies in the stack config?

jose.amengual avatar
jose.amengual

no, I do not think it will do that

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But the entire Atlantis config is one big template

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And atmos stacks define dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The templates have the full context of the stack configs, so it should be possible to define the Atlantis dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way to think about it is you are transforming the shape of one YAML configuration to the shape of another

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Configure Dependencies Between Components | atmos

Atmos supports configuring the relationships between components in the same or different stacks. You can define dependencies between components to ensure that components are deployed in the correct order.
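
As a concrete sketch of what those dependency settings look like in a stack manifest (the component names here are hypothetical):

components:
  terraform:
    kafka-app:
      settings:
        depends_on:
          1:
            component: kafka-cluster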

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jose.amengual avatar
jose.amengual

that section, I believe is not free form Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What do you mean by free form?

jose.amengual avatar
jose.amengual

I think if you add a line that is not predefined, atmos does not add it to the generated atlantis.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, that is plausible - but an easy change

jose.amengual avatar
jose.amengual

yes, I remember Andriy adding a few lines pretty quickly when I was testing the integration

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, so @karel_alfonso if you end up pursuing this route, we may need to make some tweaks, but I don’t think it would be anything radical.

1
karel_alfonso avatar
karel_alfonso


Atmos supports configuring the relationships between components in the same or different stacks. You can define dependencies between components to ensure that components are deployed in the correct order.
Is there a way to apply an entire stack with its dependencies using atmos CLI (without Atlantis) for demonstration purposes? I want to show to my team that a tool like atmos can help organise a large Terraform codebase with multiple tenants and AWS accounts. I can then look into Atlantis

karel_alfonso avatar
karel_alfonso

All examples I’ve seen deploy a specific component in a stack

karel_alfonso avatar
karel_alfonso

A concrete example of what I want to achieve is that I have a TF component that provisions a Kafka cluster. Whenever I change the number of brokers I want to deploy/update two other components that require the bootstrap servers returned after applying the first component. I thought/wished that atmos could help achieve something like that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the CLI we have not implemented that. The main reason is that it's dangerous to “apply all”; however, we do have it on our roadmap, but no ETA.

However, it is possible using custom commands. Others have done the same.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
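
A rough sketch of such a custom command, assuming the documented commands section of atmos.yaml (the command name, component order, and stack name are purely illustrative):

commands:
  - name: deploy-demo
    description: Apply C1, C2 and C3 in dependency order
    steps:
      - atmos terraform apply c1 -s demo-stack
      - atmos terraform apply c2 -s demo-stack
      - atmos terraform apply c3 -s demo-stack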

karel_alfonso avatar
karel_alfonso

excellent! thanks so much for the help and support. I’ll proceed with a demo I’m preparing using custom commands and later on will look into the integration with Atlantis

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Drift detection will pick up the changes in the scenario you described, but the way it works is not based on dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It works based on replanning all components

karel_alfonso avatar
karel_alfonso

Oh, hadn’t seen that feature. It’s something we can run in a scheduled CI/CD job

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Exactly!

karel_alfonso avatar
karel_alfonso

Ultimately, I want to prove that we don't need TACOS, just Terraform, a best-practices framework, and existing tools to address all the issues you've listed that teams run into when scaling TF to a large org.

Bob avatar

Nothing helpful to add here, but want to just say thank you for asking the question and folks for answering swiftly. I was about to go on a rabbit hole reading atmos docs for the same exact use case (just no atlantis requirement). @karel_alfonso Curious on what you come up with.

Before reading this, I was also under the impression atmos can deploy all components in the stack in order of dependency. I’ll start investigating custom command and play around with our use case as well

Seeing HCP Terraform stacks (and deferred changes/plan capability) got me wanting to see what is available out there outside of the usual TACOS. Saw Terragrunt Stacks RFC, exciting, but timing of its release is unknown

1
jose.amengual avatar
jose.amengual

you can create an atmos workflow to deploy components in order using the CLI
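
For example, a minimal workflow manifest along those lines (the file location, workflow name, and component/stack names are illustrative):

# stacks/workflows/kafka.yaml (illustrative)
workflows:
  deploy-kafka:
    description: Apply the Kafka cluster and its dependents in order
    steps:
      - command: terraform apply kafka-cluster -s plat-use1-dev
      - command: terraform apply kafka-app-1 -s plat-use1-dev
      - command: terraform apply kafka-app-2 -s plat-use1-dev

which could then be run with something like atmos workflow deploy-kafka -f kafka.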

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Bob it would be helpful to know if you are primarily going to use this on the terminal/console, and if the expectation is to review each plan before apply, or want to automatically “apply all” in dependency order without reviewing the plan.

Bob avatar

Primarily going to review the plan before applying, but I know that’s a complicated ask especially if components have dependents that had not been applied yet within the stack. I don’t believe atmos has mocking/placeholder outputs for components today?

2024-11-20

Samuel Than avatar
Samuel Than

Hi, i’m at the stage of initialising the tfstate-backend in a brownfield environment context.

I was successful in creating the s3 and dynamodb part and migrating all the workspace to s3.

However, i hit a wall when it comes to the process of enabling the access_roles_enabled flag.

The following is the error I received

Error: 
│ Could not find the component 'account-map' in the stack 'cs-core-gbl-root'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?

My namespace is cs; however, the stack name -core-gbl-root is foreign to me, as I've not declared any of that. Not sure how that came about.

This was the stack yaml I'm using to deploy the tfstate-backend. I used the output of the IAM role created by the access_roles config and passed it into the role_arn prior to turning on the access_roles_enabled flag.

Is there some sort of mapping I have misconfigured?

tfstate-backend:
      backend:
        s3:
          role_arn: null
      vars:
        access_roles_enabled: true # Set to false initially, and only used for cold start. 
        enable_server_side_encryption: true
        enabled: true
        force_destroy: false
        name: terraformstate
        prevent_unencrypted_uploads: true
        label_order: ["namespace", "tenant", "environment", "stage", "name"]
        access_roles:
          default: &tfstate-access-template
            write_enabled: true
            allowed_roles: {}
            denied_roles: {}
            allowed_permission_sets: {}
            denied_permission_sets: {}
            allowed_principal_arns: [
              "arn:aws:iam::XXXXXXXX:role/XXXXXXXX"
            ]
            denied_principal_arns: []
        tags:
          component: "tfstate-backend"
          expense-class: "storage"

My folder structure is stacks/orgs/cs/xxx/dev/ap-southeast-2/tfstate-backend.yaml and my stack name is cs-xxx-apse2-dev.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Ben Smith (Cloud Posse) maybe an easy one for you

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

your backend configuration is likely configured at a higher level so that it can be reused. It’s most likely under stacks/orgs/cs/_defaults.yaml

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

you can also always check the final result of stack configuration with atmos. For example,

atmos describe component tfstate-backend -s your-stack-name

https://atmos.tools/cli/commands/describe/component/

atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Samuel Than in this stack name cs-core-gbl-root

cs - is the namespace
core - is the tenant
gbl - is the environment (region; the global region in this case)
root - is the stage (account)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as Dan mentioned, you probably have these defined in the higher level stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

search for tenant: core and environment: gbl and stage: root in your stacks folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also check the account-map component and what values are provided to the module (see the context.tf file for the context variables like namespace, tenant, stage, environment)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if still not working, DM me your config to take a look

Samuel Than avatar
Samuel Than

I’ve DM you @Andriy Knysh (Cloud Posse) with some more details.

github3 avatar
github3
06:59:41 PM

Filter out empty results from describe stacks @Cerebrovinny (#764)

what

• Introduced a new command-line flag --include-empty-stacks to include stacks without components in the output • Enhanced stack processing logic to support filtering based on the new flag • Changed the default behavior of atmos describe stacks to filter out empty stacks by default unless the user passes the flag --include-empty-stacks • Added new test cases to validate the behavior of the ExecuteDescribeStacks function with empty stacks

why

This was causing stacks with empty results or no components/imports to be displayed.

Test Results

(test-result screenshots omitted)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

:wave: Atmos newbie here, loving the product so far and enjoying being able to consume your open source modules! They’re helping make quick work of our new environmental standup and the hiera-style hierarchy is a beautiful thing to see working with TF.

I'm currently working my way through a little bit of a side-quest exercise (importing our existing resources into the foundational account, account-map, and tfstate-backend modules) and am trying to get the account-map module to work correctly (I'm very close, having already gotten account and tfstate-backend functional against existing accounts and OU mappings for our AWS Org).

Currently, I’m running into an issue with account-map where it seems to be trying to load different atmos.yaml settings from some default than what I have specified at my top level and where I run the atmos commands from. Specifically, I’m seeing the following error when trying to run atmos terraform plan account-map --stack sre-bootstrap :

│ Error: failed to find a match for the import '/atmos/components/terraform/account-map/stacks/orgs/**/*.yaml' ('/atmos/components/terraform/account-map/stacks/orgs' + '**/*.yaml')

The plan technically succeeds, producing the desired plan output, but then atmos itself seems unhappy about the result. Looking at the pathing above, I’m running my commands from within /atmos and my atmos.yaml is located at /atmos/atmos.yaml (and I don’t define or expect any stacks to be defined within the vendored account-map module, nor do we organize based on org at our toplevel).

stacks:
  base_path: "stacks"
  included_paths:
    - "**/*"
  excluded_paths:
    - "**/_defaults.yaml"
    - "mixins/**/*"
    - "catalog/**/*"
  name_pattern: "{environment}-{stage}"

Any guidance y'all can provide would be tremendously helpful as to where or how to address this!

1
1

2024-11-21

2024-11-22

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone in opposition to changing the default behavior of running “atmos” to displaying help, rather than entering the UI mode? Then moving the UI to “atmos ui”?

2
Miguel Zablah avatar
Miguel Zablah

why the change? I don't think it's a big deal, but I think defaulting to the UI is good, since almost all CLI tools have the --help flag for this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s a good point

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe if there are no stacks configured, we should show help and not load the UI, but otherwise keep the current behavior

1
Miguel Zablah avatar
Miguel Zablah

yeah that sounds great

RB avatar

I didn’t realize that was a feature, neat. Usually when i run cli commands without args, it returns a help screen so I’ve learned to expect that.

Would it be easier to allow the default behavior to be modified in the yaml config?

Miguel Zablah avatar
Miguel Zablah

I think this applies more to CLIs that don't have a UI or that require at least one arg to work. CLIs that have a UI or don't require args usually have a default run, for example atmos runs the UI. At least that's been my experience, but I can be wrong hehe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, maybe we make it configurable. I think we’ll not change anything for the time being.

1
github3 avatar
github3
12:42:22 AM

Expose ExecuteDescribeComponent function in pkg to make it publicly accessible from other Go programs @aknysh (#801)

what

• Expose ExecuteDescribeComponent function in pkg to make it publicly accessible from other Go programs

why

• In Go, packages placed inside an internal folder are considered private, meaning they can only be accessed within the same module or project and cannot be imported by other external projects, effectively acting as a way to hide implementation details and control the public API surface of the code. Everything in pkg is public
• The ExecuteDescribeComponent function will be used in other Go modules to programmatically execute atmos describe component <component> -s <stack>, similar to ExecuteDescribeStacks (which is already public)
https://github.com/cloudposse/terraform-provider-utils/blob/main/go.mod#L6
https://github.com/cloudposse/terraform-provider-utils/blob/main/internal/provider/data_source_describe_stacks.go#L12
https://github.com/cloudposse/terraform-provider-utils/blob/main/internal/provider/data_source_describe_stacks.go#L170

1

2024-11-23

Raymond Schippers avatar
Raymond Schippers

Apologies for the stupid question, but I am using the aws-teams, aws-teams-roles, and aws-saml modules with the REF architecture. There are a few roles, like gitops, that won't be accessible via SSO but only via assume role. However, when trying to apply the IAM policies for these roles, the following IAM condition is generated:

  • Principal = { + Federated = "" }

Which results in an AWS API error as it's not a valid ARN or domain. Has anyone worked around this?
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Ben Smith (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

a few roles, such as the gitops role, are meant to be assumed by GitHub Actions. These require the GitHub OIDC provider to be deployed first, since the policies will attempt to look up that ARN (which is what you see as empty).

You can deploy that with the github-oidc-provider component: https://docs.cloudposse.com/layers/github-actions/github-oidc-with-aws/

How to use GitHub OIDC with AWS | The Cloud Posse Reference Architecture

This is a detailed guide on how to integrate GitHub OpenID Connect (OIDC) with AWS to facilitate secure and efficient authentication and authorization for GitHub Actions, without the need for permanent (static) AWS credentials, thereby enhancing security and simplifying access management. First we explain the concept of OIDC, illustrating its use with AWS, and then provide the step-by-step instructions for setting up GitHub as an OIDC provider in AWS.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

As a TLDR of that page to deploy gitops in particular, deploy github-oidc-provider into your identity account (core-gbl-identity). Then try again

The other steps will be used for GitHub Actions later
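
A minimal sketch of what enabling that component in the identity stack can look like (the catalog/stack layout here is assumed, not taken from this thread):

# e.g. stacks/catalog/github-oidc-provider.yaml, imported into the core-gbl-identity stack
components:
  terraform:
    github-oidc-provider:
      vars:
        enabled: true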

Raymond Schippers avatar
Raymond Schippers

Awesome thank you

np1
Raymond Schippers avatar
Raymond Schippers

I suspect I am doing something wrong. I have checked that stacks/catalog/ has the github-oidc-role component enabled (as per this), that the repos are configured as per option 2 in the doc you linked to, and I have run (with no errors) atmos workflow deploy/github-oidc-provider -f gitops and atmos terraform apply github-oidc-provider -s core-gbl-identity, but the error still persists when running atmos workflow deploy/all -f identity. Running atmos terraform plan github-oidc-provider --stack core-gbl-identity shows there's nothing outstanding

github-oidc-provider | The Cloud Posse Reference Architecture

This component is responsible for authorizing the GitHub OIDC provider as an Identity provider for an AWS account

1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Hmm I’m not sure. Could you paste a larger snippet from Terraform plan of the policy with the empty federated identity ARN?

Raymond Schippers avatar
Raymond Schippers

Here's one of the roles that is erroring. I suspect the issue is that the OIDC provider is missing in IAM Identity Centre, and I've read through the docs on the TF modules and the ref architecture and it doesn't seem to say which module creates it via TF, so does it have to be click ops?

 # aws_iam_role.default["gitops"] will be created
  + resource "aws_iam_role" "default" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = [
                          + "sts:TagSession",
                          + "sts:SetSourceIdentity",
                          + "sts:AssumeRole",
                        ]
                      + Condition = {
                          + ArnLike      = {
                              + "aws:PrincipalArn" = [
                                  + "arn:aws:iam::491085392849:role/sit-core-gbl-identity-devops",
                                  + "arn:aws:iam::491085392849:role/sit-core-gbl-identity-managers",
                                  + "arn:aws:iam::491085392849:role/aws-reserved/sso.amazonaws.com*/AWSReservedSSO_IdentityDevopsTeamAccess_*",
                                  + "arn:aws:iam::491085392849:role/aws-reserved/sso.amazonaws.com*/AWSReservedSSO_IdentityManagersTeamAccess_*",
                                ]
                            }
                          + StringEquals = {
                              + "aws:PrincipalType" = "AssumedRole"
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = "arn:aws:iam::491085392849:root"
                        }
                      + Sid       = "RoleAssumeRole"
                    },
                  + {
                      + Action    = [
                          + "sts:TagSession",
                          + "sts:SetSourceIdentity",
                          + "sts:AssumeRole",
                        ]
                      + Condition = {
                          + ArnLike = {
                              + "aws:PrincipalArn" = [
                                  + "arn:aws:iam::491085392849:role/aws-reserved/sso.amazonaws.com*/AWSReservedSSO_IdentityViewerTeamAccess_*",
                                  + "arn:aws:iam::491085392849:role/sit-core-gbl-identity-viewer",
                                  + "arn:aws:iam::*:user/*",
                                ]
                            }
                        }
                      + Effect    = "Deny"
                      + Principal = {
                          + AWS = "arn:aws:iam::491085392849:root"
                        }
                      + Sid       = "RoleDenyAssumeRole"
                    },
                  + {
                      + Action    = [
                          + "sts:TagSession",
                          + "sts:SetSourceIdentity",
                          + "sts:AssumeRoleWithWebIdentity",
                        ]
                      + Condition = {
                          + StringEquals = {
                              + "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
                            }
                          + StringLike   = {
                              + "token.actions.githubusercontent.com:sub" = "repo:Sustainabil-IT/infrastructure:ref:refs/heads/main"
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + Federated = ""
                        }
                      + Sid       = "OidcProviderAssume"
                    },
                ]
              + Version   = "2012-10-17"
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Yes the OIDC Provider ARN is missing, but it will never need to be click ops with refarch. Something is missing, but it’s hard to determine. Can you please walk through these steps to debug?

First check the core-identity account in the web console. Do you see the Federated Identity there? If not, then that's the issue. Otherwise, the issue is passing the output between Terraform components.

If the Federated Identity doesnt exist:

  1. (You already did this before, but for the sake of thoroughness) Check the component itself. Make sure this returns no changes: atmos terraform apply github-oidc-provider -s core-gbl-identity. Also confirm that the oidc_provider_arn output exists.
    oidc_provider_arn: "arn:aws:iam::491085392849:oidc-provider/token.actions.githubusercontent.com" 
    
  2. Make sure github-oidc-provider is “enabled” with enabled: true

If the Federated Identity does exist:

  1. Ensure the component exists where aws-teams expects it to exist. Is the component called exactly “github-oidc-provider” and is it in the gbl environment of the same stack, core-identity?
  2. Check the aws-teams stack configuration for trusted_github_repos. Is gitops mapped to your infrastructure repo's name? (this shouldn't be related, but it's worth checking)

There’s most likely something minor missing. We create this role very frequently, and there is never any click ops necessary.

Raymond Schippers avatar
Raymond Schippers

My sincere apologies for the long delay in response, but I decided to start from the beginning, re-read everything, and double check I did it all right. It would appear there is either me misreading something, a doc issue, or an unexpected dependency in the workflow: running atmos terraform output github-oidc-provider -s core-gbl-identity produced no outputs. Reading through all the workflows, I noticed that atmos workflow deploy/github-oidc-provider -f github needs to be executed prior to atmos workflow deploy/all -f identity; once I ran the github-oidc workflow I had the output you mentioned and atmos workflow deploy/all -f identity went through fine

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

no worries at all. I’m glad you were able to resolve it!

2024-11-26

shirkevich avatar
shirkevich

Guys, we’ve decided to adopt atmos, but we’re on Google Cloud.

Here are several PRs to create a usable workflow in GitHub. GCS is used to store/retrieve the plan and Firestore for the metadata.

https://github.com/cloudposse/github-action-terraform-plan-storage/pull/35 https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/93 https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/64 https://github.com/cloudposse/github-action-atmos-affected-stacks/pull/55

It is also using GitHub OIDC with a workload identity provider on the Google side. Those PRs are tested with one of our tenants that we've migrated already.

If you find it reasonable to integrate this, please help me with naming the metadata fields.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wonderful! Thanks for these. @Igor Rodionov will review

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Gabriela Campana (Cloud Posse)

jose.amengual avatar
jose.amengual

this will need my PR to be merged beforehand I think

Igor Rodionov avatar
Igor Rodionov

@jose.amengual your PRs are next top priority in my backlog

1
2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@shirkevich bear with us, we will need to merge @jose.amengual ‘s PRs that add/improve Azure support first. That should be this week or early next week.

1
1
Igor Rodionov avatar
Igor Rodionov

@shirkevich hello. I've started reviewing your PRs.

shirkevich avatar
shirkevich

great to hear that! should I merge master? you can ping me if you want me to change something

Igor Rodionov avatar
Igor Rodionov

I'll message you if any actions are required

2
Igor Rodionov avatar
Igor Rodionov

@shirkevich I checked your PRs; overall they look good.

  1. I'll merge https://github.com/cloudposse/github-action-terraform-plan-storage/pull/35 today. I merged the PR to an intermediate branch, but you can consider this done.

  2. When I release github-action-terraform-plan-storage, we will have to pin the new version in the plan and apply actions.

https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/93 https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/64

  3. Also, these two ^ have conflicts that have to be resolved. The main difference is that yesterday we released a useful feature: you can now specify integration configs at the stack level. So, the settings we used to get with
    - name: config
      shell: bash
      id: config
      run: |-
        echo "opentofu-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["opentofu-version"]')" >> $GITHUB_OUTPUT
        echo "terraform-version=$(atmos describe config -f json | jq -r '.integrations.github.gitops["terraform-version"]')" >> $GITHUB_OUTPUT
        echo "enable-infracost=$(atmos describe config -f json | jq -r '.integrations.github.gitops["infracost-enabled"]')" >> $GITHUB_OUTPUT        
        echo "backend=$(atmos describe config -f json | jq -r '.integrations.github.gitops["artifact-storage"].backend')" >> $GITHUB_OUTPUT
....

now we get https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L88

    - name: Get atmos settings
      id: atmos-settings
      uses: cloudposse/github-action-atmos-get-setting@v2
      with:
        settings: |
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.github.actions_enabled
            outputPath: enabled
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: component_info.component_path
            outputPath: component-path
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: atmos_cli_config.base_path
            outputPath: base-path
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: command
            outputPath: command
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.integrations.github.gitops.opentofu-version
            outputPath: opentofu-version

....

while the old way still applies to the affected-stacks action

  4. The affected-stacks action PR still has one open comment: https://github.com/cloudposse/github-action-atmos-affected-stacks/pull/55
Igor Rodionov avatar
Igor Rodionov
  5. We probably need a PR for https://github.com/cloudposse/github-action-atmos-terraform-select-components/blob/main/action.yml with changes equivalent to what @shirkevich made to affected-stacks
Igor Rodionov avatar
Igor Rodionov
  6. When we cut a new release of the apply action, we will need a PR for this action https://github.com/cloudposse/github-action-atmos-terraform-drift-remediation/blob/main/action.yml#L83 to pin the new version
Igor Rodionov avatar
Igor Rodionov

@shirkevich I released https://github.com/cloudposse/github-action-terraform-plan-storage/releases/tag/v2.0.0; you can use cloudposse/github-action-terraform-plan-storage@v2 in the plan and apply actions

Igor Rodionov avatar
Igor Rodionov


• do you find the word backend to choose between aws and google in plan and apply correct?
Nice catch. Really, no. https://github.com/cloudposse/github-action-atmos-terraform-plan/blob/main/action.yml#L143

            settingsPath: settings.integrations.github.gitops.artifact-storage.plan-repository-type
Igor Rodionov avatar
Igor Rodionov

this is the setting we should rely on

Igor Rodionov avatar
Igor Rodionov
            settingsPath: settings.integrations.github.gitops.artifact-storage.metadata-repository-type
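
For context, the settings.* paths above resolve against the settings section of the stack manifest, so the per-stack integration config being discussed would look roughly like this (every value below is a placeholder; the repository-type naming is exactly the open question here):

settings:
  github:
    actions_enabled: true
  integrations:
    github:
      gitops:
        opentofu-version: "1.8.4"               # placeholder
        terraform-version: "1.9.8"              # placeholder
        artifact-storage:
          plan-repository-type: gcs             # placeholder value
          metadata-repository-type: firestore   # placeholder value
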
Igor Rodionov avatar
Igor Rodionov


I copied the step with lockfile creation from aws, is it really needed? can you please describe how it is used
The plan result is a plan file plus a lock file with the hashes of all providers and Terraform. The apply step needs the lock file to pull binary-identical providers and Terraform, which guards against bugs caused by version differences. Where to store lock files is an opinionated topic: some teams commit lock files into their repos, but we decided to store them the same way we store the plan, because provider binaries depend on the OS. A lock file created on a Mac could lead to a failure on Linux (in theory; I did not test that)

Igor Rodionov avatar
Igor Rodionov

@shirkevich I added a couple of comments to the resolved code https://github.com/cloudposse/github-action-atmos-terraform-plan/pull/93

shirkevich avatar
shirkevich

addressed your comments

shirkevich avatar
shirkevich

somehow get-settings wasn't working without a tofu installation when I was testing; maybe something was broken in my atmos.yaml config, but now it is green again

shirkevich avatar
shirkevich

yep it is failing again:

template: all-atmos-sections:176:46: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exec: "tofu": executable file not found in $PATH
Error: Error: Command failed: atmos describe component jobs -s foo-stg --format=json --process-templates=true
template: all-atmos-sections:176:46: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exec: "tofu": executable file not found in $PATH



Error: Error: Command failed: atmos describe component jobs -s foo-stg --format=json --process-templates=true
template: all-atmos-sections:176:46: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exec: "tofu": executable file not found in $PATH
Igor Rodionov avatar
Igor Rodionov

hm…

Igor Rodionov avatar
Igor Rodionov

interesting

Igor Rodionov avatar
Igor Rodionov

I will research that today

shirkevich avatar
shirkevich

the problem is a dependency on the output of another module:

this foo module has this:

        env_vars: '{{ toRawJson ((atmos.Component "bar" .stack).outputs.env_vars) }}'
shirkevich avatar
shirkevich

looks like chicken and egg problem ))

Igor Rodionov avatar
Igor Rodionov

yea. That was my guess

shirkevich avatar
shirkevich

we need the tofu version before get-config

shirkevich avatar
shirkevich

@Igor Rodionov do you need something from my side to proceed with PRs? have some spare time

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov bumping this up

shirkevich avatar
shirkevich

any news on those PRs?

Igor Rodionov avatar
Igor Rodionov

@shirkevich I will check tomorrow. hard week

1
Igor Rodionov avatar
Igor Rodionov

@shirkevich Hello. This PR https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/64 looks good. The only comment is that we need to add 2 lines to the example in the readme.

#64 Google backend

what

Use google services to apply state

why

For those who use Google Cloud it is hard to adopt atmos, as all the GH tooling is built around AWS. This PR and several others fix that.

references

See also related PRs in:

cloudposse/github-action-terraform-plan-storage#35, cloudposse/github-action-atmos-terraform-plan#93, cloudposse/github-action-atmos-affected-stacks#55

need help

To properly name the metadata fields for Google Cloud.

Igor Rodionov avatar
Igor Rodionov

Needs conflict resolution. It would lead to the same updates you did for the plan action: https://github.com/cloudposse/github-action-atmos-terraform-apply/pull/64

shirkevich avatar
shirkevich

thanks, will fix those. What about the problem with get-settings in the terraform-plan workflow?

Igor Rodionov avatar
Igor Rodionov

oh... you mean the templating chicken-and-egg problem

shirkevich avatar
shirkevich

yeeep

Igor Rodionov avatar
Igor Rodionov

also we should probably call cloudposse/github-action-atmos-get-setting@v2 twice

Igor Rodionov avatar
Igor Rodionov
  1. Get the terraform and opentofu versions with process-templates: false
Igor Rodionov avatar
Igor Rodionov
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.integrations.github.gitops.opentofu-version
            outputPath: opentofu-version
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.integrations.github.gitops.terraform-version
            outputPath: terraform-version
Igor Rodionov avatar
Igor Rodionov
  2. After installing terraform / opentofu, call cloudposse/github-action-atmos-get-setting@v2 once again to get all other settings with process-templates: true
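
Putting those two steps together, a sketch of the two-pass call Igor describes (the process-templates input name follows the discussion above; the output shape and the setup action shown here are assumptions, not the final implementation):

    # pass 1: read only the tool versions, without template processing
    - name: Get tool versions
      id: tool-versions
      uses: cloudposse/github-action-atmos-get-setting@v2
      with:
        process-templates: false
        settings: |
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.integrations.github.gitops.opentofu-version
            outputPath: opentofu-version

    # install the binary the templates depend on (setup action assumed here)
    - name: Install OpenTofu
      uses: opentofu/setup-opentofu@v1
      with:
        tofu_version: ${{ fromJSON(steps.tool-versions.outputs.settings).opentofu-version }}
        tofu_wrapper: false

    # pass 2: read everything else, now that `tofu` is on the PATH
    - name: Get atmos settings
      id: atmos-settings
      uses: cloudposse/github-action-atmos-get-setting@v2
      with:
        process-templates: true
        settings: |
          - component: ${{ inputs.component }}
            stack: ${{ inputs.stack }}
            settingsPath: settings.github.actions_enabled
            outputPath: enabled
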
shirkevich avatar
shirkevich

if it works with process-templates: false, I will do as you're suggesting

shirkevich avatar
shirkevich

I was thinking of switching to parsing the old way, but it would be ugly…

Igor Rodionov avatar
Igor Rodionov

I’m not sure we need google auth here

shirkevich avatar
shirkevich

possibly yes let me test this with my repo

shirkevich avatar
shirkevich

it is failing without google auth

Writing the backend config to file:
infra/atmos/components/terraform/cloudrun/backend.tf.json

Wrote the backend config to file:
infra/atmos/components/terraform/cloudrun/backend.tf.json

Deleting Terraform environment file:
'infra/atmos/components/terraform/cloudrun/.terraform/environment'

Executing 'terraform init cloudrun -s foo-prd'
template: describe-stacks-all-sections:47:35: executing "describe-stacks-all-sections" at <atmos.Component>: error calling Component: exit status 1

Error: storage.NewClient() failed: dialing: google: could not find default credentials. See <https://cloud.google.com/docs/authentication/external/set-up-adc> for more information
shirkevich avatar
shirkevich

@Igor Rodionov :point_up:

The Google auth is needed there since terraform init does the backend initialisation and fails without auth, or am I missing something?

jose.amengual avatar
jose.amengual

@shirkevich is that a local test you run? or that is on the action tests?

jose.amengual avatar
jose.amengual

When I added the Azure support, none of the tests used Google auth; only the AWS tests ran correctly

shirkevich avatar
shirkevich

nope it’s my local test with real env

shirkevich avatar
shirkevich

how do you run tests? are you spawning real infra for that?

jose.amengual avatar
jose.amengual

the test uses a test account with real infra

jose.amengual avatar
jose.amengual

but these tests use local resources

shirkevich avatar
shirkevich

without credentials and when using google backend this is failing:

atmos terraform init cloudrun -s foo-stg

│ Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: [email protected] does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist)., forbidden

and therefore github-action-atmos-affected-stacks is also failing

how is it working for azure ?

shirkevich avatar
shirkevich

or you’re storing state in aws?

jose.amengual avatar
jose.amengual

no no

jose.amengual avatar
jose.amengual

state is stored in blobstore

jose.amengual avatar
jose.amengual

but I pass the credentials to the provider using ENV vars in my case

jose.amengual avatar
jose.amengual

after a successful plan, https://github.com/cloudposse/github-action-terraform-plan-storage/pull/35 should use the same way of authenticating to store the plan, which means that by the time the action is called, the login to Google from the pipeline has already succeeded

jose.amengual avatar
jose.amengual

so you need to use a cicd/automation/machine/githubactions user for the pipeline to be able to authenticate so the action can store the plan

shirkevich avatar
shirkevich

I can pass credentials the same way for sure, but why not use GH OIDC everywhere?

jose.amengual avatar
jose.amengual

we use OIDC in azure

jose.amengual avatar
jose.amengual

that is just one ENV variable to be set up
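
For reference, the GitHub OIDC route shirkevich mentions usually comes down to one auth step before atmos/terraform runs, so the GCS backend init has credentials; a sketch (the workload identity provider and service account below are placeholders):

    - name: Authenticate to Google Cloud via GitHub OIDC
      uses: google-github-actions/auth@v2
      with:
        workload_identity_provider: projects/123456789012/locations/global/workloadIdentityPools/github/providers/github-oidc  # placeholder
        service_account: [email protected]                                               # placeholder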

github3 avatar
github3
03:39:43 PM

Implement atmos list command for listing stacks and components @RoseSecurity (#797)

what

• Implement atmos list commands for listing stacks and components

(demo screenshot: atmos_list)

• Incorporates custom list commands into Atmos
• Updates documentation and website
• Removes atmos.yaml references to custom list command

why

• While the custom Atmos commands for listing stacks and components are great, incorporating the command into Atmos is far more efficient and parallelized, achieving similar or better results in 0.741 seconds compared to 8.131 seconds for the custom command

testing

• Listing all stacks in quick-start-advanced

❯ atmos list stacks
plat-ue2-dev plat-ue2-prod plat-ue2-staging plat-uw2-dev plat-uw2-prod plat-uw2-staging

• Listing all stacks by component

❯ ./atmos list stacks -c vpc-flow-logs-bucket
plat-ue2-dev plat-ue2-prod plat-ue2-staging plat-uw2-dev plat-uw2-prod plat-uw2-staging

• Listing stacks for non-existent component

❯ ./atmos list stacks -c test
No stacks found for component 'test'

• Listing all components

❯ ./atmos list components
vpc vpc-flow-logs-bucket

• Listing components by stack

❯ ./atmos list components -s plat-ue2-prod
vpc vpc-flow-logs-bucket

• Listing components by invalid stack

❯ ./atmos list components -s invalid-stack
Error: stack 'invalid-stack' not found

references

Atmos Custom Command Docs

4
Miguel Zablah avatar
Miguel Zablah

Hey guys, I have a question. I need a way to get some creds from 1Password and I would like to use the CLI for this. Is there a way to do this with atmos before a component runs? Like a pre-hook where I can set the ENV? I know we have datasources, but can they run a CLI cmd? Or is there a better way to do this?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are working on an interface to support this. It will be pluggable. We are not implementing 1Password in the initial release but it will be very easy to add other backends. At this time, can you use ENV vars instead? That is supported.
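
One way to read "use ENV vars" is the env section Atmos already supports in stack manifests, combined with Terraform's standard TF_VAR_ mechanism; a sketch (the component and variable names are hypothetical, and a real secret would be exported from the shell or CI rather than written into YAML):

components:
  terraform:
    my-component:                        # hypothetical component name
      env:
        # exported into the environment when Atmos runs this component;
        # Terraform reads TF_VAR_* values as input variables
        TF_VAR_service_token: "dummy-value-for-illustration"
      vars:
        enabled: true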

Miguel Zablah avatar
Miguel Zablah

oh nice got it I will do that then thanks!

1
Miguel Zablah avatar
Miguel Zablah

@Erik Osterman (Cloud Posse) another question: is there a way to maybe run a CLI cmd before a component runs? like a pre-hook for that component?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hooks may be coming. Can you describe what you want to accomplish?

Miguel Zablah avatar
Miguel Zablah

yes, for some clients we sometimes need to set an ENV that requires some custom CLI cmd to get it. Although sometimes it's possible with a datasource, sometimes we need a CLI cmd to run, or it would just be simpler for us. So I would like a pre-hook to run so we can get that ENV, and then I can set it, use it in the provider config, or pass it as an ENV

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That use-case will be addressed with the exec function

https://github.com/cloudposse/atmos/pull/810

#810 Introduce Atmos YAML functions

what

• Introduce Atmos YAML functions
• Update docs
https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/
https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/template/
https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/exec/
https://pr-810.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/stacks/yaml-functions/terraform.output/

why

Atmos YAML Functions are a crucial part of Atmos stack manifests. They allow you to manipulate data and perform operations on the data to customize the stack configurations.

Atmos YAML functions are based on YAML Explicit typing and user-defined Explicit Tags (local data types). Explicit tags are denoted by the exclamation point (”!”) symbol. Atmos detects the tags in the stack manifests and executes the corresponding functions.

NOTE: YAML data types can be divided into three categories: core, defined, and user-defined. Core are ones expected to exist in any parser (e.g. floats, ints, strings, lists, maps). Many more advanced data types, such as binary data, are defined in the YAML specification but not supported in all implementations. Finally, YAML defines a way to extend the data type definitions locally to accommodate user-defined classes, structures, primitives, and functions.

Atmos YAML functions

• The !template YAML function can be used to handle template outputs containing maps or lists returned from the atmos.Component template function
• The !exec YAML function is used to execute shell scripts and assign the results to the sections in Atmos stack manifests
• The !terraform.output YAML function is used to read the outputs (remote state) of components directly in Atmos stack manifests

NOTE: You can use Atmos Stack Manifest Templating and Atmos YAML functions in the same stack configurations at the same time. Atmos processes the templates first, and then executes the YAML functions, allowing you to provide the parameters to the YAML functions dynamically.

Examples

components:
  terraform:
    component2:
      vars:
        # Handle the output of type list from the `atmos.Component` template function
        test_1: !template '{{ toJson (atmos.Component "component1" "plat-ue2-dev").outputs.test_list }}'

        # Handle the output of type map from the `atmos.Component` template function
        test_2: !template '{{ toJson (atmos.Component "component1" .stack).outputs.test_map }}'

        # Execute the shell script and assign the result to the `test_3` variable
        test_3: !exec echo 42

        # Execute the shell script to get the `test_label_id` output from the `component1` component in the stack `plat-ue2-dev`
        test_4: !exec atmos terraform output component1 -s plat-ue2-dev --skip-init -- -json test_label_id

        # Execute the shell script to get the `test_map` output from the `component1` component in the current stack
        test_5: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_map

        # Execute the shell script to get the `test_list` output from the `component1` component in the current stack
        test_6: !exec atmos terraform output component1 -s {{ .stack }} --skip-init -- -json test_list

        # Get the `test_label_id` output of type string from the `component1` component in the stack `plat-ue2-dev`
        test_7: !terraform.output component1 plat-ue2-dev test_label_id

        # Get the `test_label_id` output of type string from the `component1` component in the current stack
        test_8: !terraform.output component1 {{ .stack }} test_label_id

        # Get the `test_list` output of type list from the `component1` component in the current stack
        test_9: !terraform.output component1 {{ .stack }} test_list

        # Get the `test_map` output of type map from the `component1` component in the current stack
        test_10: !terraform.output component1 {{ .stack }} test_map

Summary by CodeRabbit

Release Notes

New Features
• Introduced new YAML functions: !exec, !template, and !terraform.output for enhanced stack manifest capabilities.
• Added support for custom YAML tags processing in Atmos configurations.
• Enhanced configuration options for Atlantis integration, allowing for more flexible setups.

Documentation Updates
• Enhanced documentation for using remote state in Terraform components.
• Updated guides for the atmos.Component function and the new YAML functions.
• Clarified Atlantis integration setup options and workflows.
• Improved explanations on handling outputs and using the new YAML functions.
• Added documentation for new functions and updated existing guides for clarity.

Dependency Updates
• Upgraded various dependencies to their latest versions for improved performance and security.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
        # Execute the shell command
        Bar: !exec echo foo
Miguel Zablah avatar
Miguel Zablah

oh yeah, that is perfect! that PR looks awesome, especially !terraform.output!! really nice work!

2024-11-27

Samuel Than avatar
Samuel Than

After some mucking around + head banging.

I think I've come to understand that, if I did not deploy the account-map component using the {namespace}-core-gbl-root naming convention as the stack name, a lot of my deployments of components that rely on the account-map component behind the scenes, e.g. s3-buckets, will fail, as they try to look up the account-map component based on the default values (ref here https://github.com/cloudposse-terraform-components/aws-s3-bucket/blob/e19d3e7adb38805553246e740627fbef6c8b52a6/src/variables.tf#L6)

Coming from a brownfield implementation, the account-map component may be deployed outside of the {namespace}-core-gbl-root naming convention. Say, for example, I deployed the account-map component to the stack name cs-abc-apse2-prod.

I should then need to declare variables like the following to override the default values, in this example for the aws-s3-bucket component?

"account_map_environment_name"
"account_map_stage_name"
"account_map_tenant_name"

Unless I start writing my own components instead of using cloudposse's https://github.com/cloudposse-terraform-components, am I right to assume that, to reduce any headaches, it is better to deploy the account-map component under the {namespace}-core-gbl-root stack name?

Cloud Posse Terraform Components
1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Cloud Posse Terraform Components
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Yes, you can deploy account-map wherever you'd like, but we have several places where the default values for where to discover the component are defined, like s3-bucket as you point out.

You can overwrite those for your org. For example, we have several customers that don't use the name core or have other unique organizational units. All you would need to do is define the value for your own org in your stack catalog for the component. Or you can define those variables once at a higher level in stacks as a common value for all Terraform components.

For example with s3-bucket in the README: https://github.com/cloudposse-terraform-components/aws-s3-bucket/blob/e19d3e7adb38805553246e740627fbef6c8b52a6/README.md?plain=1#L49

        account_map_tenant_name: core
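
Following that example, a global override in a higher-level stack manifest could look like this (the values mirror the cs-abc-apse2-prod example above and are placeholders):

terraform:
  vars:
    account_map_tenant_name: abc
    account_map_environment_name: apse2
    account_map_stage_name: prod
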
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, this is more a refarch topic than an atmos topic.

Samuel Than avatar
Samuel Than

Thanks @Dan Miller (Cloud Posse) for the help in clarifying this and for the suggestion on declaring the variables at a higher level.

1

2024-11-28

dgokcin_slack avatar
dgokcin_slack

Hi All!

Atmos newbie here. I’m working on adopting the quickstart-advanced tutorial for my organization’s AWS structure, and I have some questions about providers and backend configuration during the initial bootstrapping phase. I would appreciate any guidance as it is quite a lot to take in :slightly_smiling_face:

  1. Provider Configuration: During the initial bootstrapping, what’s the recommended approach for declaring provider configurations to maximize reusability and minimize duplication? I noticed the awsutils component needs providers, but I’m unsure about the best way to structure this.

  2. State Management: I want to maintain separate state buckets per stage (account). What's the recommended approach for creating and managing the tfstate-backend component in this scenario? (Which account/region?)

  3. Backend Configuration: How should the backend configuration be structured across different accounts and stages? Is it possible to create the backend configuration with parameters instead of hardcoded bucket names/dynamo table names?

  4. Account Configuration: I saw the account component which is used to define the OUs etc. I believe this needs to be deployed to the management account?

Current Atmos Repo structure:

.
├── README.md
├── atmos.yaml
├── components
│   └── terraform
│       ├── account
│       │   ├── context.tf
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── variables.tf
│       │   └── versions.tf
│       ├── account-map
│       │   ├── context.tf
│       │   ├── dynamic-roles.tf
│       │   ├── main.tf
│       │   ├── modules
│       │   │   ├── iam-roles
│       │   │   │   ├── README.md
│       │   │   │   ├── context.tf
│       │   │   │   ├── main.tf
│       │   │   │   ├── outputs.tf
│       │   │   │   ├── variables.tf
│       │   │   │   └── versions.tf
│       │   │   ├── roles-to-principals
│       │   │   │   ├── README.md
│       │   │   │   ├── context.tf
│       │   │   │   ├── main.tf
│       │   │   │   ├── outputs.tf
│       │   │   │   └── variables.tf
│       │   │   └── team-assume-role-policy
│       │   │       ├── README.md
│       │   │       ├── context.tf
│       │   │       ├── github-assume-role-policy.mixin.tf
│       │   │       ├── main.tf
│       │   │       ├── outputs.tf
│       │   │       └── variables.tf
│       │   ├── outputs.tf
│       │   ├── remote-state.tf
│       │   ├── variables.tf
│       │   └── versions.tf
│       └── tfstate-backend
│           ├── context.tf
│           ├── iam.tf
│           ├── main.tf
│           ├── outputs.tf
│           ├── variables.tf
│           └── versions.tf
├── stacks
│   ├── catalog
│   │   └── tfstate-backend.yaml
│   ├── mixins
│   │   ├── region
│   │   │   ├── eu-west-1.yaml
│   │   │   └── global-region.yaml
│   │   ├── stage
│   │   │   └── root.yaml
│   │   ├── tenant
│   │   │   └── core.yaml
│   │   └── tfstate-backend.yaml
│   └── orgs
│       └── dunder-mifflin
│           ├── _defaults.yaml
│           └── core
│               ├── _defaults.yaml
│               └── root
│                   ├── _defaults.yaml
│                   └── eu-west-1.yaml
└── vendor.yaml

19 directories, 48 files

Desired AWS Organization Structure

  [ACC] Root/Management Account (account name: dunder-mifflin-root)
    │
    ├── [OU] Security
    │   ├── [ACC] Log-Archive
    │   └── [ACC] Security-Tooling
    │
    ├── [OU] Core
    │   ├── [ACC] Monitoring
    │   └── [ACC] Shared-Services
    │
    └── [OU] Workloads
        ├── [OU] Production
        │   └── [ACC] Prod
        │
        └── [OU] Non-Production
            └── [ACC] Non-Prod
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

2024-11-29

Dennis Bernardy avatar
Dennis Bernardy

Hey, I’m currently running into problems with the gomplate datasources. I defined this:

settings:
  templates:
    settings:
      env:
        AWS_PROFILE: "{{ .vars.aws_profile }}"
      gomplate:
        timeout: 5
        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"

and enabled templating in atmos.yaml. (It's kind of confusing that it's just "templates.settings" in atmos.yaml, but "settings.templates.settings" in the stack configuration, while atmos.yaml also has a "settings" key for merge behaviour)

But my problem is, that the templating does not get evaluated here:

template: all-atmos-sections:151:30: executing "all-atmos-sections" at <datasource "certificate">: error calling datasource: Couldn't read datasource 'certificate': Error reading aws+smp from AWS using GetParameter with input {
  Name: "/ai/infra/acm/certificate/{{ .vars.account }}.url",
  WithDecryption: true
}: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, {{ .vars.aws_profile }}.
EC2RoleRequestError: no EC2 instance role found
caused by: RequestError: send request failed
caused by: Get "<http://169.254.169.254/latest/meta-data/iam/security-credentials/>": read tcp 127.0.0.1:56391->127.0.0.1:10011: read: connection reset by peer

I tried adding the env in the templates settings in atmos.yaml as specified in the documentation, but there it is simply ignored.

1
Dennis Bernardy avatar
Dennis Bernardy

Also, here it specifies "settings.templates.settings" as configurable in atmos.yaml, which other documentation contradicts: https://atmos.tools/core-concepts/stacks/templates/datasources

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You might need to mess with the template evaluations, as you are using templating with the template data sources

Dennis Bernardy avatar
Dennis Bernardy

I’m not sure what you mean by that. I use 2 evaluations and already tried 1 and 3. The error stays the same

Dennis Bernardy avatar
Dennis Bernardy

This is my current atmos.yaml

base_path: "."

components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
  helmfile:
    base_path: "components/helmfile"
    use_eks: false
    cluster_name_pattern: "{environment}-cluster"

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_template: "{{.vars.tenant}}-{{.vars.account}}-{{.vars.account_id}}"

workflows:
  base_path: "stacks/workflows"

schemas:
  jsonschema:
    base_path: "stacks/schemas/jsonschema"

templates:
  settings:
    enabled: true
    evaluations: 2
    sprig:
      enabled: false
    gomplate:
      enabled: true

settings:
  list_merge_strategy: merge

logs:
  file: "/dev/stderr"
  level: Trace
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Dennis Bernardy please run the following command (it will show you the final config for the component in the stack, and if the templates got evaluated)

atmos describe component <your-component> -s <your-stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command also accepts the --process-templates flag to enable/disable template processing (so you can see the values with and without template evaluation)

https://atmos.tools/cli/commands/describe/component/#flags

atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(note that templates are not evaluated in atmos.yaml, so if you want to use url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url", it needs to be in Atmos stack manifests)

Dennis Bernardy avatar
Dennis Bernardy

The command does not work:

-> % atmos describe component -s stack argocd

Found stack manifests:
- deploy/accounts/dev.yaml
- deploy/accounts/sandbox.yaml
- deploy/accounts/tools.yaml

panic: assignment to entry in nil map

goroutine 1 [running]:
github.com/cloudposse/atmos/internal/exec.ProcessStacks({{0x108fc0ef0, 0x1}, {{{0x14000c3f620, 0x14}, 0x0, {0x1400092eec0, 0x31}, 0x1, 0x1, 0x1, ...}, ...}, ...}, ...)
	github.com/cloudposse/atmos/internal/exec/utils.go:426 +0xd4c
github.com/cloudposse/atmos/internal/exec.ExecuteDescribeComponent({0x16ba9f592?, 0x10641a958?}, {0x16ba9f57c?, 0x1063c369c?}, 0x1)
	github.com/cloudposse/atmos/internal/exec/describe_component.go:75 +0x174
github.com/cloudposse/atmos/internal/exec.ExecuteDescribeComponentCmd(0x1091ac2a0, {0x14000cb0e10, 0x0?, 0x0?})
	github.com/cloudposse/atmos/internal/exec/describe_component.go:46 +0x1c8
github.com/cloudposse/atmos/cmd.init.func5(0x1091ac2a0, {0x14000cb0e10, 0x1, 0x3})
	github.com/cloudposse/atmos/cmd/describe_component.go:21 +0x54
github.com/spf13/cobra.(*Command).execute(0x1091ac2a0, {0x14000cb0db0, 0x3, 0x3})
	github.com/spf13/[email protected]/command.go:989 +0x81c
github.com/spf13/cobra.(*Command).ExecuteC(0x1091aedc0)
	github.com/spf13/[email protected]/command.go:1117 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/[email protected]/command.go:1041
github.com/cloudposse/atmos/cmd.Execute()
	github.com/cloudposse/atmos/cmd/root.go:107 +0x32c
main.main()
	github.com/cloudposse/atmos/main.go:10 +0x24
Dennis Bernardy avatar
Dennis Bernardy

The templating happens inside a catalog file. To my understanding this is part of the stack and should work, no?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, templates work in the catalog files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please show the config for argocd (the error is not related to the templates; something is wrong with the stack configs)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, is your stack name stack?

Dennis Bernardy avatar
Dennis Bernardy

No, the name is based on the template I have in atmos.yaml. I just replaced it here for something generic as it contains our aws account id

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sorry, not sure I understand

atmos describe component -s stack argocd.  - the command needs a real stack name
Dennis Bernardy avatar
Dennis Bernardy

Yea, sure. I run it on my machine with the real stack name. I just copied it over here with a generic name to not expose anything

Dennis Bernardy avatar
Dennis Bernardy

This is the full catalog file of argocd (with redacted values)

settings:
  templates:
    settings:
      gomplate:
        timeout: 5
        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"
          argocd-iam-role-arn:
            url: "aws+smp:///ai/infra/argocd/iam"

components:
  terraform:
    argo-sso-secret:
      metadata:
        component: secrets
      vars:
        name: "name"
        secret_value:
          client_id: ""
          client_secret: ""
    argo-gitlab-secret-token-name2:
      metadata:
        component: secrets
      vars:
        name: "name"
        secret_value:
          url: "url"
          type: "git"
          username: "gitlab-bot"
          token: ""
    argo-gitlab-secret-token-argocd:
      metadata:
        component: secrets
      vars:
        name: "name"
        secret_value:
          url: "url"
          type: "git"
          username: "gitlab-bot"
          token: ""
    argocd-base:
      metadata:
        component: argocd
      vars:
        cluster_name: "{{ .vars.account }}-cluster"
        secrets:
          - "name1"
          - "name2"
          - "name3"
  helmfile:
    argocd:
      metadata:
        component: argocd
      vars:
        log_level: "info"
        sso_provider_url: "<https://sso>"
        sso_argo_secret_name: "name"
        gitlab_secrets:
          - name: name
            secret: "name"
        certificate_arn: '{{ (datasource "certificate").Value }}'
        eks_role_arn: '{{ (datasource "argocd-iam-role-arn").Value }}'
        enable_redis_ha: false
    argocd-apps:
      metadata:
        component: argocd-apps
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to debug this, I would replace name_template: "{{.vars.tenant}}-{{.vars.account}}-{{.vars.account_id}}" with something static (any stack name), and then run

atmos describe component argocd -s <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

make the command work and show you the evaluated template tokens

Dennis Bernardy avatar
Dennis Bernardy

That doesn’t work either. Same error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Replace the stack name with a static value
  2. Make sure the command atmos describe component works and shows the final values for the component in the stack with the templates evaluated
  3. Then make sure certificate_arn: '{{ (datasource "certificate").Value }}' works and returns the correct value (since it depends on the credentials to access the datasource, so we need to test those credentials)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also run atmos describe stacks to see what stacks you have and all the components in them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please run this command first

Dennis Bernardy avatar
Dennis Bernardy
-> % atmos describe stacks
The Atmos JSON Schema file is not configured. Using the default schema '<https://atmos.tools/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json>'
Validating all YAML files in the 'stacks' folder and all subfolders

ProcessTmplWithDatasources(): processing template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 1
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 2
ProcessTmplWithDatasources(): processed template 'describe-stacks-all-sections'
[... the same four lines repeat many more times ...]
ProcessTmplWithDatasources(): processing template 'describe-stacks-all-sections'
ProcessTmplWithDatasources(): template 'describe-stacks-all-sections' - evaluation 1
template: describe-stacks-all-sections:33:26: executing "describe-stacks-all-sections" at <datasource "certificate">: error calling datasource: Couldn't read datasource 'certificate': Error reading aws+smp from AWS using GetParameter with input {
  Name: "/ai/infra/acm/certificate/{{ .vars.account }}.url",
  WithDecryption: true
}: ValidationException: Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
	status code: 400, request id: 2863e1c5-c314-460e-851f-8441d8688278
Dennis Bernardy avatar
Dennis Bernardy

If I comment out the line certificate_arn: '{{ (datasource "certificate").Value }}' in the catalog file, it works and it also renders the templates correctly

Dennis Bernardy avatar
Dennis Bernardy

But it still errors when describing only the component

Dennis Bernardy avatar
Dennis Bernardy

Is the AWS Parameter Store maybe fetched before the templating is done?

I also noticed that if I describe a stack with a Terraform component, it works just fine (but it also does not have a datasource), while the argocd one, which is a Helmfile component, does fail

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i think this is what’s happening. The templates here

        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"
          argocd-iam-role-arn:
            url: "aws+smp:///ai/infra/argocd/iam"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and here

certificate_arn: '{{ (datasource "certificate").Value }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

get evaluated at the same time (Atmos sends them to the Go template engine together)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this one url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url" needs to be evaluated first (in the first evaluation step)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then this one certificate_arn: '{{ (datasource "certificate").Value }}' in the second evaluation step

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to do it, you need to set evaluations: 2 and the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
certificate_arn: '{{`{{ (datasource "certificate").Value }}`}}'
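
Putting the pieces of this thread together, a rough sketch of the two-step setup might look like the following. The parameter path, component name, and timeout are carried over from the snippets above, so treat it as a sketch rather than a verified config:

settings:
  templates:
    settings:
      # run the Go template engine twice over this manifest
      evaluations: 2
      gomplate:
        timeout: 5
        datasources:
          certificate:
            # rendered in the first evaluation, once .vars.account is available
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"

components:
  helmfile:
    argocd:
      vars:
        # the outer {{` ... `}} escapes the inner template so it survives
        # evaluation 1 and only runs in evaluation 2, after the datasource
        # URL has been resolved
        certificate_arn: '{{`{{ (datasource "certificate").Value }}`}}'
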
Dennis Bernardy avatar
Dennis Bernardy

Ah, that did the trick. Thank you very much!

Dennis Bernardy avatar
Dennis Bernardy

I just noticed that the datasource is used in every component. Can I limit this somehow?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this

settings:
  templates:
    settings:
      gomplate:
        timeout: 5
        datasources:
          certificate:
            url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"
          argocd-iam-role-arn:
            url: "aws+smp:///ai/infra/argocd/iam"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does not need to be a global section

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

settings is a first-class section, just like vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it can be defined at the component level

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it will be deep-merged with all the global settings sections and with the base component’s settings section, so you can define parts of it globally, and parts related to the component in the component’s settings)
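
For example, scoping the datasources to just the argocd component instead of a global settings section might look roughly like this (component and parameter names are taken from the snippets above; a sketch, not a verified config):

components:
  helmfile:
    argocd:
      settings:
        templates:
          settings:
            gomplate:
              timeout: 5
              datasources:
                certificate:
                  url: "aws+smp:///ai/infra/acm/certificate/{{ .vars.account }}.url"
                argocd-iam-role-arn:
                  url: "aws+smp:///ai/infra/argocd/iam"
      vars:
        certificate_arn: '{{`{{ (datasource "certificate").Value }}`}}'

Only this component then triggers the Parameter Store lookups; the rest of the stack renders without them.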

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Inherit Configurations in Atmos Stacks | atmos

Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.

Dennis Bernardy avatar
Dennis Bernardy

Nice. That works

2
Dennis Bernardy avatar
Dennis Bernardy

Now, one last question: is it possible to make these datasources optional? If I have multiple stacks that I want to describe but they have not been rolled out yet, I cannot view them because the Parameter Store is not yet filled with data

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can override the settings section for the same component in the other stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# In the other stacks
components:
  helmfile:
    argocd:
      vars:
        certificate_arn: 'foo'
        eks_role_arn: 'bar'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this way, it will take the hardcoded values just in the other stacks where you don’t have them in SSM yet

Bob avatar

Hello!

Wondering if there’s a way to dynamically generate stack names based on the directory and file name? I’ve been trying templating, but I’m lost. For context, I am just following the simple tutorial and have the following structure:

├── atmos.yaml
├── components
│   └── terraform
│       └── weather
│           ├── main.tf
│           ├── outputs.tf
│           ├── variables.tf
│           └── versions.tf
└── stacks
    ├── catalog
    │   └── station.yaml
    └── deploy
        └── core
            └── dev-eus2.yaml

I have the following name_template in atmos.yaml: name_template: '{{ .vars.namespace }}-{{ .vars.stage }}-{{ .vars.region }}', and the stack file dev-eus2.yaml contains:

vars:
  stage: dev
  namespace: core
  region: eus2

import:
  - catalog/station

components:
  terraform:
    station:
      vars:
        location: Stockholm
        lang: se

For dev-eus2.yaml, I just want the stack name “core-dev-eus2” to be auto-generated somehow, without having to define the vars in each yaml. I started looking into “templating/sprig” and imports. I can do imports, but that looked like so many duplicated variables that it may result in human error, so I want to figure out a way to do it dynamically. Thanks!

1
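
For context, name_template lives under the stacks section of atmos.yaml; a rough sketch for this layout (the included/excluded paths are assumptions about the repo, not taken from the thread):

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "catalog/**/*"
  name_template: '{{ .vars.namespace }}-{{ .vars.stage }}-{{ .vars.region }}'
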
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Inheritance is what enables you to avoid defining it in each file

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We made a deliberate design decision not to use filesystem paths as metadata, and the name template you defined looks good

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you describe the problem you are encountering?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And share the output of atmos list stacks and atmos list components

Bob avatar

My goal is just to minimize the need to put the same “labels” twice. In this instance, the namespace core is already the folder name, and the stage/region are in the name of the yaml (dev-eus2.yaml). I know this makes it a little inflexible if we need to change the folder/file names, but so far I have found that most mistakes are due to copy-paste and the need to explicitly define the vars or imports. I may be thinking of this incorrectly though, so I’m open to suggestions

atmos list components
station

atmos list stacks    
core-dev-eus2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There might be a way. I can play with it on Monday. The gist of it is that if you run atmos describe stacks you should see a parameter that represents the file name. Using gomplate functions, that can be split on a delimiter like /, and then you can use the parts in the name template.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse), I tried to set this:

stacks:
  name_template: '{{ (split "/" .atmos_stack_file | index 2) }}-{{ (split "/" .atmos_stack_file | index 3) }}-{{ (split "/" .atmos_stack_file | index 1) }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But when I run:

atmos list stacks

I get:

Error describing stacks: template: describe-stacks-name-template:1:14: executing "describe-stacks-name-template" at <.atmos_stack_file>: map has no entry for key "atmos_stack_file"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I should get:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos/examples/quick-start-advanced DEV-270-n* ⇣≡
❯ atmos list stacks
plat-ue2-dev
plat-ue2-prod
plat-ue2-staging
plat-uw2-dev
plat-uw2-prod
plat-uw2-staging
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And when I run atmos describe component, I see I have the atmos_stack_file

❯ atmos describe component vpc --stack plat-ue2-dev|grep atmos_stack_file
atmos_stack_file: orgs/acme/plat/dev/us-east-2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Bob regarding
My goal is just to minimize the need to put the same “labels” twice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use mixins for that, for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Catalog with Mixins | atmos

Component Catalog with Mixins Atmos Design Pattern
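
As a rough illustration of the mixin pattern for this layout, each mixin is just a small manifest that pins one var, and the deploy file imports the ones it needs (the paths here are hypothetical examples, not taken from the repo):

# stacks/mixins/stage/dev.yaml
vars:
  stage: dev

# stacks/mixins/region/eus2.yaml
vars:
  region: eus2

# stacks/deploy/core/dev-eus2.yaml
import:
  - mixins/namespace/core
  - mixins/stage/dev
  - mixins/region/eus2
  - catalog/station
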

Bob avatar

Thanks for looking into it and trying a few things!

I was just hoping to minimize the copy-paste issues that may happen. I can probably leverage OPA to prevent them; I was just thinking of a dynamic way to do it.

For instance, on this file: https://github.com/cloudposse/atmos/blob/main/examples/quick-start-advanced/stacks/orgs/acme/core/_defaults.yaml#L3

Someone can put “mixins/tenant/magic” instead of “mixins/tenant/core”

import:
  - orgs/acme/_defaults
  - mixins/tenant/core

Then to have the “core-dev-eus2” stack, I’ll need to do 3 imports:

import:
  - mixins/namespace/core
  - mixins/stage/dev
  - mixins/regions/eus2

It would have been nice if I could set defaults somehow so that I don’t have to import 3 things for each “deployment” yaml.

Nothing wrong with this approach, and it’s very flexible, just error-prone for some (maybe just me - I did mixins originally, but had a typo where I imported mixins/stage/qa instead of dev, so had to troubleshoot why core-dev-eus2 was not showing up)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i understand what you are saying, but it would be “a configuration for the stack configurations”, which is another abstraction to support and manage

maybe using OPA policies to check what is imported in the stacks, as you mentioned, is the way to go

1
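
For reference, the OPA check mentioned here would normally be wired up through Atmos component validation; a hedged sketch (the policy file name and what it asserts are hypothetical):

components:
  terraform:
    station:
      settings:
        validation:
          check-required-context:
            schema_type: opa
            # hypothetical policy that asserts namespace/stage/region were set
            # by the expected mixin imports
            schema_path: "station/validate-context.rego"
            description: Ensure the expected mixins were imported

Running atmos validate component station -s core-dev-eus2 would then fail fast on a missing or mistyped mixin import.
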

2024-11-30
