#atmos (2024-11)

2024-11-01

jose.amengual avatar
jose.amengual

with the atmos github actions I can plan/apply no problem, but now I want to destroy and I do not have a count = module.this.enabled ? 1 : 0. how do you guys destroy? I would like to be able to find the deleted yaml from the stack and destroy that one if possible
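For context, the pattern referenced here is the Cloud Posse `enabled` convention in Terraform (a minimal sketch; the resource and variable are illustrative):

```hcl
# Illustrative only: gating a resource on module.this.enabled, so setting
# enabled = false in the stack vars makes the next apply remove the resource
# without deleting the component from the stack file.
resource "aws_s3_bucket" "this" {
  count  = module.this.enabled ? 1 : 0
  bucket = var.bucket_name
}
```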

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Interesting. So we are working on supporting an enabled flag for components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The scope is not currently to destroy. However, it might be worth considering that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the current implementation, enabled would make the component “invisible”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

an alternative to commenting it out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I can see a component as having 3 states

• enabled

• disabled

• destroyed

jose.amengual avatar
jose.amengual

I think if the components are commented out or deleted from the stack file, describe affected should know what to do with it, or output a destroyed flag or something like that, so another job can use that matrix to destroy it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, that makes sense.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So enabled = true/false affects the visibility, but removal from the configuration is visible via atmos describe affected

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) what happens today in describe affected if the component is removed? is that surfaced?

jose.amengual avatar
jose.amengual

I renamed a component from pepetest to pepetest1; describe affected saw the new pepetest1 component and it got deployed, but the old one is still there in the cloud environment

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the key is do we have the information in describe affected JSON output

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The right behavior might not be implemented, but maybe we have the data there to act on

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if a component is removed, Atmos does not see it (it will not consider it affected) - this is the current implementation. This is b/c Atmos compares the current branch with a remote branch/tag/sha - if the current branch does not have the component, then it’s “not affected”.

Setting enabled: false is not “removal”, so describe affected sees that
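In stack YAML terms, the two situations being distinguished look like this (sketch; the component name is illustrative):

```yaml
components:
  terraform:
    pepetest:
      vars:
        enabled: false   # component still present in the stack,
                         # so describe affected sees the change
# vs. deleting the `pepetest` block entirely: the component disappears
# from the stack, and (today) describe affected no longer reports it
```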

jose.amengual avatar
jose.amengual

but with that, you need a two-step approach: one PR to set enabled: false and then another PR to remove the yaml from the stack file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would say we didn’t consider a complete component removal with describe affected - we need to revisit this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Added a task to the backlog


2024-11-02

github3 avatar
github3
10:28:40 PM

Improve terraform and helmfile help. Enable Go templating in the command field. Clean Terraform workspace before executing terraform init @aknysh (#759)

what

• Improve terraform and helmfile help
• Enable Go templating in the command field of stack config
• Clean Terraform workspace before executing terraform init

why

• Improve the help messages. When a user executes atmos terraform --help or atmos helmfile --help (or help for a subcommand), print a message describing the command and how to execute the terraform and helmfile help command
atmos terraform --help

image

• Enable Go templating in the command section of the stack config, in addition to the already supported sections.
You can now use Go templates in the following Atmos sections to refer to values in the same or other sections:
vars
settings
env
providers
overrides
backend
backend_type
component
metadata.component
command
Enabling Go templates in the command section allows specifying different Terraform/OpenTofu/Helmfile versions per component/stack, and getting the value from different Atmos sections or from external data sources

• Clean Terraform workspace before executing terraform init. When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that previously selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:

  1. default
  2. the previously used workspace

The prompt forces the user to always make a selection (which is error-prone) and also complicates running in CI/CD. The PR adds logic that deletes the `.terraform/environment` file from the component directory before executing `terraform init`. The `.terraform/environment` file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure. We delete the file before executing `terraform init` to prevent the Terraform prompt asking to select the default or the previously used workspace.
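The cleanup step can be sketched as follows (the component path is illustrative; this mirrors what the PR description says, not the actual Atmos source):

```shell
# Sketch of the workspace cleanup described above (hypothetical component path).
# Removing .terraform/environment clears the workspace marker, so a subsequent
# `terraform init` will not prompt for a workspace selection.
component_dir="components/terraform/vpc"
rm -f "$component_dir/.terraform/environment"
# terraform -chdir="$component_dir" init   # now runs without the prompt
```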
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual another one for you

jose.amengual avatar
jose.amengual

this feels like Xmas


2024-11-03

Kalman Speier avatar
Kalman Speier

hey folks, for some reason atmos generates the wrong terraform workspace name. i have a component named nats and a stack named dev, and instead of dev-nats the workspace is simply dev. what could cause that?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Something is wrong with your name_pattern or name_template

Kalman Speier avatar
Kalman Speier

i didn’t change those.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Share your atmos config, if you can

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Customize Stack Behavior | atmos

Use the atmos.yaml to configure where Atmos will discover stack configurations.

Kalman Speier avatar
Kalman Speier
base_path: .

components:
  terraform:
    command: tofu
    base_path: components/terraform
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true

stacks:
  base_path: stacks
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"

workflows:
  base_path: stacks/workflows

templates:
  settings:
    enabled: true
    sprig:
      enabled: true

logs:
  file: /dev/stderr
  level: Info
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, I think I initially misunderstood.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you setting workspace key prefix anywhere?

Kalman Speier avatar
Kalman Speier

nope

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) any ideas

Kalman Speier avatar
Kalman Speier

workspaces are part of the terraform state, right?

Kalman Speier avatar
Kalman Speier

i’ve tried to clean all tf files and deleted this workspace, but it still came back named dev, so maybe the default workspace holds some information?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, in atmos we use one workspace for each instance of a component deployed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you updated to the latest atmos? We just fixed a problem related to workspaces and changing backends

Kalman Speier avatar
Kalman Speier

yes, i’m using the very latest

Kalman Speier avatar
Kalman Speier
Workspace "dev" doesn't exist.

You can create this workspace with the "new" subcommand
or include the "-or-create" flag with the "select" subcommand.
Created and switched to workspace "dev"!
Kalman Speier avatar
Kalman Speier

but i bet it’s atmos that is switching to the workspace, so the name dev comes from atmos, not from the state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, atmos dynamically computes the workspace name and switches to it

Kalman Speier avatar
Kalman Speier

i see.

Kalman Speier avatar
Kalman Speier
vars:
  stage: dev

import:
  - deploy/_defaults
  - catalog/do/project
  - catalog/do/doks
  - catalog/nats

components:
  terraform:
    project:
      vars:
        name: "platform-{{ .stack }}"
        environment: Development
    cluster:
      vars:
        name: doks-cluster-1
        project: '{{ (atmos.Component "project" .stack).outputs.id }}'
    nats:
      vars:
        kube_host: '{{ (atmos.Component "cluster" .stack).outputs.kube_host }}'
        kube_token: '{{ (atmos.Component "cluster" .stack).outputs.kube_token }}'
        kube_cert: '{{ (atmos.Component "cluster" .stack).outputs.kube_cert }}'
Kalman Speier avatar
Kalman Speier

this is my dev stack file

Kalman Speier avatar
Kalman Speier

and interestingly for the project and the cluster the names are generated correctly

Kalman Speier avatar
Kalman Speier
❯ tofu -chdir=components/terraform/nats workspace list
  default
* dev
  dev-cluster
  dev-project
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


i have a component named nats and stack named dev and instead of dev-nats the workspace is simply dev

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the correct behavior for Atmos components that don’t inherit from other components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, the TF workspace is simply the stack name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

only if you have a derived component (inherited from a base component), then TF workspace will be <stack>+<component>
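A sketch of that rule (names are illustrative; assuming the stack is dev):

```yaml
components:
  terraform:
    nats/defaults:          # hypothetical abstract base component
      metadata:
        type: abstract
    nats:
      metadata:
        component: nats
        inherits:
          - nats/defaults   # derived component -> workspace "dev-nats"
    cluster:
      metadata:
        component: do/doks  # no inheritance -> workspace is just "dev"
```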

Kalman Speier avatar
Kalman Speier

hmm, what do you mean by “inherit”? the do cluster and project names are correct and they don’t inherit from anything. or am i missing something here.

Kalman Speier avatar
Kalman Speier
components:
  terraform:
    nats:
      metadata:
        component: nats
      vars:
        ...

vs

components:
  terraform:
    cluster:
      metadata:
        component: do/doks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Inherit Configurations in Atmos Stacks | atmos

Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.

Kalman Speier avatar
Kalman Speier

ok, i read that before. but i didn’t use that in any catalog so far.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the two examples above, both nats and cluster Atmos components do not inherit from any other Atmos components, so the TF workspaces for both of them will be dev

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so what you have is 100% correct, the workspaces for these two components are just dev - the stack name

Kalman Speier avatar
Kalman Speier
❯ tofu -chdir=components/terraform/do/doks workspace list
  default
  dev
* dev-cluster
  dev-project
Kalman Speier avatar
Kalman Speier

maybe those were generated wrongly because i made a lot of changes back and forth since.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, looks like it

Kalman Speier avatar
Kalman Speier

anyhow, i’m fine with a single dev workspace named after the stack, as long as the states are correctly separated.

Kalman Speier avatar
Kalman Speier

i’m not fully familiar with tf workspaces, but does that mean that if all these components are in the same workspace, they share the state?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each component has workspace_key_prefix - it’s usually generated by Atmos, but you can override it per component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

workspace_key_prefix is, if you look at the backend s3 bucket, the top-level folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so each component will have its own top-level folder in the bucket, and each stack, in a separate TF workspace, will have its own subfolder in that folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, each component state is separated from any other component state (diff folders in the state bucket)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that it’s still in the same backend (same S3 bucket). If you want to separate backends (e.g. per tenant/OU, per account, etc.), you need to create and configure multiple backends
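Putting that together, the S3 layout looks roughly like this (bucket and component names are illustrative):

```
s3://tf-state-bucket/
├── vpc/                      # workspace_key_prefix (one folder per component)
│   └── dev/                  # TF workspace (one subfolder per stack)
│       └── terraform.tfstate
└── eks/
    └── dev/
        └── terraform.tfstate
```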

Kalman Speier avatar
Kalman Speier

sure. but i don’t see workspace_key_prefix generated anywhere.

Kalman Speier avatar
Kalman Speier
{
  "terraform": {
    "backend": {
      "gcs": {
        "bucket": "mw-tf-state",
        "encryption_key": "...",
        "prefix": "platform/infra"
      }
    }
  }
}
Kalman Speier avatar
Kalman Speier

it’s gcs, not s3 actually.

Kalman Speier avatar
Kalman Speier

_defaults.yaml:

terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: mw-tf-state
      prefix: platform/infra
      encryption_key: '{{ env "GCS_ENCRYPTION_KEY" }}'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

GCP has prefix

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which is the same

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
If the prefix is not specified for a component, Atmos will use the component name (my-component in the example above) to auto-generate the prefix. In the component name, all occurrences of / (slash) will be replaced with - (dash).
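That substitution rule can be illustrated directly (a sketch of the documented behavior, not Atmos’s actual code):

```shell
# Illustrative check of the documented rule: a component named "do/doks"
# yields an auto-generated prefix "do-doks" (slashes replaced with dashes).
component="do/doks"
prefix=$(printf '%s' "$component" | tr '/' '-')
echo "$prefix"   # prints "do-doks"
```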
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t need to hardcode it here

backend:
    gcs:
      bucket: mw-tf-state
      prefix: platform/infra
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, all components will use the same prefix

Kalman Speier avatar
Kalman Speier

hmm ok, let me check that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review the doc, if you don’t specify prefix, Atmos will auto-generate it

Kalman Speier avatar
Kalman Speier

thank you!

Kalman Speier avatar
Kalman Speier

so if i understand it correctly, without setting the prefix atmos will generate it, and because of that my component states will end up in separate folders in the gcs bucket, so even though they share a workspace, the states are separated.

Kalman Speier avatar
Kalman Speier

good to know that :)

Kalman Speier avatar
Kalman Speier

only problem is that i prefer to store them in some folder instead of the root of the bucket, but i can live with that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can specify the prefix per component, in which case Atmos will just use it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i can review your config, let me know

Kalman Speier avatar
Kalman Speier

it’s fine. this bucket is solely for tf states so it’s ok even in the root. i prefer to leave it to atmos to generate.

Kalman Speier avatar
Kalman Speier

on a different topic while we chat.. any chance of adding support for a command like this in atmos.yaml:

components:
  terraform:
    command: xy command -- tofu
Kalman Speier avatar
Kalman Speier

so support command with double dash

Kalman Speier avatar
Kalman Speier

it would be perfect. that way i could load secrets as env vars, and i wouldn’t need custom commands.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t remember if command: xy command -- tofu is supported now (need to look at the code). Did you test it?

Kalman Speier avatar
Kalman Speier

yes, i just tested it; unfortunately it’s not working.

Kalman Speier avatar
Kalman Speier
atmos terraform plan cluster --stack dev

template: all-atmos-sections:100:35: executing "all-atmos-sections" at <atmos.Component>: error calling Component: exec: "op run --no-masking --env-file=.env -- tofu": executable file not found in $PATH
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have done that with asdf and it works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, we’ll create a task for this, thank you

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try to quote it?

Kalman Speier avatar
Kalman Speier

thanks a lot!!

Kalman Speier avatar
Kalman Speier

@Erik Osterman (Cloud Posse) do you have an example maybe with asdf?

Kalman Speier avatar
Kalman Speier

i’ve just tried with quotes but it’s not working.

Kalman Speier avatar
Kalman Speier

trying with a small shell script:

command: ./optofu.sh

but still not working
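For reference, the wrapper-script approach being attempted would look something like this (hypothetical optofu.sh; assumes the 1Password CLI op and tofu are on PATH, and atmos.yaml sets command: ./optofu.sh):

```sh
#!/usr/bin/env sh
# Hypothetical wrapper: load secrets as env vars via `op run`, then delegate
# to tofu, forwarding whatever arguments Atmos passes.
exec op run --no-masking --env-file=.env -- tofu "$@"
```

Whether this works still depends on how Atmos invokes the configured command, which is what the discussion below turns on.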

Kalman Speier avatar
Kalman Speier
#762 add support to exec shell commands with args

what

Add support for shell commands.

why

To support complex commands, for example:

components:
  terraform:
    command: op run --no-masking --env-file=.env -- tofu

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks, i’ll review it today. Did you test it?

Kalman Speier avatar
Kalman Speier

roughly. are there any go tests i can run?

Kalman Speier avatar
Kalman Speier

strange, but for some reason it’s not working with my atmos config, while it’s working fine with the atmos.yaml in the repository. i will dig into it.

Kalman Speier avatar
Kalman Speier

problem is that when the template is executed it uses tfexec

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah, yes, tfexec doesn’t understand those commands, it needs terraform

2024-11-04

Kalman Speier avatar
Kalman Speier

possible to organize a few smaller components into one catalog?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Of course… this is what we frequently do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can create a catalog stack file for a solution.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. here’s how we do “EKS” and all related components

Kalman Speier avatar
Kalman Speier

ok. is there any related example in the repo?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here you can see we created a default cluster config that imports a bunch of other components

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Those could be inline, but we chose to import them

Kalman Speier avatar
Kalman Speier

thanks!


2024-11-05

Kalman Speier avatar
Kalman Speier

what’s the best way to share vars between some components but not all of them? if i place them in the stack yaml vars section, i get warnings from the components that are not using them:
Warning: Value for undeclared variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, please don’t use globals

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are a few ways to share vars b/w components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. create a base abstract component (with the default values) and inherit it in the other components

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Abstract Component | atmos

Abstract Component Atmos Design Pattern

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And multiple inheritance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Inherit Configurations in Atmos Stacks | atmos

Inheritance provides a template-free way to customize Stack configurations. When combined with imports, it provides the ability to combine multiple configurations through ordered deep-merging of configurations. Inheritance is how you manage configuration variations, without resorting to templating.

Kalman Speier avatar
Kalman Speier

thx!


2024-11-06

Dennis Bernardy avatar
Dennis Bernardy

Hey, when using helmfile with atmos, it requires helm_aws_profile_pattern and cluster_name_pattern to be set. Is there a way to use a name_template like in the stack configuration?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s actually not required and we improved the demo and examples here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This uses k3s

Dennis Bernardy avatar
Dennis Bernardy

But if I want to use it with eks I have to set it, no? Your examples all set use_eks to false

Dennis Bernardy avatar
Dennis Bernardy

Or another question: if I use use_eks: false, how does atmos know which kubernetes cluster to deploy to? Will it use my current context? Can I dynamically change the context in the stack configuration?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s only if you want the automatic kubeconfig creation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

use_eks could probably be better named. True, it’s for when you use EKS, but it’s not required in order to use EKS.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the end, all you need is a kubeconfig. With use_eks set to false, it just means you need to manage the kubeconfig yourself.

Hao Wang avatar
Hao Wang

Atmos may need a RAG application for QA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would love a bot that could auto answer with links to threads

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and TL;DR

Hao Wang avatar
Hao Wang

exactly, RAG can do that with its metadata

Hao Wang avatar
Hao Wang

and it needs a custom integration with slack; there should be an existing SaaS service on the market that does something similar

Hao Wang avatar
Hao Wang

I’m looking into RAG recently, and it is not hard to write one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And want to train it on atmos docs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and examples

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you also checked out danswer?

Hao Wang avatar
Hao Wang

just heard of it, saw some similar projects, will take a look at it

Hao Wang avatar
Hao Wang

after a quick review: danswer uses langchain and FastAPI, so it should be a reliable project to use

Hao Wang avatar
Hao Wang
Customer Support - Danswer Documentationattachment image

Help your customer support team instantly answer any question across your entire product.

Hao Wang avatar
Hao Wang

dived into the project and gave it a test; it seems it is not easy to get it fully up, e.g. i tried with a public #atmos web page, but QA failed. i used a local LLM, so i guess it may work with a public LLM service. side note: the project has a big vision to be a platform, so the code is abstracted very well.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In what way did the QA fail?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You had it index the atmos slack channel in the public web archive?

Hao Wang avatar
Hao Wang
api_server-1              | Traceback (most recent call last):
api_server-1              |   File "/app/danswer/chat/process_message.py", line 731, in stream_chat_message_objects
api_server-1              |     for packet in answer.processed_streamed_output:
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 280, in processed_streamed_output
api_server-1              |     for processed_packet in self._get_response([llm_call]):
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 245, in _get_response
api_server-1              |     yield from response_handler_manager.handle_llm_response(stream)
api_server-1              |   File "/app/danswer/llm/answering/llm_response_handler.py", line 69, in handle_llm_response
api_server-1              |     for message in stream:
api_server-1              |   File "/app/danswer/llm/chat_llm.py", line 386, in _stream_implementation
api_server-1              |     for part in response:
api_server-1              |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/ollama.py", line 427, in ollama_completion_stream
api_server-1              |     raise e
api_server-1              |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/ollama.py", line 403, in ollama_completion_stream
api_server-1              |     response_content = "".join(content_chunks)
api_server-1              |                        ^^^^^^^^^^^^^^^^^^^^^^^
api_server-1              | TypeError: sequence item 19: expected str instance, NoneType found
Hao Wang avatar
Hao Wang

I used one of archived page, https://archive.sweetops.com/atmos/2024/08/

SweetOps #atmos for August, 2024

SweetOps Slack archive of #atmos for August, 2024.

Hao Wang avatar
Hao Wang

should be related to ollama python lib

Hao Wang avatar
Hao Wang

litellm’s ollama lib

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, understood.. so it got stuck even just indexing one page

Hao Wang avatar
Hao Wang

indexing is ok; the failure should be in the answer-retrieval part

Hao Wang avatar
Hao Wang

side note: found this one, https://ollama.com/blog/continue-code-assistant looks useful for code refactoring

An entirely open-source AI code assistant inside your editor · Ollama Blogattachment image

An entirely open-source AI code assistant inside your editor

Ryan avatar

Good morning gents, just checking in here - is this the module usually used to automate remote backend stand-up - https://github.com/cloudposse/terraform-aws-tfstate-backend

cloudposse/terraform-aws-tfstate-backend

Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Erik Osterman (Cloud Posse)


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, but in the context of atmos, we use our component

Ryan avatar

ty, that’s perfect. never got a chance to walk through the process using automation. appreciate the quick response.

Ryan avatar

coming back here, this is cool. sorry, i was stuck in my head on the chicken/egg scenario of the backend, and the cold start stuff helped a lot.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Glad to hear!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Initializing the Terraform State S3 Backend | The Cloud Posse Reference Architecture

Follow these steps to configure and initialize the Terraform state backend using Atmos, ensuring proper setup of the infrastructure components and state management.

Ryan avatar

ooo thank you.

Ryan avatar

I will say I think our internal Atmos is on an older ver

Ryan avatar

I’m kind of at a design decision with regards to a second updated atmos for my new region + new backend, idk yet

Derrick Hammer avatar
Derrick Hammer

Hello, I just found this project and am trying to plan how i’m going to design things. I would like input on how atmos requires git repos to be structured with respect to monorepo vs multirepo. I intend to create modules in one repo and create environments in another. Also curious about submitting to the terraform/OpenTofu registry; I think they require one repo per module or something. Would appreciate input from others with experience!

Kudos.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey @Derrick Hammer great to hear you’re checking it out.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an example mono repo structured here: https://github.com/cloudposse-examples/infra-demo-atmos-pro/tree/main

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I intend to create modules in 1 repo and create environments in another
Perfect, check out vendoring: https://atmos.tools/core-concepts/vendor/

Vendoring | atmos

Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: demo-vendoring
  description: Atmos vendoring manifest for Atmos demo component library
spec:
  # Import other vendor manifests, if necessary
  imports: []

  sources:
    - component: "github/stargazers"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      included_paths:
        - "**/*.tf"
        - "**/*.tfvars"
        - "**/*.md"
      tags:
        - demo
        - github

    - component: "weather"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      tags:
        - demo

    - component: "ipinfo"
      source: "github.com/cloudposse/atmos.git//examples/demo-library/{{ .Component }}?ref={{.Version}}"
      version: "main"
      targets:
        - "components/terraform/{{ .Component }}/{{.Version}}"
      tags:
        - demo

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also provide a well-maintained reference architecture for AWS (which is how Cloud Posse makes money). The docs are public here: https://docs.cloudposse.com

The Cloud Posse Reference Architecture

The turnkey architecture for AWS, Datadog & GitHub Actions to get up and running quickly using the Atmos open source framework.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, we also have some design patterns that might be helpful.

https://atmos.tools/design-patterns/organizational-structure-configuration

Organizational Structure Configuration | atmos

Organizational Structure Configuration Atmos Design Pattern

2024-11-07

jose.amengual avatar
jose.amengual

Hello, has atmos describe affected always output the component that changed, or is that a somewhat recent change?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure the question

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The whole point is to output the components and stacks that changed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(so it’s always done that)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It also conveys what triggered the change as some metadata in the JSON

jose.amengual avatar
jose.amengual

ok, for some weird reason I thought it was only yaml updates to stacks

jose.amengual avatar
jose.amengual

it has been a coincidence that I have always done stack.yaml changes together with component changes

jose.amengual avatar
jose.amengual

and since this is the first time I use describe affected in a pipeline, I just realized yesterday that a component change will show as a change

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the command checks a lot of things: the atmos stack for the component, the terraform code in the component folder, and terraform modules in any other folder that the current component uses (atmos detects this using terraform metadata about the component)
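
For reference, the JSON output of atmos describe affected looks roughly like this (an illustrative sketch, not verbatim output; the component and stack names are made up):

```json
[
  {
    "component": "pepetest1",
    "component_type": "terraform",
    "stack": "plat-use1-dev",
    "affected": "stack.vars"
  }
]
```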

jose.amengual avatar
jose.amengual

ahhh interesting

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, it checks the depends_on attributes in the stacks: if a component depends on an external file or folder, and the file or folder changes, the component will be considered affected
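
A hedged stack-manifest sketch of that depends_on behavior (the component and path names are hypothetical):

```yaml
components:
  terraform:
    top-level-component:
      settings:
        depends_on:
          1:
            # affected whenever this file changes
            file: "examples/tests/test-file.txt"
          2:
            # affected whenever anything in this folder changes
            folder: "examples/tests/test-folder"
```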

github3 avatar
github3
08:52:08 PM

Add .coderabbit.yaml for CodeRabbit integration configuration settings @osterman (#758)

what

• Add CodeRabbit config • Tune the prompt • Enable Linear integration

why

• Want to work towards a config that is less noisy (although this is probably not the PR that solves that)

Enhancements

handle invalid command error @pkbhowmick (#766)

what

• Improved error handling for command arguments, providing clearer feedback when invalid commands are used • Enhanced logging to include a list of available commands when an error occurs due to invalid arguments.

why

• Better user experience

working example

Before:

Screenshot 2024-11-08 at 1 56 30 AM

After fix:

Screenshot 2024-11-08 at 1 57 12 AM

2024-11-08

github3 avatar
github3
02:06:28 PM

Skip component if metadata.enabled is set to false @pkbhowmick (#756)

what

• Skip component if metadata.enabled is set to false • Added documentation on using the metadata.enabled parameter to conditionally exclude components in deployment

why

• Allow disabling Atmos components from being processed and provisioned by setting metadata.enabled to false in the stack manifest w/o affecting/changing/disabling the Terraform components (e.g. w/o setting the enabled variable to false)

demo

image
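
A minimal stack-manifest sketch of the new flag (the component name is hypothetical):

```yaml
components:
  terraform:
    my-component:
      metadata:
        # Atmos skips processing/provisioning this component entirely;
        # the Terraform code and its own `enabled` variable are untouched
        enabled: false
```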

Miguel Zablah avatar
Miguel Zablah

@Andriy Knysh (Cloud Posse) if we set metadata.enabled to false, will it also be ignored in CI/CD? Or do we also have to set settings.github.actions_enabled to false?


jose.amengual avatar
jose.amengual

related to the question: describe affected will show it as change?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess the second question relates to the first one

@Pulak Kanti Bhowmick can you please confirm what atmos describe affected will return if a component is disabled using metadata.enabled: false

jose.amengual avatar
jose.amengual

if I add anything to metadata like sdfsfdsafdsdf: ff it shows on atmos describe affected output

jose.amengual avatar
jose.amengual

using atmos latest

Pulak Kanti Bhowmick avatar
Pulak Kanti Bhowmick

Hi @Andriy Knysh (Cloud Posse), let me check and get back here

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should be affected because it’s changed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s up to the caller to decide what to do with the disabled component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual I believe you were asking for this functionality (metadata.enabled) ^


2024-11-10

github3 avatar
github3
08:23:36 PM

Wrapper for long lines in help @Cerebrovinny (#770)

what

• Implemented a new terminal-aware text wrapping system for CLI help output • Added responsive width handling based on terminal size with fallback values • Introduced custom usage template handling for consistent help text formatting • Created dedicated terminal writer component for automatic text wrapping

why

• Improves readability of CLI help text by ensuring content fits within terminal width • Provides better user experience with dynamic text wrapping based on terminal size • Standardizes help text formatting across all commands • Fixes potential issues with text overflow in narrow terminal windows

references

Before:
Screenshot 2024-11-09 at 16 15 26
Screenshot 2024-11-09 at 16 31 54

After:
Screenshot 2024-11-09 at 18 19 56

NotWillFarrell avatar
NotWillFarrell

Hi, I’ve been reading up on your documentation on Atmos and the reference architecture for the past 1 or 2 weeks and some things are not clicking in my head. I hope you can give me some pointers in the right direction because it is not easy to find this on the Internet.

For AWS, I see references in the mixins to regions, tenants and stages, but the sample info only gives me a name, and I’m not seeing how it relates to, let’s say, an Account ID. This, for example:

vars:
  stage: sandbox

# Other defaults for the `sandbox` stage/account

Am I overlooking some documentation part where I can see what can be a default for OUs/Accounts? There should be some relationship to AWS terminology, right?

I hope you can give me a hint. Thanks in advance!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For additional context, do you mean these docs:

docs.cloudposse.com (Cloud Posse’s reference architecture)

Or https://atmos.tools?

That will help me address any confusion.

NotWillFarrell avatar
NotWillFarrell

Hi, there too I don’t see the relationship made between, let’s say, a stage and the account ID.

It’s one of those pieces that would make Atmos click in my head. Architecture-wise I can relate catalogs/mixins/stacks etc., but somewhere down the line you can’t apply to a ‘sandbox’ unless that ‘sandbox’ has an account ID… right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So just to disambiguate some things, in cloudposse’s refarch (docs.cloudposse.com), we by convention tie a stage (dev, staging, production) to an AWS Account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In our refarch, we have something called the account-map https://docs.cloudposse.com/components/library/aws/account-map/

account-map | The Cloud Posse Reference Architecture

This component is responsible for provisioning information only: it simply populates Terraform state with data (account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what handles that mapping

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what allows us to refer to everything by name, instead of thinking in terms of account IDs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But it’s also what can make using the Cloud Posse refarch harder with brownfield environments https://atmos.tools/core-concepts/components/terraform/brownfield/

Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All that said, I want to point out that these are not Atmos conventions, these are Cloud Posse conventions in our reference architecture for AWS that uses Atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The account-map returns an IAM role that is used to access a given account

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the IAM role will have the AWS account ID information

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(that’s the missing gap, I believe in the understanding)

NotWillFarrell avatar
NotWillFarrell

Thanks for your answers.

I saw a file on GH (see below) and it gives me some direction. But maybe it’s me, and then I think “what are all the possible defaults?”

ah..that last one.

# Global variables used for account maps, role maps and other global values
vars:
  account_map:
    dev: 222222222222
    staging: 333333333333
    automation: 111111111111
    prod: 444444444444
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, good question. I’ll answer that in a sec.

(Also, in a future release of our refarch, we intend to move away from the account-map convention to make it easier to use in brownfield environments)

NotWillFarrell avatar
NotWillFarrell

Ah…that’s where my head indeed is: How on earth would I use this in brownfield situations? We have a lot of those; Customers that tried first themselves and then start looking for a partner.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Understandable… so what we have as our refarch was born from 99% of our engagements, which are what we call “cold starts”, so brownfield considerations were seldom a concern. As we now reach a much wider audience, that’s coming up more and more often.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re doing some groundwork changes first, related to https://github.com/cloudposse/terraform-aws-components/issues/1177

#1177 :loudspeaker: Upcoming Migration of Components to a New GitHub Organization (CODE FREEZE 11/12 - 11/17)

Hello, Cloud Posse Community!

We’re excited to announce that starting on November 12, 2024, we will begin migrating each component in the cloudposse/terraform-aws-components repository to individual repositories under a new GitHub organization. This change aims to improve the stability, maintainability, and usability of our components.

Why This Migration?

Our goal is to make each component easier to use, contribute to, and maintain. This migration will allow us to:

• Leverage terratest automation for better testing. • Implement semantic versioning to clearly communicate updates and breaking changes. • Improve PR review times and accelerate community contributions. • Enable Dependabot automation for dependency management. • And much more!

What to Expect Starting November 12, 2024

• Migration Timeline: The migration will begin on November 12 and is anticipated to finish by the end of the following week. • Code Freeze: Starting on November 12, this repository will be set to read-only mode, marking the beginning of a code freeze. No new pull requests or issues will be accepted here after that date. • New Contribution Workflow: After the migration, all contributions should be directed to the new individual component repositories. • Updated Documentation: To support this transition, we are updating our documentation and cloudposse-component updater. • Future Archiving: In approximately six months, we plan to archive this repository and transfer it to the cloudposse-archives organization.

Frequently Asked Questions

Does this affect Terraform modules? No, only the terraform-aws-components repository is affected. Our Terraform modules will remain where they are.


We are committed to making this transition as seamless as possible. If you have any questions or concerns, please feel free to post them in this issue. Your feedback is important to us, and we appreciate your support as we embark on this new chapter!

Thank you,
The Cloud Posse Team

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And once we have that in place, we can start making more breaking changes to our components to support brownfield - while keeping our ecosystem stable.

NotWillFarrell avatar
NotWillFarrell

Sounds really promising! I’m going to do some dishes over here (CET) and then just start with this afterwards.

Thanks for all your time!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Jeremy G (Cloud Posse) where do we have a full definition of the static backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Brownfield Considerations | atmos

There are some considerations you should be aware of when adopting Atmos in a brownfield environment. Atmos works best when you adopt the Atmos mindset.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I don’t think we have a full definition anywhere. In part, that is because it is too simple. See the examples here. You just set the remote_state_backend_type to static and set the outputs as a map under remote_state_backend.static. Although the example shows setting backend_type: static, that is a bit misleading because you cannot run terraform plan etc. with a static backend.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, what about a static account-map? @NotWillFarrell is working in a brownfield

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

My recommendation for brownfield is to use a static backend for account to create the list of accounts, and use the real account-map. Account map is an information collector and processor; it does not manage any cloud resources. If account-map contains information for accounts you are not using, that should not cause problems.

The example @NotWillFarrell cited:

# Global variables used for account maps, role maps and other global values
vars:
  account_map:
    dev: 222222222222
    staging: 333333333333
    automation: 111111111111
    prod: 444444444444

is from before account-map got that information from account. Convert that to a static backend for account and I think account-map will work fine. If not, we should fix account-map so that it does.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example of the static backend for account

components:
  terraform:
    account:
      backend:
        s3:
          role_arn: null
      vars:
        enabled: false
      # Use `static` remote state to configure the attributes (outputs) for the existing organization, OUs and accounts
      remote_state_backend_type: static
      remote_state_backend:
        static:
          account_arns:
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
            - "arn:aws:organizations::xxxxxxxxxxx:account/o-xxxxxxxxxxx/xxxxxxxxxxx"
          account_ids:
            - "xxxxxxxxxxx"
            - "xxxxxxxxxxx"
            - "xxxxxxxxxxx"
          account_info_map: {}
          account_names_account_arns: {}
          account_names_account_ids: {}
          organization_arn: "arn:aws:organizations::xxxxxxxxxxx:organization/o-xxxxxxxxxxx"
          organization_id: "o-xxxxxxxxxxx"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all the outputs returned from the account component (https://github.com/cloudposse/terraform-aws-components/blob/main/modules/account/outputs.tf) can be added to the static backend and then used in other components as if they were returned from the account remote state

output "account_arns" {
  value       = local.all_account_arns
  description = "List of account ARNs (excluding root account)"
}

output "account_ids" {
  value       = local.all_account_ids
  description = "List of account IDs (excluding root account)"
}

output "organizational_unit_arns" {
  value       = local.organizational_unit_arns
  description = "List of Organizational Unit ARNs"
}

output "organizational_unit_ids" {
  value       = local.organizational_unit_ids
  description = "List of Organizational Unit IDs"
}

output "account_info_map" {
  value       = local.account_info_map
  description = <<-EOT
    Map of account names to
      eks: boolean, account hosts at least one EKS cluster
      id: account id (number)
      stage: (optional) the account "stage"
      tenant: (optional) the account "tenant"
    EOT
}

output "account_names_account_arns" {
  value       = local.account_names_account_arns
  description = "Map of account names to account ARNs (excluding root account)"
}

output "account_names_account_ids" {
  value       = local.account_names_account_ids
  description = "Map of account names to account IDs (excluding root account)"
}

output "organizational_unit_names_organizational_unit_arns" {
  value       = local.organizational_unit_names_organizational_unit_arns
  description = "Map of Organizational Unit names to Organizational Unit ARNs"
}

output "organizational_unit_names_organizational_unit_ids" {
  value       = local.organizational_unit_names_organizational_unit_ids
  description = "Map of Organizational Unit names to Organizational Unit IDs"
}

output "organization_id" {
  value       = local.organization_id
  description = "Organization ID"
}

output "organization_arn" {
  value       = local.organization_arn
  description = "Organization ARN"
}

output "organization_master_account_id" {
  value       = local.organization_master_account_id
  description = "Organization master account ID"
}

output "organization_master_account_arn" {
  value       = local.organization_master_account_arn
  description = "Organization master account ARN"
}

output "organization_master_account_email" {
  value       = local.organization_master_account_email
  description = "Organization master account email"
}

output "eks_accounts" {
  value       = local.eks_account_names
  description = "List of EKS accounts"
}

output "non_eks_accounts" {
  value       = local.non_eks_account_names
  description = "List of non EKS accounts"
}

output "organization_scp_id" {
  value       = join("", module.organization_service_control_policies.*.organizations_policy_id)
  description = "Organization Service Control Policy ID"
}

output "organization_scp_arn" {
  value       = join("", module.organization_service_control_policies.*.organizations_policy_arn)
  description = "Organization Service Control Policy ARN"
}

output "account_names_account_scp_ids" {
  value       = local.account_names_account_scp_ids
  description = "Map of account names to SCP IDs for accounts with SCPs"
}

output "account_names_account_scp_arns" {
  value       = local.account_names_account_scp_arns
  description = "Map of account names to SCP ARNs for accounts with SCPs"
}

output "organizational_unit_names_organizational_unit_scp_ids" {
  value       = local.organizational_unit_names_organizational_unit_scp_ids
  description = "Map of OU names to SCP IDs"
}

output "organizational_unit_names_organizational_unit_scp_arns" {
  value       = local.organizational_unit_names_organizational_unit_scp_arns
  description = "Map of OU names to SCP ARNs"
}

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

The only outputs of account used by account-map are: • eks_accounts • non_eks_accounts • account_info_map. The first 2 are just lists of account names, indicating which accounts are expected to have EKS deployments and which are not. Every account should be in one of those lists.

account_info_map is a map of account name to various pieces of information. In the real account output, there is more information, but I think you only need to fill in: • eks: boolean, account hosts at least one EKS cluster • id: account id (number) • stage: the account “stage” • tenant: (optional) the account “tenant”, or null if you are not using tenant. Example:

account_info_map:
  artifacts:
    eks: false
    id: "123456789012"
    stage: "artifacts"
    tenant: null
  dev:
    eks: true
    id: "210987654321"
    stage: "dev"
    tenant: null

The other outputs of account are used by components that manage the accounts and organizations, but you can skip all that if you already have that set up.

cc: @Erik Osterman (Cloud Posse)

2024-11-11

github3 avatar
github3
05:25:12 PM

feat: additional atmos docs parameters for specifying width, using auto-styling and color profile, and preserving new lines @RoseSecurity (#757)

what

atmos_docs

• Add an additional atmos docs flag for specifying the width of markdown output • Utilizing auto-styling based on light or dark mode preferences instead of hardcoding to dark • Preserving new lines with rendered markdown

why

• Enhance the user experience for interacting with documentation. The width parameter is useful for users who prefer seeing wider output for Terraform docs-generated tables and is defined in the atmos.yaml:

settings:
  docs:
    max-width: 200

references

glow docs

github3 avatar
github3
05:51:44 PM

Change PS1 to show that Atmos is in the atmos terraform shell mode @pkbhowmick (#761)

what

• Change PS1 to show that Atmos is in the atmos terraform shell mode • Customized command prompt for the interactive shell with the addition of the “atmos>” prefix • Enhanced shell behavior by removing the unnecessary -l flag for non-Windows systems and implementing a fallback to sh if bash is unavailable. • Improved handling for the /bin/zsh shell with additional flags

why

• Improve user experience

test

image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB this closes something you asked for a long time ago

Change PS1 to show that Atmos is in the atmos terraform shell mode @pkbhowmick (#761)

what

• Change PS1 to show that Atmos is in the atmos terraform shell mode • Customized command prompt for the interactive shell with the addition of the “atmos>” prefix • Enhanced shell behavior by removing the unnecessary -l flag for non-Windows systems and implementing a fallback to sh if bash is unavailable. • Improved handling for the /bin/zsh shell with additional flags

why

• Improve user experience

test

image

RB avatar

I saw that! Thank you very much

2024-11-12

tretinha avatar
tretinha

Does anybody have thoughts on how to manage ECS image tags? We are used to creating different image tags whenever something is ready to be tested or to go to production in each project’s pipeline. Say I have an application that just generated a new Docker tag corresponding to some new changes, and that tag is now saved in ECR. How can I reflect this image tag in my atmos/infrastructure repository? At first I thought about the app opening a PR against the atmos repo, changing the line that corresponds to the image, but I’m unsure if this is the best way. How do you typically deal with this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our ECS components solve this using SSM parameters

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
ECS with ecspresso | The Cloud Posse Reference Architecture

We use the ecspresso deployment tool for Amazon ECS to manage ECS services using a code-driven approach, alongside reusable GitHub Action workflows. This setup allows tasks to be defined with Terraform within the infrastructure repository, and task definitions to reside alongside the application code. Ecspresso provides extensive configuration options via YAML, JSON, and Jsonnet, and includes plugins for enhanced functionality such as Terraform state lookups.

tretinha avatar
tretinha

I’ll take a look. Thank you!

github3 avatar
github3
01:46:02 PM

Clean Terraform workspace before executing terraform init in the atmos.Component template function @aknysh (#775)

what

• Clean Terraform workspace before executing terraform init in the atmos.Component template function

why

When using multiple backends for the same component (e.g. separate backends per tenant or account), and if an Atmos command was executed that selected a Terraform workspace, Terraform will prompt the user to select one of the following workspaces:

  1. default

The prompt forces the user to always make a selection (which is error-prone), and also makes it complicated when running on CI/CD.

This PR adds the logic that deletes the .terraform/environment file from the component directory before executing terraform init when executing the atmos.Component template function. It allows executing the atmos.Component function for a component in different Terraform workspaces without Terraform asking to select a workspace. The .terraform/environment file contains the name of the currently selected workspace, helping Terraform identify the active workspace context for managing your infrastructure.
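
The manual equivalent of what the PR automates, assuming a conventional component layout (the component path is hypothetical):

```shell
# Forget the previously selected workspace so the next `terraform init`
# doesn't prompt for a workspace selection
rm -f components/terraform/my-component/.terraform/environment
```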

Stephan Helas avatar
Stephan Helas

Hi,

I’ve found that the validate output differs between 1.98 and 1.99 (and ever since). I don’t know if I am doing anything wrong or if it’s a bug:

old behavior (1.98.0)

❯ atmos validate component wms-base -s wms-xe02-sandbox
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully

new behavior (1.99.0)

❯ atmos validate component wms-base -s wms-xe02-sandbox
'atmos' supports native ' wms-base' command with all the options, arguments and flags.

In addition, 'component' and 'stack' are required in order to generate variables for the component in the stack.

atmos  wms-base <component> -s <stack> [options]
atmos  wms-base <component> --stack <stack> [options]
component 'wms-base' in stack 'wms-xe02-sandbox' validated successfully
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a regression from the many new PRs, we’ll fix it, thank you @Stephan Helas

2024-11-13

RB avatar

Is there already prior art using GitHub Actions and atmos to use a read-only role for the plan and an admin role for the apply?

RB avatar

That way, if the plan role was compromised, without the ability to apply, no one could do any funny business with a local-exec data source

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This relates to the recent work by @jose.amengual

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual where did we leave off with this?

jose.amengual avatar
jose.amengual

Waiting for @Andriy Knysh (Cloud Posse) to have some time to go over the failing tests on my PR

jose.amengual avatar
jose.amengual

in the convo we have in the other slack

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Andriy should have some time now to look

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Quick update: @Igor Rodionov will review the GH action PR (prob tomorrow), and @Andriy Knysh (Cloud Posse) will check what @jose.amengual said about Atmos (not related to the GH action)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
From the Terraform community on Reddit

Explore this post and more from the Terraform community

github3 avatar
github3
08:57:01 PM

Add support for vendor path setting in atmos.yaml @Cerebrovinny (#737)

what

• Add support for vendor path setting in atmos.yaml • Add support for vendor files under folders or multiple vendor files to be processed in lexicographic order

why

• Users should now be able to use the new vendor setting in atmos.yaml and process different vendor files at different locations

2024-11-14

toka avatar

I need to share my local submodules/child modules with every component that I will define in atmos. I’d like to move to atmos, but I have a codebase with many modules that I need to migrate. At this point I cannot afford to rewrite each small module into a component, but I’d like to move my root modules into atmos components as a starting point, and build my components out of the existing modules. Any advice on how to approach this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@toka please show the current file system layout

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use your TF modules as components (there is nothing special about components)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you put your modules into components/terraform and define Atmos stacks in stacks, it should work. Atmos will generate the backend and varfile for the modules and execute Terraform
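
A hedged sketch of what that looks like (module name, stack path, and variable are made up): drop the existing root module under components/terraform unchanged, then reference it from a stack manifest:

```yaml
# stacks/deploy/dev.yaml (hypothetical path)
components:
  terraform:
    # the directory name under components/terraform/ -- the unmodified root module
    my-existing-root-module:
      vars:
        region: us-east-1
```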

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, “components” are really just a philosophy. You don’t need to rewrite anything.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

…it won’t be worse off than it is now, but it won’t benefit from the way of thinking about architectures as made up of components.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, components typically will comprise a solution, so they shouldn’t be as small as a small module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Thinking Like Atmos | atmos

Atmos can change how you think about the Terraform code you write to build your infrastructure.

Component Best Practices | atmos

Learn the opinionated “Best Practices” for using Components with Atmos

2024-11-15

RB avatar

Hi all, if you folks have a second, i have a couple questions on the component migration. Very excited for the component testing

https://github.com/cloudposse/terraform-aws-components/issues/1177#issuecomment-2474148290

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ah thanks, I will respond later today

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Comment on #1177 :loudspeaker: Migration of Components to a New GitHub Organization (CODE FREEZE 11/12 - 11/17)

@nitrocode thanks for raising these questions.

  1. Will all the same components be available after the migration?

• Yes, we’re migrating the components as is, bit-for-bit, to facilitate a switch. However, we anticipate promptly doing major releases on many of the components after that point, to introduce new functionality and improve brownfield infra (a large driver for the initiative). Those breaking changes will likely require local changes in your configurations. That won’t happen immediately, but is the reason we’re doing all this.

  2. Will components still be open source?

Yes, we have no plans to change license.

  3. What will the new organization be named?
    Looked through the source code of the component updater and saw this https://github.com/cloudposse-terraform-components

Correct, you found it!

https://github.com/cloudposse-terraform-components

As part of our transition to GitHub Enterprise (GHE), we are reorganizing our open-source projects into more purpose-built organizations. This allows us to better manage repository rulesets, GitHub Apps, and other configurations specific to each organization’s purpose. This approach enhances our security posture and improves discoverability. Additionally, keeping our components separate from less opinionated child modules avoids confusion and ensures clearer organization.

  4. Will the component updater be updated to allow overriding the above org to use different sources if needed?

The component updater uses the sources as defined in the vendor and component manifests. Thus, that’s supported today.

One thing we’ve added to the component updater to make this switch less painful is the ability for it to rewrite the sources to their new homes. So if it sees references to components in cloudposse/terraform-aws-components, it will rewrite those to the new locations.

  5. Would you folks consider opening up the codeowners for components once they are all in their own repositories, like you folks do with terraform modules?

Yes, so we’ll be able to accept more contributions of components and delegate ownership of components with this move. Note, CODEOWNERS only works with paid GitHub seats, so I think we’ll continue to look for solutions that work better for non-org members, such as “allow lists” that we’ve implemented elsewhere.

Kalman Speier avatar
Kalman Speier

hey folks, is there a way to generate kubernetes provider blocks for different cloud providers? scenario: stack-1 is ecs, stack-2 is gke, and i’d like to use the same components for kubernetes resources.

i can output host, cert and token from the cluster components, however i’d like to configure kubernetes provider with oauth2 access token. using google_client_config data source for example.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you familiar with the atmos provider generation?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Providers | atmos

Configure and override Terraform Providers.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Kalman Speier the question is, how do you handle the auth token. In terraform, you can get it from a secret storage (SSM/ASM, GCP vault, etc.), and send it as an input to the other resources/modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the token should def not be hardcoded in Atmos stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I guess you are asking about how to do it in Terraform (get the token from data sources and then use it as input to other modules, per cloud)

Kalman Speier avatar
Kalman Speier

what i’d like to achieve is simply like this: https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/gke/main.tf

so, the kubernetes provider is configured outside of the component which is responsible for deploying k8s resources.

Kalman Speier avatar
Kalman Speier

if the cluster outputs the token and I set it as a variable to my k8s component, that works fine. However, these tokens expire, hence I’d like to use a provider-specific data source, but I couldn’t place that in the component, obviously.

Kalman Speier avatar
Kalman Speier
  1. create a cluster on gke for example

  2. run cloud-specific kubernetes provider configuration

```
data "google_client_config" "default" {}

provider "kubernetes" {
  token = data.google_client_config.default.access_token
  # ...
}
```

  3. deploy my k8s resources using the configured provider, without the component knowing which cloud provider is in use

step #2 needs to run each time #3 runs, because outputs from state don’t work since the tokens expire.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would put all those providers and the data sources in separate files per cloud (one for AWS/ECS, one for GKE). Then add a variable defining the cloud:

variable "cloud" {
  type        = string
  description = "Cloud provider"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then gate each cloud’s data sources with count depending on the cloud variable (the provider "kubernetes" blocks themselves can’t be conditional)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in main.tf, use https://developer.hashicorp.com/terraform/language/functions/coalesce to read the token from all the data sources

coalesce - Functions - Configuration Language | Terraform | HashiCorp Developer

The coalesce function takes any number of arguments and returns the first one that isn’t null nor empty.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

similar to

token = coalesce(data.google_client_config.default.xxxx, data.xxxx.xxxx.xxxx)
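Putting the pieces together, a hedged sketch of this pattern (the SSM parameter name and variable wiring are assumptions; note the count gating lives on the data sources, since provider blocks can’t use count):

```
# Assumes the `cloud` variable defined above ("aws" or "gcp").
# Only the matching cloud's data source is instantiated.
data "google_client_config" "default" {
  count = var.cloud == "gcp" ? 1 : 0
}

data "aws_ssm_parameter" "k8s_token" {
  count = var.cloud == "aws" ? 1 : 0
  name  = "/eks/cluster/token" # hypothetical parameter name
}

locals {
  # one() yields null for the cloud that was skipped; coalesce picks the other.
  k8s_token = coalesce(
    one(data.google_client_config.default[*].access_token),
    one(data.aws_ssm_parameter.k8s_token[*].value),
  )
}
```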
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

too bad that the provider block still does not support count and for_each (people have been asking for that for many years). So you will need to make sure your TF code has access to all clouds at once for the provider blocks to work and not throw errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://support.hashicorp.com/hc/en-us/articles/6304194229267-Using-count-or-for-each-in-Provider-Configuration

Using count or for_each in Provider Configuration

  Current Status While a longtime requested feature in Terraform, it is not possible to use count or for_each in the provider configuration block in Terraform.   Background Much of the reasoning be…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would create a common TF module to deal with the clusters, then a few root modules (components) for the specific clouds

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the common module would accept the token as a variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the parent/root modules would read the token using the corresponding cloud provider and provide it in the variable to the child module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and if you are using Atmos, it’s easy to configure your components pointing to the diff root modules per cloud

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    ecs-component:
      metadata:
        component: ecs/xxxxx # Point to the Terraform component (root module)
    gke-component:
      metadata:
        component: gke/xxxxx # Point to the Terraform component (root module)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ecs/xxxxx terraform component uses a data source to read the token from SSM/ASM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

gke/xxxxx terraform component uses data "google_client_config" "default" {} to read the token from GCP
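For the GKE side, the root module’s provider wiring might look roughly like this (a sketch; the variable names for the cluster endpoint and CA certificate are assumptions about what the cluster component outputs):

```
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = var.cluster_endpoint                    # assumed cluster output
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate) # assumed cluster output
  token                  = data.google_client_config.default.access_token
}
```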

Kalman Speier avatar
Kalman Speier

ok, thanks a lot, i will think about these.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the main point is to create a common TF child module (with almost all the code except the code to read the token from the data sources), then reuse it in the root modules using diff providers

Miguel Zablah avatar
Miguel Zablah

Hey guys I was looking for a way to read secrets from 1password and saw this PR: https://github.com/cloudposse/atmos/pull/762 What are the plans for this?

This would actually solve a lot of issues and simplify my work on some projects hehe

#762 Add support to load config values and secrets from external sources

what

Integrate vals as a template function.

why

Loading configuration values and secrets from external sources, supporting various backends.

Summary by CodeRabbit

New Features
• Introduced the atmos.Vals template function for loading configuration values and secrets from external sources.
• Added a logging mechanism for improved tracking of value operations.

Updates
• Updated various dependencies to newer versions, enhancing compatibility with cloud services and improving overall performance.

Documentation
• Added comprehensive documentation for the atmos.Vals template function, including usage examples and security best practices.

Miguel Zablah avatar
Miguel Zablah

@Erik Osterman (Cloud Posse) any updates on this?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our plan is leaning more towards implementing a pluggable way of storing/retrieving values from directly within Atmos, but supporting many of these same backends.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ll have a more conclusive answer by the end of this week.

Miguel Zablah avatar
Miguel Zablah

oh nice thanks!

github3 avatar
github3
07:25:33 PM

Update getting started, add $schema directive at the top of files @osterman (#769)

what

• Remove unimplemented commands
• Add $schema directive at the top of files

why

• Not everyone will have $schema validation enabled by default in their editor

Enhance WriteToFileAsJSON with pretty-printing support @RoseSecurity (#783)

what

• Used the ConvertToJSON utility with json.MarshalIndent to produce formatted JSON
• Indentation is set to two spaces ("  ") for consistent readability

why

• This PR improves the WriteToFileAsJSON function by introducing pretty-printing for JSON outputs. Previously, the function serialized JSON using a compact format, which could make the resulting files harder to read. With this change, all JSON written by this function will now be formatted with indentation, making it easier for developers and users to inspect and debug the generated files
• This specifically addresses #778, which previously rendered auto-generated backends as:

{ "terraform": { "backend": { "s3": { "acl": "bucket-owner-full-control", "bucket": "my-tfstate-bucket", "dynamodb_table": "some-dynamo-table", "encrypt": true, "key": "terraform.tfstate", "profile": "main", "region": "us-west-2", "workspace_key_prefix": "something" } } } }

With this addition, the output appears as:

{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "my-tfstate-bucket",
        "dynamodb_table": "some-dynamo-table",
        "encrypt": true,
        "key": "terraform.tfstate",
        "profile": "main",
        "region": "us-west-2",
        "workspace_key_prefix": "something"
      }
    }
  }
}

references

Stack Overflow • Closes #778

2024-11-16

github3 avatar
github3
09:00:39 PM

Add support for custom atmos terraform shell prompt @pkbhowmick (#786)

what

• Add support for custom atmos terraform shell prompt
• Allow specifying custom prompt for atmos terraform shell command in atmos.yaml. Supports Go templates

why

• Improve user experience
• Make the prompt customizable

Working demo

With custom prompt:

Screenshot 2024-11-16 at 11 20 14 PM

Without custom prompt:

image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Michael this should restore the behavior in #geodesic


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We disable the prompt formatting by default, and instead allow it to be customized in atmos.yaml
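For reference, a minimal atmos.yaml sketch of the setting introduced by #786 (treat the exact template syntax as an assumption; check the PR and docs for what’s available):

```
components:
  terraform:
    shell:
      prompt: "atmos> "  # Go templates are supported here per the PR
```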

Michael avatar
Michael

Awesome stuff, thank you for such a quick turnaround on it! Excited to try it out

2024-11-18

shirkevich avatar
shirkevich

Hey guys, thanks for the awesome project! Trying to recreate a multi-workspace terraform.io project with atmos.

My use case is provisioning pretty much the same infra for multiple tenants. Need your advice on how to properly organise variables.

Each component in a tenant shares a list of variables, like project_id and region, which I put into a mixin with the same name as the tenant.

Then for each component I’m passing project_number with atmos.Component (tenants are named after pokemons):

deploy/bulbasaur-stg.yaml

vars:
  tenant: bulbasaur
  stage: stg

import:
  - path: "deploy/_defaults.yaml.tmpl"
    context:
      stack: bulbasaur

components:
  terraform:
    tenant:
      vars:
        foo: bar

    db:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}' # <-- this I also want to DRY somehow

    cloudrun:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'

    jobs:
      vars:
        project_number: '{{ (atmos.Component "tenant" .stack).outputs.project_number }}'

All good for now. Then for the cloudrun and jobs components I need the same list of ENV variables that are used to provision the docker image. The problem here is that I want to use the previously defined project_id and pokemon_name in the templating of those envs…

I tried to create mixin and thought that it can be templated like that:

mixins/tenants/bulbasaur-stg.yaml

vars:
  region: europe-west3
  env_vars:
    TENANT: '{{ .vars.tenant }}'
    DATABASE_USER: 'user@{{ .vars.project_id }}.iam'
    BIGQUERY_PROJECTID: '{{ .vars.project_id }}'
    ...

deploy/_defaults.yaml.tmpl

import:
  - mixins/tenants/{{ .stack }}

terraform:
  backend_type: gcs
  backend:
    gcs:
      bucket: "tf-state"

It is not working, giving me <no-value> for TENANT. Clearly I’m doing it wrong. Should I create a component that just outputs env_vars instead and pass it to cloudrun and jobs?

P.S. I have name_pattern: "{tenant}-{stage}" in atmos.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve not yet had a chance to read through the entire message

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, you will likely need to commit the varfiles for it to work with TFC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, dynamic backend generation will not work well, if you use multiple backends for the same component (E.g. by region)

RB avatar

Opentofu is considering deprecating workspaces

https://github.com/opentofu/opentofu/issues/2160

Is it possible to use atmos without workspaces, using unique keys per stack instead of unique workspaces per stack?

Junk avatar

Currently, Atmos relies on the concept of workspaces for managing unique configurations and state per stack. However, with the introduction of OpenTofu’s Early Evaluation feature, there is potential to move away from workspaces and instead use unique keys per stack to manage state more flexibly.

While this functionality is not natively supported yet, I believe it could be feasible to implement. By leveraging Early Evaluation, we could dynamically configure the backend state storage, using variables to differentiate each stack’s state, rather than depending on separate workspaces. This approach would allow us to specify unique keys based on the stack name or environment and ensure proper isolation of state per stack.

In essence, although Atmos doesn’t currently support a workspace-less setup, utilizing unique keys per stack with Early Evaluation could be a similar concept and an effective alternative worth exploring.

Therefore, I don’t anticipate that the CloudPosse team will find it impossible to adapt to the deprecation of workspaces. They are an incredibly talented group, and I’m confident in their ability to develop a robust solution or an alternative approach. With their expertise, I believe they will be able to leverage features like Early Evaluation effectively to maintain or even improve the functionality of Atmos without relying on workspaces.
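For illustration, OpenTofu’s early evaluation (available since 1.8) allows variables in the backend block, which is what a workspace-less, key-per-stack layout could lean on. A hedged sketch (the bucket name and key layout are assumptions):

```
variable "stack" {
  type = string
}

terraform {
  backend "s3" {
    bucket = "acme-tfstate"                          # hypothetical bucket
    key    = "stacks/${var.stack}/terraform.tfstate" # unique key per stack, no workspaces
    region = "us-east-2"
  }
}
```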

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Actually, I think it’s already supported, depending on your backend. With S3, it’s just a different path. Since the backends are entirely configurable in Atmos, I think it’s just about configuring the right path to match the workspace path, and dropping the workspace parameter.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We might need to add a parameter in atmos to disable workspace operations, but the lift on that is trivial.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We talk about the file structure of the S3 backend here https://docs.cloudposse.com/layers/accounts/tutorials/terraform-s3-state/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So with that it should be as simple as updating the key to the fully qualified path to the workspace tfstate file

https://developer.hashicorp.com/terraform/language/backend/s3

Backend Type: s3 | Terraform | HashiCorp Developer

Terraform can store state remotely in S3 and lock that state with DynamoDB.
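Concretely, a hedged sketch of the Atmos backend config with a fully qualified key (bucket and workspace names are hypothetical; the S3 backend stores non-default workspace state under the `env:/<workspace>/` prefix by default, which is what the key below mirrors):

```
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-tfstate"                               # hypothetical bucket
      key: "env:/plat-ue2-dev/vpc/terraform.tfstate"       # full path to the workspace tfstate file
      region: "us-east-2"
```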

RB avatar

Thanks, I’ll review!

Yes i think the workspace option would be needed.

You’re right, all the other stuff is there to override the backend key per component using some yaml magic (use stack name as the unique key, as you said)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Yes i think the workspace option would be needed.
Just to confirm, an option to disable usage of workspace operations?

1
github3 avatar
github3
05:54:47 AM

:rocket: Enhancements

Handle empty stack YAML file configurations @haitham911 (#791)

what

• Handle empty stack YAML file configurations

why

atmos validate stacks should not error on empty stack manifest files

2024-11-19

Junk avatar

For root modules that do not use the terraform-null-label module (e.g., modules from terraform-aws-modules instead of CloudPosse), I find it challenging to maintain a consistent naming convention for resources. Specifically, I use a mix of CloudPosse-provided modules and other third-party modules as needed, but ensuring uniformity in the naming and tagging of provisioned resources (not just the stack’s name_pattern, but the actual resource names) is difficult.

I’ve tried using the Component Vendor’s Mixin feature to blend context.tf, but it proved to be inconvenient.

Does anyone have ideas or alternative methods for achieving a uniform naming and tagging convention across all resources? Any suggestions would be greatly appreciated!

Miguel Zablah avatar
Miguel Zablah

what I do is save the other third-party module in another directory and use that with the CloudPosse context.tf file to create the naming.

For example: components/vendor/aws/vpc -> AWS module; components/aws/vpc -> custom module using the vendored aws vpc with the CloudPosse context.tf file for naming and enable ENV

and I will use the components/aws/vpc module in the catalog

1
Junk avatar

@Miguel Zablah Thanks! I understand, but to help, could you give me a simple example? I get the general picture, but the ‘custom module using the vendor aws vpc with CloudPosse context.tf file for naming and enable ENV’ part doesn’t come to mind specifically.

Junk avatar

If what you mean by custom module is ‘combine the newly attached context.tf with the “aws/vpc” component in the vendor directory to create a new Root Module (Component)’, this seems like it would be complicated to configure and maintain the component every time. Am I not understanding this correctly?

Miguel Zablah avatar
Miguel Zablah

not really, bc both are going to be managed by the atmos vendor file. So using this example, I use these two modules: https://github.com/cloudposse/terraform-null-label https://github.com/terraform-aws-modules/terraform-aws-vpc

so I add both of them to the Atmos vendor file, in a different directory under components. In this example it will be something like this: components/terraform/vendor/cloudposse/tf-null-label -> https://github.com/cloudposse/terraform-null-label components/terraform/vendor/aws/vpc -> https://github.com/terraform-aws-modules/terraform-aws-vpc

then I will create a new root module with these modules here: components/terraform/aws/vpc

where I will have these files: main.tf -> to call the components/terraform/vendor/aws/vpc, context.tf -> to call the components/terraform/vendor/cloudposse/tf-null-label

so essentially what I do is copy their context file but reference the module internally, kind of like what they do in this example: https://github.com/cloudposse/terraform-null-label/blob/main/examples/autoscalinggroup/context.tf

and then I can use module.this.id for naming, module.this.tags for tagging, and module.this.enabled to enable/disable the module, like CloudPosse does on their modules. All of this I will use in main.tf when calling the VPC module

Miguel Zablah avatar
Miguel Zablah

hopefully this explains it a bit better

Junk avatar

@Miguel Zablah Thanks for the detailed explanation, I understood it perfectly. My only further question is: so what I need to do is actually create a module block in main.tf in the components/terraform/aws/vpc directory that calls components/terraform/vendor/aws/vpc, and declare the required variables one by one inside the module block so that they are assignable (ex: azs, cidr, private_subnets, etc…)?

Miguel Zablah avatar
Miguel Zablah

so this is how the root component will look like: components/terraform/aws/vpc :
• main.tf -> here you will put the module block calling the components/terraform/vendor/aws/vpc
• context.tf -> use this example but reference your vendor module components/terraform/vendor/cloudposse/tf-null-label
• outputs.tf -> you will expose the vpc module outputs (you can use a loop for this)
• variables.tf -> this will be almost the same as the aws-vpc module but removing name and stuff that is managed by context.tf

example main.tf :

module "vpc" {
  count  = module.this.enabled ? 1 : 0
  source = "../../vendor/aws/vpc"

  name = module.this.id
  # ...
}

after this is set up correctly, you can use it as normal in your atmos catalog and whatnot

# DO NOT COPY THIS FILE
#
# This is a specially modified version of this file, since it is used to test
# the unpublished version of this module. Normally you should use a
# copy of the file as explained below.
#
# ONLY EDIT THIS FILE IN github.com/cloudposse/terraform-null-label
# All other instances of this file should be a copy of that one
#
#
# Copy this file from <https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf>
# and then place it in your Terraform module to automatically get
# Cloud Posse's standard configuration inputs suitable for passing
# to Cloud Posse modules.
#
# Modules should access the whole context as `module.this.context`
# to get the input variables with nulls for defaults,
# for example `context = module.this.context`,
# and access individual variables as `module.this.<var>`,
# with final values filled in.
#
# For example, when using defaults, `module.this.context.delimiter`
# will be null, and `module.this.delimiter` will be `-` (hyphen).
#

module "this" {
  source = "../.."

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit

  context = var.context
}

# Copy contents of cloudposse/terraform-null-label/variables.tf here

variable "context" {
  type = object({
    enabled             = bool
    namespace           = string
    environment         = string
    stage               = string
    name                = string
    delimiter           = string
    attributes          = list(string)
    tags                = map(string)
    additional_tag_map  = map(string)
    regex_replace_chars = string
    label_order         = list(string)
    id_length_limit     = number
    label_key_case      = string
    label_value_case    = string
  })
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = var.context["label_key_case"] == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }

  validation {
    condition     = var.context["label_value_case"] == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

variable "enabled" {
  type        = bool
  default     = null
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = "Solution name, e.g. 'app' or 'jenkins'"
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters.
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
}

variable "label_key_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
    Possible values: `lower`, `title`, `upper`. 
    Default value: `title`.
  EOT

  validation {
    condition     = var.label_key_case == null ? true : contains(["lower", "title", "upper"], var.label_key_case)
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }
}

variable "label_value_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of output label values (also used in `tags` and `id`).
    Possible values: `lower`, `title`, `upper` and `none` (no transformation). 
    Default value: `lower`.
  EOT

  validation {
    condition     = var.label_value_case == null ? true : contains(["lower", "title", "upper", "none"], var.label_value_case)
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

#### End of copy of cloudposse/terraform-null-label/variables.tf
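For the outputs.tf mentioned above: since the wrapped module is count-gated, each re-exported output needs to tolerate the zero-instance case. A sketch (the output names depend on the wrapped module):

```
# outputs.tf in components/terraform/aws/vpc
output "vpc_id" {
  description = "ID of the VPC from the wrapped module (null when disabled)"
  value       = one(module.vpc[*].vpc_id)
}
```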

Miguel Zablah avatar
Miguel Zablah

btw this is just how I do it, there might be a better way to do this haha

github3 avatar
github3
02:25:21 PM

Enhancements

Set Default Schema to Remote Schema @haitham911 (#777)

what

• Set Default Validation Schema to Remote Schema

why

• We should set the default schema to the remote atmos schema so that atmos validate works even if the user does not configure a validation schema

John Seekins avatar
John Seekins

:wave: We’re experimenting with Atmos and seeing a strange behavior with templating:

$ atmos describe stacks --process-templates | grep Component
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
                    vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'

It seems like templates just…aren’t being processed and I’m not really sure how to debug this… The docs imply this should “just work”. I’m clearly missing something obvious, and would love some help. (Atmos 1.107.1 on darwin/arm64)

John Seekins avatar
John Seekins

Some more context:

components:
  terraform:
    vpc:
      vars:
        enabled: true
        name: "compute"
        ipv4_primary_cidr_block: "10.1.0.0/16"
        vpc_flow_logs_enabled: false
        nat_gateway_enabled: true
        public_subnets_enabled: true
    vpc-flow-logs-bucket:
      vars:
        name: "vpc-flow-logs"
    internal-domain-and-cert:
      settings:
        depends_on:
          1:
            component: vpc
      vars:
        create_wildcard_cert: true
        vpc_id: '{{ (atmos.Component "vpc" .stack).outputs.vpc_id }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

By default, templating is disabled. We originally implemented this as an escape hatch, but it’s become very popular.
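Enabling it is a small atmos.yaml change (per the Atmos template configuration docs; the Sprig/Gomplate toggles are optional):

```
templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true
```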

John Seekins avatar
John Seekins

It is (theoretically) super useful!

John Seekins avatar
John Seekins

Ooo…I probably just want inheritance, huh?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So at Cloud Posse, in our refarch, we almost never use templating, and instead use inheritance and imports 99% of the time.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

However, we acknowledge the usefulness of the template functions. We’re also working on improvements, which involve moving towards what YAML calls “explicit types”, which are basically first-class functions in YAML.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos.Component | atmos

Read the remote state or configuration of any Atmos component

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


The docs imply this should “just work”. I’m clearly missing something obvious, and would love some help.
Yes, that might be the case. We’re working on 2 things.

  1. Warning if you’re using templates and have it disabled
  2. Improving the docs to call it out
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you could share a screenshot or a link to the page/chapter where you encountered it, I’ll fix it

John Seekins avatar
John Seekins

Jumps right out at you in the docs. Root page and all…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, that could definitely need some TLC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks!!

John Seekins avatar
John Seekins

I appreciate the context, but do you have any tips on how I can reference the vpc_id from vpc in internal-domain-and-cert in my example above? All the links you’ve passed along seem to talk about sharing data between stacks, and I just need to pass between components in a single stack here.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Share Data Between Components | atmos

Share data between loosely-coupled components in Atmos

John Seekins avatar
John Seekins

Yep. That is what isn’t working.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But do you have templating enabled in atmos.yaml?

John Seekins avatar
John Seekins

That looks like what I was missing!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the flow of the documentation, we have this

John Seekins avatar
John Seekins

Awesome. Thanks, Erik.

John Seekins avatar
John Seekins

Definitely just saw that root page before I read more deeply about templating.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m going to update “Share Data Between Components”, to reference back to Template Configurations, to help avoid this snafu

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and call out that templating needs to be enabled.

John Seekins avatar
John Seekins

That’s great. The docs are generally pretty robust, which may have lulled me into a false sense of “well…if they say this just works…”

karel_alfonso avatar
karel_alfonso

Hi, I’m assessing Atmos for a use case that needs to provision a set of infrastructure components that are deployed separately (each its own TF root module), in different AWS accounts. I also have to use Atlantis to apply changes. To simplify: I have three components, C1, C2, and C3. C2 and C3 depend on the result of applying C1. I want to orchestrate the plan/apply flow of C1, C2, and C3 in that order. With Atmos, do I need to define a Workflow, and how would it be used from Atlantis?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A workflow is one way to do that. We haven’t looked into solving ordered dependencies in a way that provides first-class support for Atlantis. But to your point, you could create a custom workflow in Atmos that you call via Atlantis. The issue is you really want to review/approve each individual component’s plan, so a more elaborate approach is necessary for that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual is one of the #atlantis maintainers and might have some other ideas

karel_alfonso avatar
karel_alfonso

Thanks for the reply. I was initially confused by Atmos stacks, thinking I could deploy the components listed in a stack in order. But reading the documentation, I realised that’s probably a workflow. Another option I’ve been considering is to do this via CI/CD and build the dependencies into each stage of the pipeline. So, what is the Atmos solution for orchestrating multiple Terraform root modules that depend on each other?

jose.amengual avatar
jose.amengual

Atlantis supports dependencies

jose.amengual avatar
jose.amengual

we could potentially ask CloudPosse to add component dependencies to the automatic workflow generation that atmos can create

karel_alfonso avatar
karel_alfonso

That would be great. We’re not using Atlantis dependencies at the moment. It would be good to find a way to orchestrate the plan/apply flow of multiple components (TF root modules) that depend on each other.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh that’s interesting, I think I forgot about that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Since we already represent dependencies in stack configs, it’s maybe a simple mapping

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@karel_alfonso did you see the atmos Atlantis generation?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s templatized so it might already be possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

jose.amengual avatar
jose.amengual
Repo Level atlantis.yaml Config | Atlantis

Atlantis: Terraform Pull Request Automation

karel_alfonso avatar
karel_alfonso


@karel_alfonso did you see the atmos Atlantis generation?
No, haven’t looked into that yet. Will the generation take into account the dependencies in the stack config?

jose.amengual avatar
jose.amengual

no, I do not think it will do that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But the entire Atlantis config is one big template

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And atmos stacks define dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The templates have the full context of the stack configs, so it should be possible to define the Atlantis dependencies
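To make the transformation concrete, here is a hypothetical sketch of what a generated repo-level `atlantis.yaml` could look like if the template mapped Atmos `settings.depends_on` onto Atlantis project dependencies. Project and component names here are made up, and `depends_on` support depends on your Atlantis version:

```yaml
# atlantis.yaml (hypothetical output of the atmos Atlantis template)
version: 3
projects:
  - name: plat-dev-c1
    dir: components/terraform/c1
    workspace: plat-dev
  - name: plat-dev-c2
    dir: components/terraform/c2
    workspace: plat-dev
    # Atlantis applies this project only after plat-dev-c1 has applied
    depends_on:
      - plat-dev-c1
```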

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way to think about it is you are transforming the shape of one YAML configuration to the shape of another

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Configure Dependencies Between Components | atmos

Atmos supports configuring the relationships between components in the same or different stacks. You can define dependencies between components to ensure that components are deployed in the correct order.
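On the Atmos side, the dependency information already lives in the stack manifests. A sketch using hypothetical component names, following the same `settings.depends_on` shape shown earlier in this thread:

```yaml
# Stack manifest sketch: c2 and c3 each declare a dependency on c1
components:
  terraform:
    c2:
      settings:
        depends_on:
          1:
            component: c1
    c3:
      settings:
        depends_on:
          1:
            component: c1
```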

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
jose.amengual avatar
jose.amengual

that section, I believe, is not free-form, Erik

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

What do you mean by free form?

jose.amengual avatar
jose.amengual

I think if you add a line that is not predefined, atmos does not add it to the generated atlantis.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, that is plausible, but an easy change

jose.amengual avatar
jose.amengual

yes, I remember Andriy adding a few lines pretty quickly when I was testing the integration

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok, so @karel_alfonso if you end up pursuing this route, we may need to make some tweaks, but I don’t think it would be anything radical.

karel_alfonso avatar
karel_alfonso


Atmos supports configuring the relationships between components in the same or different stacks. You can define dependencies between components to ensure that components are deployed in the correct order.
Is there a way to apply an entire stack with its dependencies using the atmos CLI (without Atlantis), for demonstration purposes? I want to show my team that a tool like atmos can help organise a large Terraform codebase with multiple tenants and AWS accounts. I can then look into Atlantis

karel_alfonso avatar
karel_alfonso

All examples I’ve seen deploy a specific component in a stack

karel_alfonso avatar
karel_alfonso

A concrete example of what I want to achieve is that I have a TF component that provisions a Kafka cluster. Whenever I change the number of brokers I want to deploy/update two other components that require the bootstrap servers returned after applying the first component. I thought/wished that atmos could help achieve something like that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the CLI we have not implemented that. The main reason is that it’s dangerous to “apply all”. We do have it on our roadmap, however, with no ETA.

However, it is possible using custom commands. Others have done the same.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Custom Commands | atmos

Atmos can be easily extended to support any number of custom CLI commands. Custom commands are exposed through the atmos CLI when you run atmos help. It’s a great way to centralize the way operational tools are run in order to improve DX.
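A custom command for this could look something like the sketch below, defined in `atmos.yaml`. The command name and component names are hypothetical; the `commands` structure follows the Atmos custom commands docs:

```yaml
# atmos.yaml (sketch) -- a custom command that applies dependent
# components in order. Component/stack names are hypothetical.
commands:
  - name: deploy-kafka
    description: Apply the Kafka cluster, then the components that consume its outputs
    steps:
      - atmos terraform apply kafka-cluster -s plat-dev
      - atmos terraform apply kafka-connect -s plat-dev
      - atmos terraform apply kafka-topics -s plat-dev
```

It would then surface under `atmos help` and run as `atmos deploy-kafka`.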

karel_alfonso avatar
karel_alfonso

excellent! thanks so much for the help and support. I’ll proceed with a demo I’m preparing using custom commands and later on will look into the integration with Atlantis

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Drift detection will pick up the changes in the scenario you described, but the way it works is not based on dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It works by replanning all components

karel_alfonso avatar
karel_alfonso

Oh, hadn’t seen that feature. It’s something we can run in a scheduled CI/CD job

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Exactly!

karel_alfonso avatar
karel_alfonso

Ultimately, I want to prove that we don’t need TACOS: just Terraform, a best-practices framework, and existing tools to address all the issues you’ve listed that teams run into when scaling TF to a large org.

Bob avatar

Nothing helpful to add here, but want to just say thank you for asking the question and folks for answering swiftly. I was about to go on a rabbit hole reading atmos docs for the same exact use case (just no atlantis requirement). @karel_alfonso Curious on what you come up with.

Before reading this, I was also under the impression atmos can deploy all components in a stack in order of dependency. I’ll start investigating custom commands and play around with our use case as well

Seeing HCP Terraform Stacks (and the deferred changes/plan capability) got me wanting to see what’s available outside of the usual TACOS. Saw the Terragrunt Stacks RFC, which is exciting, but the timing of its release is unknown

jose.amengual avatar
jose.amengual

you can create an atmos workflow to deploy components in order using the CLI
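A minimal sketch of such a workflow file, using hypothetical component and stack names (syntax per the Atmos workflows docs):

```yaml
# stacks/workflows/deploy.yaml (sketch)
workflows:
  deploy-all:
    description: Apply components in dependency order
    steps:
      - command: terraform apply c1 -s plat-dev
      - command: terraform apply c2 -s plat-dev
      - command: terraform apply c3 -s plat-dev
```

Invoked with something like `atmos workflow deploy-all -f deploy`.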

2024-11-20

Samuel Than avatar
Samuel Than

Hi, I’m at the stage of initialising the tfstate-backend in a brownfield environment context.

I was successful in creating the S3 and DynamoDB parts and migrating all the workspaces to S3.

However, I hit a wall in the process of enabling the access_roles_enabled flag.

The following is the error I received:

Error: 
│ Could not find the component 'account-map' in the stack 'cs-core-gbl-root'.
│ Check that all the context variables are correctly defined in the stack manifests.
│ Are the component and stack names correct? Did you forget an import?

My namespace is cs; however, the core-gbl-root part of the stack name is foreign to me, as I’ve not declared any of that. Not sure how that came about.

This is the stack YAML I’m using to deploy the tfstate-backend. I used the output of the IAM role created by the access roles and passed it into role_arn prior to turning on the access_roles_enabled flag.

Is there some sort of mapping I have misconfigured?

tfstate-backend:
      backend:
        s3:
          role_arn: null
      vars:
        access_roles_enabled: true # Set to false initially, and only used for cold start. 
        enable_server_side_encryption: true
        enabled: true
        force_destroy: false
        name: terraformstate
        prevent_unencrypted_uploads: true
        label_order: ["namespace", "tenant", "environment", "stage", "name"]
        access_roles:
          default: &tfstate-access-template
            write_enabled: true
            allowed_roles: {}
            denied_roles: {}
            allowed_permission_sets: {}
            denied_permission_sets: {}
            allowed_principal_arns: [
              "arn:aws:iam::XXXXXXXX:role/XXXXXXXX"
            ]
            denied_principal_arns: []
        tags:
          component: "tfstate-backend"
          expense-class: "storage"

My folder structure is stacks/orgs/cs/xxx/dev/ap-southeast-2/tfstate-backend.yaml and my stack name is cs-xxx-apse2-dev.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Ben Smith (Cloud Posse) maybe an easy one for you

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

your backend configuration is likely configured at a higher level so that it can be reused. It’s most likely under stacks/orgs/cs/_defaults.yaml

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

you can also always check the final result of stack configuration with atmos. For example,

atmos describe component tfstate-backend -s your-stack-name

https://atmos.tools/cli/commands/describe/component/

atmos describe component | atmos

Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
