#atmos (2023-01)

2023-01-04

Release notes from atmos
11:44:37 PM

v1.19.1 what Add Sources of Component Variables to atmos describe component command Update docs why The atmos describe component command outputs the final deep-merged component configuration in YAML format. The output contains the following sections: atmos_component - Atmos component name atmos_stack - Atmos stack name backend - Terraform backend configuration backend_type - Terraform backend type command - the binary to execute when provisioning the component (e.g. terraform, terraform-1, helmfile)…
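An abbreviated sketch of the output shape, using the section names listed above (the component and stack values are placeholders, and most sections are elided):

```yaml
# atmos describe component <component> -s <stack>
atmos_component: vpc          # Atmos component name (placeholder value)
atmos_stack: plat-ue2-dev     # Atmos stack name (placeholder value)
backend_type: s3              # Terraform backend type
command: terraform            # binary executed when provisioning the component
backend: {}                   # Terraform backend configuration (elided here)
```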

Release v1.19.1 · cloudposse/atmos


2023-01-06

Viacheslav

Hi all! I have an atmos workflow that creates two stacks with terraform. Let's suppose stack1 creates an Azure resource group with a randomly generated name, and stack2 creates a storage account that needs to point to that resource group. In order to create the storage account, I need to use a data source for the resource group in stack2, but I don't know the name of the resource group to import.

Is there any native way to read outputs from stack1 and pass them as variables to stack2? I want to avoid using complex bash commands between applying the two stacks. Or are there other suitable approaches to this problem that follow the philosophy of atmos?

Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse)

(not related to workflows, but it’s a way to get the remote state of one component and use it as inputs for another component)
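As a sketch of what the linked docs describe: the dependent component instantiates the Cloud Posse remote-state module to read the other component's outputs. The component name, version pin, and output name below are hypothetical placeholders; check the exact inputs against the docs above.

```hcl
# Read the outputs of the "resource-group" component provisioned in another stack.
# All names and the version pin are illustrative, not from the thread.
module "resource_group" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.4.1"

  # The Atmos component whose Terraform outputs we want to read
  component = "resource-group"

  context = module.this.context
}

# The dependency's outputs are then available to this component, e.g.:
#   module.resource_group.outputs.resource_group_name
```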

Erik Osterman (Cloud Posse)

Yes, this is the way you would want to do it

Erik Osterman (Cloud Posse)

(note this is a link to an open PR updating the documentation; we’re going to merge it soon, and the link above will break)

Viacheslav

@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) thanks a lot, it looks suitable for my project

Viacheslav

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) For some reason it does not work as I expected; I'd be glad to get any advice.

I created resources with component main and stack prerequisites. In the second stack, mid, with a reference to the same component main, I enabled the remote-state module following the guide and passed stack=prerequisites and component=main as inputs, so I expected to get outputs from the remote backend of the previous stack, but got module.remote_state[0].outputs is null instead.

Investigating the state file, I found that module.remote_state[0] with the name “config” contains all input vars of the stack prerequisites, which I think is correct. But “data_source” looks like:

    {
      "module": "module.remote_state[0]",
      "mode": "data",
      "type": "terraform_remote_state",
      "name": "data_source",
          "provider": "provider[\"terraform.io/builtin/terraform\"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "backend": "local",
            "config": {
              "value": {
                "path": ".terraform/modules/remote_state/modules/remote-state/dummy-remote-state.json"
              },
              "type": [
                "object",
                {
                  "path": "string"
                }
              ]
            },
            "defaults": null,
            "outputs": {
              "value": {},
              "type": [
                "object",
                {}
              ]
            },
            "workspace": null
          },
          "sensitive_attributes": []
        }
      ]
    },

For some reason outputs of the component created in stack prerequisites can’t be read by the remote state module.

P.S. The component main uses a remote backend in an Azure storage account, which is defined in the backend.tf file, so I do not define the backend in atmos.yaml. But here I see that remote-state supports only s3 and Terraform Cloud. Could that be the reason why the module returns null?

Andriy Knysh (Cloud Posse)

we are not using Azure and can’t currently test it

Viacheslav

Thanks for the update, got it. Do you plan to add Azure as a backend for remote state in the near future?

Erik Osterman (Cloud Posse)

No, this would be a community driven feature. We would accept a PR adding support.

Viacheslav

@Erik Osterman (Cloud Posse) Thanks, here is my PR to enable azurerm support: https://github.com/cloudposse/terraform-yaml-stack-config/pull/61

#61 add azurerm backend to remote-state

what

• Allow using the azurerm backend to read a state from

why

• Currently only the s3 and remote backends are supported

references

example backend configuration for azurerm

Andriy Knysh (Cloud Posse)

@Viacheslav thanks for the PR, approved and merged


2023-01-07

Release notes from atmos
04:44:36 PM

v1.20.0 what & why

Add “Quick Start” doc https://atmos.tools/category/quick-start/

Add “Component Remote State” docs https://atmos.tools/core-concepts/components/remote-state/

Add “CLI Commands Cheat Sheet” doc https://atmos.tools/cli/cheatsheet/

Update/improve “Atmos…

Release v1.20.0 · cloudposse/atmos

Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

CLI Commands Cheat Sheet | atmos

CLI Commands Cheat Sheet


2023-01-09

jose.amengual

Hi, I see some action in this repo https://github.com/cloudposse/github-action-setup-atmos is this action ready to use? @matt

cloudposse/github-action-setup-atmos

A GitHub Action that installs the Atmos CLI

Erik Osterman (Cloud Posse)

This one is ready



2023-01-10

Release notes from atmos
01:14:34 AM

v1.21.0 what

Avoid exiting early when missing configuration file

Make the --workflow-template flag optional for the atmos atlantis generate repo-config command. Update docs

Update atmos describe stacks and atmos describe affected commands

why

Do not immediately exit when a configuration file is not found. This allows, for example, the version command to be run without a configuration file. The commands requiring a configuration file still do exit when it is missing.

Release v1.21.0 · cloudposse/atmos


2023-01-12

Geoffrey Hichborn

Hello all, I have (what I hope is) a fairly basic question about Atmos that I haven’t been able to answer with the documentation. Is there a way to configure components in a stack, such that you can take outputs from one component and feed them in as inputs to another component?

Geoffrey Hichborn

As an example, let’s assume I have a stack that creates a VPC and a set of EC2 instances – could I take a set of subnet ids from the “VPC component” and pass it into the components that create the EC2 instances?

Geoffrey Hichborn

Imports don’t seem like the right answer since that appears to be primarily static configuration

Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Create Components | atmos

In the previous steps, we’ve configured the repository, and decided to provision the vpc-flow-logs-bucket and vpc Terraform

Geoffrey Hichborn

Oh, I see. One thing that’s not immediately clear is how you indicate the order in which the components have to run. Is there some way to create a dependency tree out of them?

Geoffrey Hichborn

I see no indication in here how the stack determines which TF modules to run first – is it just based on the definition order?

Create Atmos Stacks | atmos

In the previous step, we’ve configured the Terraform components and described how they can be copied into the repository.

Andriy Knysh (Cloud Posse)

you decide what to deploy first. In stacks, you just configure the components that CAN be deployed into the stacks

Andriy Knysh (Cloud Posse)

you execute atmos terraform apply <component> --stack <stack> command

Andriy Knysh (Cloud Posse)

so you deploy the dependencies first

Andriy Knysh (Cloud Posse)

then deploy the components that use the remote state from the dependencies

Andriy Knysh (Cloud Posse)

you can also use workflows if you want to deploy in particular order https://atmos.tools/core-concepts/workflows/

Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
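For the VPC/EC2 example above, the ordering could be captured in a workflow; a minimal sketch, where the file path, component names, and stack name are all hypothetical:

```yaml
# workflows/networking.yaml -- all names below are placeholders
workflows:
  deploy-network-and-compute:
    description: "Deploy the VPC first, then the instances that read its outputs via remote state"
    steps:
      # Dependency first
      - command: terraform deploy vpc -s plat-ue2-dev
      # Then the component that consumes the VPC outputs (e.g. subnet ids)
      - command: terraform deploy ec2-instances -s plat-ue2-dev
```

The workflow would then be run with something like atmos workflow deploy-network-and-compute -f networking.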

Geoffrey Hichborn

Hmm, that’s unfortunate. I’d really like a “deploy this whole tree of dependencies” where I can clearly tell each component what other components it depends on

Andriy Knysh (Cloud Posse)

that’s what the workflows are for

Geoffrey Hichborn

Workflows appear to require you to create a linear list rather than a dependency tree, where things can work in parallel

Andriy Knysh (Cloud Posse)

oh, graphs are complicated and we don’t support anything like that

Andriy Knysh (Cloud Posse)

it would convolute the stacks and components

Geoffrey Hichborn

In a previous role, I wrote something like that to orchestrate CloudFormation stacks

Andriy Knysh (Cloud Posse)

it’s def possible, but will create a lot of new issues and will make everything much more complicated

Geoffrey Hichborn

Each component would indicate what outputs it requires from each other component and it would automatically resolve the dependency tree

Geoffrey Hichborn

Yeah, I understand

Andriy Knysh (Cloud Posse)

It can be done for a particular case, but to make all of that generic is not an easy task

Geoffrey Hichborn

When you are looking at deploying literally hundreds of components, having things that don’t depend on each other deployed in parallel is a nice benefit

Andriy Knysh (Cloud Posse)

we are planning on extending Atmos workflows to add a parallel section to deploy a collection of components in parallel

Andriy Knysh (Cloud Posse)

this is much easier to understand, reason about and use than some complex dependencies expressed in YAML for diff components in the same or diff stacks

Release notes from atmos
07:55:49 PM

v1.22.0 what Update atmos workflow command to allow restarting workflows from a named step Update workflow docs https://atmos.tools/core-concepts/workflows/ why Useful when you want to restart a workflow from a particular step. Each workflow step can be given an arbitrary name (step’s identifier) using the name attribute. For example: workflows: test-1: description: “Test workflow” steps: - command: echo Command…

Release v1.22.0 · cloudposse/atmos

Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
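Based on the release notes above, a sketch of a workflow with named steps (the workflow and step names are illustrative):

```yaml
workflows:
  test-1:
    description: "Test workflow"
    steps:
      - command: echo Command 1
        name: step1
      - command: echo Command 2
        name: step2
      - command: echo Command 3
        name: step3
```

A run could then be restarted from a particular step by referencing its name (e.g. step2); check the linked workflow docs for the exact flag name.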


2023-01-13

Release notes from atmos
03:34:35 PM

v1.23.0 what Update atmos terraform apply and atmos terraform deploy commands Add --planfile flag to atmos terraform apply and atmos terraform deploy commands Improve docs why

Support two ways of specifying a planfile for atmos terraform apply and atmos terraform deploy commands:

atmos terraform apply and atmos terraform deploy commands support the --planfile flag to specify the path to a planfile. The --planfile flag should be used instead of the planfile argument in the native terraform apply…

Release v1.23.0 · cloudposse/atmos


2023-01-17

Release notes from atmos
03:54:36 PM

v1.24.0 what & why

Update atmos describe affected command Add affected Spacelift stack to each item in the output list (if the Spacelift workspace is enabled in settings.spacelift.workspace_enabled for the component in the stack). This takes into account settings.spacelift.stack_name and settings.spacelift.stack_name_pattern settings which override the auto-generated Spacelift stack names
atmos describe affected
[ { “component”: “test/test-component-override-2”, “component_type”:…

Release v1.24.0 · cloudposse/atmos


2023-01-19

Nimesh Amin

hi! When I try to run atmos terraform shell <component> -s <env> I get: “can’t find a shell to execute” at the very end. I remember this working at some point, but now I’m second guessing myself

This is all within the docker container. Am I forgetting to set some env?

Nimesh Amin

Fixed it. Upgraded to a newer version


2023-01-20

Russell Sherman

it appears that the atmos helmfile implementation does not take into account the possibility of an eks cluster name including attributes

adding attributes to the cluster_name_pattern does not appear to have any effect and there does not appear to be a way to override or specify the cluster name

am i missing something?

Andriy Knysh (Cloud Posse)

did you set the name pattern in atmos.yaml?

Andriy Knysh (Cloud Posse)
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
Andriy Knysh (Cloud Posse)

all these tokens are supported

// ReplaceContextTokens replaces context tokens in the provided pattern and returns a string with all the tokens replaced
func ReplaceContextTokens(context Context, pattern string) string {
	r := strings.NewReplacer(
		"{base-component}", context.BaseComponent,
		"{component}", context.Component,
		"{component-path}", context.ComponentPath,
		"{namespace}", context.Namespace,
		"{environment}", context.Environment,
		"{region}", context.Region,
		"{tenant}", context.Tenant,
		"{stage}", context.Stage,
		"{workspace}", context.Workspace,
		"{attributes}", strings.Join(context.Attributes, "-"),
	)
	return r.Replace(pattern)
}
Andriy Knysh (Cloud Posse)

so if you set, for example:

cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
Andriy Knysh (Cloud Posse)

and add

attributes:
  - blue
Andriy Knysh (Cloud Posse)

to the resources, then the cluster name will be something like this

ns-core-ue2-prod-blue-eks-cluster
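Assembling the fragments above into one sketch (the token values are illustrative):

```yaml
# atmos.yaml
components:
  helmfile:
    use_eks: true
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
```

```yaml
# stack config for the component
vars:
  attributes:
    - blue
```

With namespace ns, tenant core, environment ue2, and stage prod, the tokens resolve to ns-core-ue2-prod-blue-eks-cluster, as shown above.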
Russell Sherman

using atmos 1.24.0

in atmos.yaml i have

helmfile:
    use_eks: true
    cluster_name_pattern: "{namespace}-{stage}-{attributes}"

in my stack config i have

components:
  helmfile:
    echo-server:
      vars:
        attributes:
          - "20230115"

in the execution log i see


Writing the variables to file:
/conf/components/helmfile/echo-server/palolo-labs-us-west-2-echo-server.helmfile.vars.yaml

Using AWS_PROFILE=default

Downloading kubeconfig from the cluster 'palolo-labs-' and saving it to /dev/shm/palolo-labs-us-west-2-kubecfg

Executing command:
/usr/local/bin/aws --profile default eks update-kubeconfig --name=palolo-labs- --region=us-west-2 --kubeconfig=/dev/shm/palolo-labs-us-west-2-kubecfg

An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: palolo-labs-.
exit status 254

here it does not appear that the attribute is being appended to the cluster name and so the cluster is not found

Andriy Knysh (Cloud Posse)

it’s not easy to understand what’s going on from the output above, you can zip up the project (all files and atmos.yaml) and DM me, I’ll check it

Russell Sherman

much appreciated..

2023-01-23

2023-01-24

2023-01-25

jose.amengual

How do you guys deploy lambdas with atmos? I’m trying to decide the best way when I have a monorepo with atmos infra and another repo for the lambda code and trying to avoid 2 PRs to create/deploy the lambda function

Andriy Knysh (Cloud Posse)

this is not really related to atmos; it's about how you handle two repos, it's a terraform thing

Andriy Knysh (Cloud Posse)

and in general, you can use two (maybe more) diff methods to handle that:

jose.amengual

yes, it's not purely atmos-related, but I was wondering if you guys handle any part of a lambda infrastructure with atmos and what the user flow looks like

Andriy Knysh (Cloud Posse)
  1. For the lambda source repo, create a pipeline (e.g. GH actions) to compile the code, zip it, and store in, let’s say, S3 bucket. Then in the infra repo, use the ZIP artifact URL to download it and use for the lambda, e.g. https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/blob/master/main.tf#L99
Andriy Knysh (Cloud Posse)


trying to avoid 2 PRs

Andriy Knysh (Cloud Posse)

in the best case scenario, you will need a PR in the lambda source repo (for the changes), and a pipeline run (e.g. Spacelift or GH action) in the infra repo to detect the source changes (or an artifact changes) and redeploy
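A hedged sketch of the first option, on the lambda source repo side; the action versions, bucket name, and paths are placeholders, not from the thread:

```yaml
# .github/workflows/publish-lambda.yaml (hypothetical file) -- runs in the
# lambda source repo: package the code and store the artifact in S3, where
# the infra repo's lambda component can download it.
name: publish-lambda
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Package the function source into a ZIP artifact
      - run: zip -r lambda.zip src/
      # Upload to a shared artifact bucket (bucket name is a placeholder)
      - run: aws s3 cp lambda.zip s3://my-artifact-bucket/lambda/lambda.zip
```

A pipeline in the infra repo would then detect the new artifact and redeploy the lambda component, as described above.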

jose.amengual

yes, I could avoid using 2 PRs but no matter what a user will have to interact with two different repos

2023-01-26

2023-01-27

2023-01-31

Release notes from atmos
08:54:36 PM

v1.25.0 what Update Atmos Stack imports Update docs https://atmos.tools/core-concepts/stacks/imports/ Add imports schema with context Allow using Go templates in imported configurations why Auto-generate Atmos components from templates Using imports with context and parameterized config files will help to make the stack configurations extremely DRY, and is very useful when creating stacks and components for <a…

Release v1.25.0 · cloudposse/atmos

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
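A sketch of an import with context, per the v1.25.0 notes above (the file paths and the variable name are hypothetical):

```yaml
# stacks/orgs/acme/dev.yaml -- hypothetical stack file
import:
  - path: "catalog/eks/cluster"
    context:
      flavor: "blue"
```

The imported catalog/eks/cluster file could then reference the value as a Go template variable, e.g. {{ .flavor }}, letting one parameterized catalog file generate many component configurations.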
