#atmos (2024-03)

2024-03-01

Dr.Gao avatar

What additional features does cloudposse/github-action-pre-commit provide in addition to the pre-commit action that it forked from?

Hans D avatar

according to the Readme: NOTE: This is a fork of pre-commit/action to add additional features. That needs a bit more TLC I guess (open for PRs re that). Given the repo description, that should be in the area of allowing you to override the git config user name and email

RB avatar

I forked it originally, and the main reason to fork was exactly that. pre-commit is gravitating more toward the SaaS model, and they removed features from the upstream action, prompting the fork.

RB avatar

The upstream maintainer also refused PRs, so it was easier to maintain a fork

RB avatar

It essentially wraps the pre-commit CLI command, so it takes advantage of all the pre-commit features, plus setting the git owner of the pushed commits

Dr.Gao avatar

What are the features that you care about that were removed?

Dr.Gao avatar

I see what you are saying

Dr.Gao avatar

They are good reasons, thanks for clarifying

Dr.Gao avatar

I am curious how you keep it in sync with upstream while maintaining the features you added or would like to keep?

RB avatar

Good question. I don't maintain it anymore, but I believe Cloud Posse can answer that better. I recall Dependabot is used to keep up with package updates, and the pre-commit version installed is always the latest.

RB avatar

What’s the primary concern? Have you noticed a feature in the upstream action that the fork doesn’t have?

Dr.Gao avatar

I do not have a specific concern at the moment. I'm in the process of choosing between this fork and the upstream action directly, so I'm trying to collect all the information so I can make a decision

Hans D avatar

noticed that there is quite some divergence between the two now, so that needs some work/contributions. Nothing planned though re getting this back into a more synced state.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The official action is deprecated. That was the main reason we forked.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Agree the description could be updated to be clearer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If there are updates we should pull in, we should do that.

Dr.Gao avatar

Thanks for the info, it's helpful!

pv avatar

Does Atmos use Terraform workspaces by default? If so, what for? If not, why not, and how would you use them with Atmos?

Brian avatar

Yes, Atmos does use Terraform workspaces. Because Atmos is used to deploy “components” (aka small reusable Terraform root modules), it uses workspaces to prevent collisions when deploying multiple instances of the same component (e.g., marketing-db, platform-db, etc).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, so by convention, we recommend using workspaces; that way the backend can be configured once, and then a workspace is used for each stack the component is deployed in.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s technically not required, but it’s what we use and have the most experience with.
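For example, a minimal sketch (file layout, component, and variable names are hypothetical) of one vpc component reused across two stacks, with the backend configured once and a separate workspace used per stack:

# stacks/dev.yaml (sketch)
import:
  - catalog/vpc
vars:
  stage: dev    # the workspace for this instance is derived from the stack, e.g. "dev"

# stacks/prod.yaml (sketch)
import:
  - catalog/vpc
vars:
  stage: prod   # separate workspace, same backend configuration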

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) I can’t seem to find any docs on workspace configuration

2024-03-04

Release notes from atmos avatar
Release notes from atmos
10:44:36 PM

v1.65.0 Add providers section to Atmos manifests. Update docs @aknysh (#555)

Add `providers` section to Atmos manifests. Update docs by aknysh · Pull Request #555 · cloudposse/atmos

what

Add providers section to Atmos manifests Auto-generate the prefix attribute for Atmos components for Terraform backend gcs for GCP Update docs (https://pr-555.atmos-docs.ue2.dev.plat.cloudpos

Hans D avatar

@Andriy Knysh (Cloud Posse) If I interpret this correctly, this means we can replace the dynamic iam-role parts found in most component providers.tf files (sourcing from account-map) using this more static approach?

Add `providers` section to Atmos manifests. Update docs by aknysh · Pull Request #555 · cloudposse/atmos

what

Add providers section to Atmos manifests Auto-generate the prefix attribute for Atmos components for Terraform backend gcs for GCP Update docs (https://pr-555.atmos-docs.ue2.dev.plat.cloudpos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that could be one of the use cases (if you want to use static values)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on other use cases for the feature

Hans D avatar

seeing some use cases as well (localstack needs some overrides too; nice to set them from here)

Matt Gowie avatar
Matt Gowie

cc @kevcube since you may be a fan of this feature

2024-03-05

pv avatar

Are these atmos accelerators supported?

https://github.com/slalombuild/terraform-atmos-accelerator/blob/main/components/terraform/gcp/network/README.md

We were given a link to these for GCP use. Most of them are empty placeholders and the readmes give no examples of their yaml configurations. For example with the one I shared, it does not give any examples on how to configure routes or cloud nat or firewall rules so this example is completely useless for anything other than a basic deployment. Whenever I try to configure something new, atmos complains because the yaml is not formatted properly

jose.amengual avatar
jose.amengual

We created those components and we try our best to update them and document them but I will say, they are a good starting point for you to fork and keep your version of them

pv avatar

Thank you, this helps, but why aren't these in the readmes?

pv avatar

Ok if the readmes could be updated at some point that would be greatly appreciated

jose.amengual avatar
jose.amengual

we just have not had time to get all the readmes updated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, for the sake of clarity, those templates are by Slalom, not Cloud Posse, so Cloud Posse is not the best at answering questions on those. There are others in the channel though, including @jose.amengual, who are maintaining these templates)

pv avatar

Does Atmos automatically migrate state if you change the backend for a stack?

jose.amengual avatar
jose.amengual

I think Terraform will detect that the backend was X and is now Y, and ask if you want to migrate

jose.amengual avatar
jose.amengual

but I have only done that from local to S3

pv avatar

So I would have to do that change locally and not in my pipeline?

jose.amengual avatar
jose.amengual

you change the yaml, which then will render a different backend config

jose.amengual avatar
jose.amengual

whether you run that in a pipeline or not doesn't matter

jose.amengual avatar
jose.amengual

After the backend config is generated, it's all Terraform from there, so whatever the Terraform workflow is for switching backends is what will happen

jose.amengual avatar
jose.amengual

I do not think you can do that in a pipeline, because changing the backend usually requires an interactive CLI response

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s also possible to keep the state location idempotent, by setting some parameters in the backend, that way the location doesn’t change even if the structure changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The right solution will depend on what you need to accomplish

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual is correct that atmos doesn’t perform automatic state migrations at this time. It’s nontrivial given there are dozens of backend types

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Using an atmos custom command you can make that easier

pv avatar

So, specifically, we needed to fix our GCS backend because we did not have a prefix set. However, when I update the backend, neither “atmos terraform plan” nor “atmos terraform init” asks to migrate the state. It just says that it sees the backend and wants to add existing resources

jose.amengual avatar
jose.amengual

I think you need to run -reconfigure for that

pv avatar

Can I run that with the atmos command?

pv avatar

@jose.amengual I don’t think that is correct. Reconfigure should just tell tf to ignore the existing state and only use the new backend. Terraform init should ask me to migrate state but it does not

pv avatar

Is the issue that the prefix does not constitute a change?

jose.amengual avatar
jose.amengual

mmm you might be right

jose.amengual avatar
jose.amengual

then what you could do is to switch to local

jose.amengual avatar
jose.amengual

by pulling the state; then, once it's local, move it to the correct backend

pv avatar

Hmm seems like a lot of work with all the prefixes I had to add to each component. I think I will just drag the files to the new dir and run the apply and it should see the state in the correct location

jose.amengual avatar
jose.amengual

ohh, if you can do that then yes

jose.amengual avatar
jose.amengual

I have no idea about what backend you use

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv in the latest Atmos release (https://github.com/cloudposse/atmos/releases/tag/v1.65.0), the prefix for the GCP backend (gcs) will be generated automatically using the Atmos component names (you don't need to manually define prefixes for all Atmos components)

pv avatar

@Andriy Knysh (Cloud Posse) I think the issue is that when we do not set the dir in the prefix for a component, certain components try to delete and re-add other components, so some of the backends need to be defined more specifically

2024-03-06

Andy Wortman avatar
Andy Wortman

I’m attempting to expand our atmos architecture to support multiple accounts and regions. I’ve found the docs for overriding the provider, but I also need to override the component’s S3 backend. Is there a way to do that? I’m not finding it in the docs…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

see this doc on how to configure backends https://atmos.tools/quick-start/configure-terraform-backend

Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

backend_type and backend sections are first class sections similar to vars, settings, providers, env - meaning they can be specified at any level/scope - Org, tenant, account, region (in the corresponding _defaults.yaml), and can also be specified per component (if needed)
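For instance, a sketch (bucket, table, and role values are placeholders) of defining the backend once at the Org level and overriding a single value for one component; the scopes are merged down the hierarchy:

# stacks/orgs/acme/_defaults.yaml (hypothetical path and values)
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-tfstate
      dynamodb_table: acme-tfstate-lock
      region: us-east-1
      encrypt: true

# a stack manifest can still override part of it for one component
components:
  terraform:
    special-component:
      backend:
        s3:
          role_arn: "arn:aws:iam::111111111111:role/special-terraform"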

Andy Wortman avatar
Andy Wortman

Awesome, that’s exactly what I’m looking for. Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you can have a diff backend per Org, tenant, region (if needed), account, or even per component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and use inheritance https://atmos.tools/core-concepts/components/inheritance to make the entire config DRY

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Andy Wortman avatar
Andy Wortman

We're currently defining the backend in each component, so auto_generate_backend_file is currently false. This is a global setting, right, so I'll need to be careful about shifting existing infrastructure? Or does the component's backend.tf override this until we make the change for each component?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

auto_generate_backend_file is a global setting in atmos.yaml
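In atmos.yaml it lives under the components.terraform section, roughly like this (a fragment, shown as a sketch):

components:
  terraform:
    base_path: "components/terraform"
    # when true, Atmos writes a backend file (backend.tf.json) for each component
    # from the backend/backend_type sections in the stack manifests
    auto_generate_backend_file: true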

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I would set it to true and then check terraform plan on the components to see whether the backend changes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it does for any reason, there are ways to override it to make it the same as it was before for each component that is already deployed

Andy Wortman avatar
Andy Wortman

awesome. Yeah, that would be a big job, but definitely doable.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you will need to set auto_generate_backend_file in any case if you want to use multiple Atmos components managing the same TF component in the same stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Terraform Backend | atmos

In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts

Andy Wortman avatar
Andy Wortman

oh wow, that’s how you do multiple copies of the same component within a stack - I was considering that some time ago, and assumed it was impossible. outstanding!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need any help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


that’s how you do multiple copies of the same component within a stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here are some links that will help you understand different patterns for how to deploy multiple copies of the same component in the same stack:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
https://atmos.tools/design-patterns/multiple-component-instances

https://atmos.tools/design-patterns/component-catalog

https://atmos.tools/design-patterns/component-catalog-with-mixins

https://atmos.tools/design-patterns/component-catalog-template
Andy Wortman avatar
Andy Wortman
components:
  terraform:
    # Atmos component `vpc/1`
    vpc/1:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Conceptually, an Atmos component is not the same as a Terraform component (although in many cases there is a one-to-one correspondence). An Atmos component manifest is the configuration of a Terraform component inside a Stack manifest; or, we can call it metadata for a Terraform component. This means you can configure multiple Atmos components with different settings and point them to the same TF component (code). This allows you to have generic TF components (root modules) that can be deployed multiple times in the same or many accounts and regions without changing any Terraform code at all (the TF components don't know and don't care where they will be deployed). This is a separation of code (TF components) from config (Atmos component manifests and Stack manifests)
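For example, a sketch (component names and variables are hypothetical) of two Atmos components configured differently but backed by the same Terraform root module:

components:
  terraform:
    # two Atmos components, one Terraform component (components/terraform/rds)
    marketing-db:
      metadata:
        component: rds
      vars:
        name: marketing-db
        instance_class: db.t3.medium
    platform-db:
      metadata:
        component: rds
      vars:
        name: platform-db
        instance_class: db.r5.large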

Andy Wortman avatar
Andy Wortman

I’m struggling a bit with overriding providers. I’ve been able to override the backend config; the migration to dynamically-generated backends was complicated, but not too painful. But the same method doesn’t seem to be working with provider overrides.

Here’s the yaml I created for a particular account, within my catalog: (data redacted)

terraform:
  providers:
    aws:
      region: us-west-2
      assume_role: "arn:aws:iam::XXXX:role/<role_name>"
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: <bucket_name>
      dynamodb_table: <table_name>
      key: "terraform.tfstate"
      region: "us-west-2"
      role_arn: "arn:aws:iam::XXXX:role/<role_name>"

Then I import this file into the stack yaml. The backend override is working fine, but my plan appears to ignore the providers override. It’s not creating a providers_override.tf.json in the component directory, and the resources are set to be provisioned in my default account, instead of the one I specified in the providers block.

Am I missing something?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are, add --logs-level=Trace flag

atmos terraform plan <component> -s <stack> --logs-level=Trace
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should show if the providers_override.tf.json file is generated and where

Andy Wortman avatar
Andy Wortman

That was exactly the problem - I had to update to the latest release. I noticed because the logs-level flag wasn’t recognized

Andy Wortman avatar
Andy Wortman

Once I updated, the provider override worked perfectly. Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(logs level was implemented a few weeks ago, but the providers section only in the latest release)

Patrick McDonald avatar
Patrick McDonald

We’re managing a multi-tenant architecture where each tenant operates within their own AWS account. I’m looking for efficient ways to monitor and detect changes within each tenant’s stack. Upon detecting changes, I would like to automatically run atmos terraform plan specific to the affected stack and tenant in their respective AWS account.

Patrick McDonald avatar
Patrick McDonald

I'm familiar with atmos affected stacks - I'm interested in whether there's a recommended pattern to apply the changes to the target AWS accounts

Hans D avatar

Unless you're doing something special, the AWS account is part of the reported stack (normally the AWS account is the “tenant”-“stage” combination), so this should do everything you need.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Patrick McDonald have you seen our github actions? We have one for affected-stacks. I believe @Dan Miller (Cloud Posse) may be working on a public reference example.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our actions are already public

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


monitor and detect changes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

do you mean drift detection?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have workflows for that too that are pretty rad.

Patrick McDonald avatar
Patrick McDonald

I guess I'm looking for how to manage authenticating into the respective tenant accounts to run atmos terraform plan for every tenant change.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

we use Github OIDC to assume a role in AWS that can assume other roles across the organization

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Patrick McDonald in what context are you thinking? Locally or through automation?

Patrick McDonald avatar
Patrick McDonald

through automation. I would like the GitHub workflow to detect the change, assume role into the target account, and plan the Terraform. I'm assuming the affected-stacks action just detects changes?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

affected-stacks detects changes that were made against code. However that does not detect any “drift” or changes that were made to the resources themselves outside of code

We have a few workflows, but the basic use case is to find all “affected stacks” or changes to code, and then run terraform against those resources. Will link that in a second

The more complex use case is what we call “drift detection”. That’s where we regularly check for changes in every single terraform resource in all stacks and create a GitHub Issue for any “drifted” resources

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
cloudposse/github-action-atmos-affected-stacks

A composite workflow that runs the atmos describe affected command

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

^ this action compares git refs between the main branch and the given branch, and then returns a list of all changed components and stacks

Patrick McDonald avatar
Patrick McDonald

I apologize, I'm not asking the right question. It's more of a GitHub question than Atmos. Let's say the affected-stacks action finds the change and all is well. I have a workflow that will assume role into our sandbox account to plan/apply changes using the aws-actions/configure-aws-credentials@v3 action.

Since I have 10 accounts, is there an easy way to dynamically figure out the account of the changed tenant/stack and assume role?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

No worries! We don’t assume the role of the target account in the GitHub Action workflow directly. Instead, we assume 1 role in 1 central “identity” account and then assume another role in any target account by means of the Terraform provider configuration

like this

# fragment of providers.tf; the enclosing provider block is sketched in here for context
provider "aws" {
  region = var.region

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}
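On the GitHub side, the central role is typically assumed via OIDC with aws-actions/configure-aws-credentials; a rough sketch (the role ARN and region are placeholders):

jobs:
  plan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for GitHub OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: arn:aws:iam::111111111111:role/github-oidc-identity   # placeholder
          aws-region: us-east-1
      # the Terraform provider then assumes the per-account "terraform" role (see the HCL above)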
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

this is a part of our “reference architecture”, but here’s an idea of what that would look like. You can have a role for “github” workflow and a role for “devops” users. Both can assume the same “terraform” role in any target account. Then use that “terraform” role to plan and apply terraform

Patrick McDonald avatar
Patrick McDonald

ok so the “assuming” happens in the terraform

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yes exactly

Patrick McDonald avatar
Patrick McDonald

gotcha.. makes sense.

2024-03-07

2024-03-08

pv avatar

Does anyone know how to get Atmos to work with https_proxy env var? Normal terraform is picking it up but it appears the Atmos binary is not passing the env var of the OS it is run on to use the proxy.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos should pass all the OS environment variables to Terraform, plus the ENV variables defined in the stack config. Using this Golang code

	cmd := exec.Command(command, args...)
	cmd.Env = append(os.Environ(), env...)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where os.Environ() is all ENV variables in the executing process

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

env are the ENV variables defined in stack manifests

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pv how do you check that https_proxy is not passed?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

FYI, to bypass the Atmos binary, you can execute atmos terraform shell <component> -s <stack> (which will generate the varfile and backend for the component in the stack), and then you can execute any native terraform command and check if the ENV var is working

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform shell | atmos

This command starts a new SHELL configured with the environment for an Atmos component in a stack to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also use the env section in stack manifests. The env section is a first-class section like vars; you can define it globally, per Org, account, region, or per component (all of those will be deep-merged into final values). For example, in a component:

components:
  terraform:
    my-component:
      env:
        https_proxy: <value>
        
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all ENV vars defined in the env section will be passed to Terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and when you set the --logs-level parameter to Trace, you will see Atmos messages about which ENV vars are being used. For example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos terraform plan <component> -s <stack> --logs-level=Trace

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos will show a message like this:

Using ENV vars:

AWS_PROFILE=xxxxx
TF_DATA_DIR=xxxxx
TF_IN_AUTOMATION=true
https_proxy=<value>
Andy Wortman avatar
Andy Wortman

I’m struggling a bit with overriding providers. I’ve been able to override the backend config; the migration to dynamically-generated backends was complicated, but not too painful. But the same method doesn’t seem to be working with provider overrides.

Here’s the yaml I created for a particular account, within my catalog: (data redacted)

terraform:
  providers:
    aws:
      region: us-west-2
      assume_role: "arn:aws:iam::XXXX:role/<role_name>"
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: <bucket_name>
      dynamodb_table: <table_name>
      key: "terraform.tfstate"
      region: "us-west-2"
      role_arn: "arn:aws:iam::XXXX:role/<role_name>"

Then I import this file into the stack yaml. The backend override is working fine, but my plan appears to ignore the providers override. It’s not creating a providers_override.tf.json in the component directory, and the resources are set to be provisioned in my default account, instead of the one I specified in the providers block. Am I missing something?

Andy Wortman avatar
Andy Wortman

Ahh… found the problem. I was running on older version of atmos (1.44.0). Upgraded to 1.65.0 and the providers override was picked up.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Andriy Knysh (Cloud Posse) shouldn’t atmos error if it encounters an unsupported key? e.g. using an old atmos with providers block)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should, but only if the version that @Andy Wortman used supported the Atmos Manifest Schema, AND the schema is added to the repo and configured

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1.44.0 is a very old Atmos version, which does not support Atmos Manifest Schema

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman updated to 1.65.0, but the Atmos Manifest Schema still needs to be added to the repo and configured in atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman we recommend you do it (add and configure the schema) - we did it after it was implemented and found a lot of misconfigurations (even though many people had been looking at the stack manifests for months)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you can run atmos validate stacks and it will validate all stack manifests (try to “misconfigure” any section to test it). Also, when running atmos terraform plan/apply, it will also validate all stack manifests (so you can catch any misconfig)

2024-03-12

Release notes from atmos avatar
Release notes from atmos
08:04:34 PM

v1.66.0 Add stacks.name_template section to atmos.yaml. Add Go templating to Atmos stack manifests @aknysh (#560)

Release v1.66.0 · cloudposse/atmos

Add stacks.name_template section to atmos.yaml. Add Go templating to Atmos stack manifests @aknysh (#560) what

Add stacks.name_template section to atmos.yaml Add Go templating to Atmos stack…



prwnd9 avatar

Hi, I have trouble vendoring on atmos:

# vendor.yaml
# https://atmos.tools/quick-start/vendor-components
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    # https://github.com/cloudposse/terraform-aws-codebuild
    - component: "codebuild"
      source: "github.com/cloudposse/terraform-aws-codebuild.git"
      targets:
        - "components/terraform/codebuild"
      included_paths:
        - "**/*.tf"

I got this error after atmos vendor pull:

error downloading 'https://github.com/cloudposse/terraform-aws-codebuild.git': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git

I suspect I have the wrong source syntax in vendor.yaml? I could clone successfully using git clone https://github.com/cloudposse/terraform-aws-codebuild.git; git version 2.34.1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make copies of 3rd-party components in your own repo.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to use a source like these

github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}

github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Note the usage of the ///, which is to vendor from the root of the remote repository.
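Applied to the vendor.yaml above, the source would look roughly like this (a sketch; the version value is a placeholder, pick a real tag of the module):

spec:
  sources:
    - component: "codebuild"
      source: "github.com/cloudposse/terraform-aws-codebuild.git///?ref={{.Version}}"
      version: "2.0.2"
      targets:
        - "components/terraform/codebuild"
      included_paths:
        - "**/*.tf"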

2024-03-13

Selçuk KUBUR avatar
Selçuk KUBUR

Hi everyone, I'm new to Atmos and looking for a repo structure for provisioning an EKS cluster within Organizational Units.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Selçuk KUBUR you can start with these two docs:

https://atmos.tools/design-patterns/organizational-structure-configuration

Organizational Structure Configuration Atmos Design Pattern | atmos

Organizational Structure Configuration Atmos Design Pattern

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can clone the repo, and then use the Organizational Structure Configuration Atmos Design Pattern to extend it to have an Org and OU config. Also add more components that you need (e.g. EKS and all the releases that you want to deploy to EKS)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please also look at how to configure the Atmos stack naming convention https://atmos.tools/cli/configuration#stacks (stacks.name_pattern in atmos.yaml)

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
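For example, a common naming pattern (a sketch; adjust the tokens to your own layout):

# atmos.yaml (fragment)
stacks:
  name_pattern: "{tenant}-{environment}-{stage}"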

Selçuk KUBUR avatar
Selçuk KUBUR

Hi again, @Andriy Knysh (Cloud Posse), thank you so much. I have configured the repo structure for the org/OUs with the guide and downloaded the module for EKS, but in deployment I'm getting the issue below. Any idea how I can fix it?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cluster/providers.tf the module "iam_roles" is used to read the IAM role for Terraform to assume. You can use the module (it will require some effort from you to understand and configure), or just update the code here to use your own IAM role for Terraform (or no role if you want Terraform to use the role that you assume when provisioning the resources - you need to consider what you want)

provider "aws" {
  region = var.region

  assume_role {
    # WARNING:
    #   The EKS cluster is owned by the role that created it, and that
    #   role is the only role that can access the cluster without an
    #   entry in the auth-map ConfigMap, so it is crucial it is created
    #   with the provisioned Terraform role and not an SSO role that could
    #   be removed without notice.
    #
    # This should only be run using the target account's Terraform role.
    role_arn = module.iam_roles.terraform_role_arn
  }
}

module "iam_roles" {
  source = "../../account-map/modules/iam-roles"

  profiles_enabled = false

  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want your own role for Terraform (different from the role that you assume), then do this:

provider "aws" {
  region = var.region

  assume_role {
    # WARNING:
    #   The EKS cluster is owned by the role that created it, and that
    #   role is the only role that can access the cluster without an
    #   entry in the auth-map ConfigMap, so it is crucial it is created
    #   with the provisioned Terraform role and not an SSO role that could
    #   be removed without notice.
    #
    # This should only be run using the target account's Terraform role.
    role_arn = <Terraform IAM role with permissions to provision all the resources>
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to use the role that you assume, do this:

provider "aws" {
  region = var.region

}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

pay attention to the warning

    # WARNING:
    #   The EKS cluster is owned by the role that created it, and that
    #   role is the only role that can access the cluster without an
    #   entry in the auth-map ConfigMap, so it is crucial it is created
    #   with the provisioned Terraform role and not an SSO role that could
    #   be removed without notice.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it means that you have to pay attention to the role you use to provision the EKS cluster. If the role is lost/deleted for any reason, you will lose admin access to the cluster)

Kubhera avatar
Kubhera

It worked now :slightly_smiling_face: I had to add the force protocol prefix git::. source: "git::<https://CentralCIRepoToken>:<my_token_goes here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"

Kubhera avatar
Kubhera

how do we hide sensitive data from being exposed in a stack file? Is there any way I can read from an environment variable into a stack?

Brian avatar

Terraform natively supports environment variables prefixed with TF_VAR_ for setting input variables, but I would not recommend using them. Because you want consistent outcomes across different execution environments, you should use a secrets manager.

For sensitive data, the recommended practice is to store it in a secrets manager (e.g., AWS Secrets Manager or SSM Parameter Store). You can then use Terraform’s data sources to retrieve these secrets during plan/apply, ensuring resources are configured consistently, regardless of the execution context.

However, using environment variables to configure Terraform providers is a common practice.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


You can then use Terraform’s data sources to retrieve these secrets during plan/apply, ensuring resources are configured consistently, regardless of the execution context.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you can define the SSM/ASM paths to the secrets in Atmos manifests

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

agree with @Brian, using ASM/SSM is one of the best and most secure solutions (and it can be used from localhost and from CI/CD)

Kubhera avatar
Kubhera

git://CentralCIRepoToken>:{{env “CI_REPO_TOKEN”}} works for me in vendor.yaml, but the same syntax is not supported in stack YAMLs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stack manifests (yaml files) are not for vendoring, they are to configure components (e.g. terraform variables) and stacks (where the components are provisioned). What are you trying to achieve?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can then configure ssm_github_api_key in Atmos stack manifest for your component:

components:
  terraform:
    my-component:
      vars:
        ssm_github_api_key: "<SSM path to the secret>"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) shouldn’t this work with the latest atmos 1.66?

components:
  terraform:
    my-component:
      vars:
        super_sensitive: '{{env "CI_REPO_TOKEN"}}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This would rely on the Sprig env function.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(but @Andriy Knysh (Cloud Posse) is correct, that at Cloud Posse, we would generally read the secrets from SSM, rather than the ENV)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes it will work, all Sprig functions are supported

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

2024-03-14

cricketsc avatar
cricketsc

Is there a canonical way of setting booleans using stack manifest templating?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@cricketsc you are right, Go templates quote boolean values. I've just tested a few variants using a few functions, and the final string produced from a template still has quoted bools

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the string "true" will still work as a boolean value for a bool variable in Terraform

Terraform automatically converts number and bool values to strings when needed. It also converts strings to numbers or bools, as long as the string contains a valid representation of a number or bool value.

true converts to "true", and vice-versa
false converts to "false", and vice-versa
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) isn't that because we are requiring the go template to be encased in quotes?

foo: '{{ .... }}'

That means anything returned by the go template will be a string.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way around this is if we start supporting YAML modifiers.

We don’t have that today.

e.g.

foo: !boolean '{{ ... }}'
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) is correct, that terraform still has an awkward relationship with strings and booleans.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@cricketsc if you are encountering a problem, please share more details and maybe there’s an alternative.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and by “modifiers” I mean explicit types. https://github.com/cloudposse/atmos/issues/267

#267 Support YAML Explict Types

what

Support YAML “explicit types” used in numerous other projects, like Home Assistant.

why

• Greater flexibility to organize configurations
• Greater extensibility; should be able to add more of these explicit types.

Examples

• !env_var FOO will insert the value of the FOO environment variable (inspired by Home Assistant YAML)
• !unset will delete the value from the stack configuration (#227)
• !include FILE will insert the YAML contents of the file at that position with proper indentation
• !include_dir DIR will insert all the YAML files in lexicographic order with the proper indentation
• !secret aws/ssm FOO will read the value from AWS SSM and insert the value into the in-memory stack configuration

Set the CLOUDFLARE_API_KEY for the cloudflare provider.

env:
  CLOUDFLARE_API_KEY: !secret aws/ssm FOO

Related

#227

See Also

https://www.home-assistant.io/docs/configuration/splitting_configuration/
https://www.home-assistant.io/docs/configuration/splitting_configuration/#advanced-usage
https://www.home-assistant.io/docs/configuration/yaml/
https://stackoverflow.com/questions/63567945/how-to-extend-go-yaml-to-support-custom-tags

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh!

TIL:

!!bool is a built-in tag in YAML used to explicitly specify that the data type of a value is boolean. 
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So try this, @cricketsc

foo: !!bool '{{ ... }}'
cricketsc avatar
cricketsc

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) thanks for taking a look at this!

2024-03-15

Andy Wortman avatar
Andy Wortman

Having some trouble with cloudposse/github-action-atmos-affected-stacks. Is there a dependency between the atmos version and the version of the atmos-affected-stacks action? I recently upgraded to atmos 1.65. The actions all worked for that commit, but recently we've started seeing the below error on every PR. I've tried upgrading the versions of cloudposse/github-action-setup-atmos and cloudposse/github-action-atmos-affected-stacks we're using, while staying on v1 of both. Same error as below. Upgrading to v2 of both took care of this error, but broke the matrix code that triggers the plan/apply steps.

Run atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch"
  atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch"
  affected=$(jq -c '.' < affected-stacks.json)
  printf "%s" "affected=$affected" >> $GITHUB_OUTPUT
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    ATMOS_CLI_PATH: /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos
    TERRAFORM_CLI_PATH: /home/runner/work/_temp/2b951b22-3979-4563-9e42-c061f9ebb96f
    ATMOS_CLI_CONFIG_PATH: atmos.yaml
Current working repo HEAD: ad0b5b3d6d9f6ce34f82c1222a47b77488982893 HEAD
Remote repo HEAD: 738d0df4ab18f9d845a20049457fca92ef47639e refs/heads/main
template: describe-stacks-all-sections:35: function "SessionName" not defined

Error: Process completed with exit code 1.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andy Wortman Atmos 1.66.0 introduces Go templates in Atmos stack manifests https://github.com/cloudposse/atmos/releases/tag/v1.66.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the error means that you use other Go templates in your YAML files (not intended for Atmos processing, but rather for the resources being provisioned)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the fix for that:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in all your Go templates, instead of using this

{{ ... }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

need to use this

{{`{{ .... }}`}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or this

{{ printf "{{ ..... }}" }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I understand this is an “inconvenience”, but that's an issue in any tool that uses Go templates itself and also allows configuring Go templates for the resources that it provisions. Helm and helmfile use the same

{{`{{ .... }}`}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in Helm, people have been discussing it for years, see https://github.com/helm/helm/issues/2798

#2798 Can Helm support to ignore {{expr}} which is just for configuration but not render?

There is a use case: deploy Prometheus as StatefulSet and config alerting-rules as ConfigMap.

alerting-rules can take more detail on here: https://prometheus.io/docs/alerting/rules/#alerting-rules

it looks like:

  IF node_memory_Active >= 1
  FOR 1m
  LABELS { 
    service = "k8s_metrics", 
    alertname = "InstanceMemoryOverload" 
  }
  ANNOTATIONS {
    summary = "Instance {{ $labels.instance }} memory overload",
    description = "{{ $labels.instance }} memory overload for more than 1 minutes, now is {{ $value }}."
  }

Can Helm support to ignore {{expr}} which is just for configuration but not render?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, we have now a situation where Go templates are used in diff contexts, and using this solves the issue

{{`{{ .... }}`}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Go templating does not process the templates in it, but rather just outputs them verbatim (which then goes to the provisioned resources)
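For example (a sketch; the component and variable are hypothetical, and the annotation value is just an illustration of an embedded template):

components:
  terraform:
    my-component:
      vars:
        alert_summary: "{{`Instance {{ $labels.instance }} memory overload`}}"
        # Atmos renders the outer template and emits the inner {{ ... }} verbatim,
        # so Terraform receives: Instance {{ $labels.instance }} memory overload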

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if it solves the issue

Andy Wortman avatar
Andy Wortman

You said that was added in atmos 1.66 - I’m using 1.65 and getting this error.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, prob the GH action just downloads the latest version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can either pin Atmos in the GHA, or update your templates and Atmos to 1.66.0

Andy Wortman avatar
Andy Wortman
Setup atmos version spec 1.65.0
Attempting to download 1.65.0...
Found in cache @ /opt/hostedtoolcache/atmos/1.65.0/x64
Successfully set up Atmos version 1.65.0 in /opt/hostedtoolcache/atmos/1.65.0/x64
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this template: describe-stacks-all-sections:35: function "SessionName" not defined is a message from the Go templates in Atmos 1.66.0 (before that, that message did not exist)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

somehow/somewhere the GHA installs the latest Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

btw, here's how to pin the GHA to an Atmos version

steps:
  - uses: hashicorp/setup-terraform@v2

  - name: Setup atmos
    uses: cloudposse/github-action-setup-atmos@v1
    with:
      version: 1.65.0
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the error you are seeing is from github-action-atmos-affected-stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check if Atmos version is pinned

steps:
        - uses: actions/checkout@v3
        - id: affected
          uses: cloudposse/github-action-atmos-affected-stacks@v3
          with:
            atmos-config-path: ./rootfs/usr/local/etc/atmos/
            atmos-version: .....
            nested-matrices-count: 1
Andy Wortman avatar
Andy Wortman

yeah, I think I found it! get-affected-stack calls its own copy of setup-atmos

Andy Wortman avatar
Andy Wortman

The setup-atmos call we defined was locked to 1.65.0, but the get-affected-stacks one wasn’t

Andy Wortman avatar
Andy Wortman

Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sorry for this inconvenience, but adding Go templates to Atmos manifests requires using

{{`{{ .... }}`}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for the “raw” embedded templates, there is no way around it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me show you how you can modify the GHA to get the Atmos version from the Dockerfile in the repo, so those are always in sync (the same version is used on localhost and in the GHA):

Andy Wortman avatar
Andy Wortman

This is only a problem in atmos manifests - like stack yaml? If that’s the case, no big deal. We’re using Go templates in like 6 places. Easy to adjust.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
jobs:
  context:
    runs-on: [self-hosted, Ubuntu]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Extract ATMOS_VERSION from Dockerfile
        id: extract_atmos_version
        run: |
          version=$(grep 'ARG ATMOS_VERSION=' Dockerfile | cut -d'=' -f2)
          echo "atmos_version=$version" >> "$GITHUB_ENV"

      - name: Atmos Affected Stacks
        uses: cloudposse/github-action-atmos-affected-stacks@v3
        with:
          atmos-version: ${{ env.atmos_version }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


This is only a problem in atmos manifests - like stack yaml? If that’s the case, no big deal. We’re using Go templates in like 6 places. Easy to adjust.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

exactly

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just use

{{`{{ .... }}`}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

instead of

{{ .... }}
Andy Wortman avatar
Andy Wortman

Thanks so much. Someday we’ll meet at a conference or something, and your beers are on me

Andrew Ochsner avatar
Andrew Ochsner

Curious what the right approach is or how to do what i’m trying to do. I’m in Azure land and defining policies via json that just gets jsondecoded and i create a resource…not unlike aws https://github.com/cloudposse/terraform-aws-service-control-policies/tree/main/catalog

I am trying to figure out where the right place to put those files is, and initially I'm thinking stacks/catalog/policy-definnitions. But I'm not sure how to get the right path to flow through to the terraform component that lives in components/terraform/policy-definitions. Am I just stuck needing to define those in the component itself?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You are thinking along the right lines. One thing we like to do is not mix different types of config inside the catalog. So for our refarch, it's strictly for “stack” manifests. We create a separate folder we call “policies” for our OPA policies. Now this is slightly different from what you're trying to do, because those policies relate to Atmos, and Atmos knows where to find them.

In your case, you're defining policies that relate to a specific component and in a custom format, and those configurations only make sense in the context of that component, so when we do this, we typically have a folder inside the component with the configuration options related to that component. Don't treat this as canon, however; rules are meant to be broken. So to directly answer your specific question, I believe there's a setting that is exposed which contains the base path for Atmos, and using the new features of Atmos 1.66 you should be able to refer to anything in the entire context using double mustaches. Unfortunately, I am on my phone and do not know the setting off the top of my head. As soon as @Andriy Knysh (Cloud Posse) is around, he can probably share what that is.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) how do you get the absolute base path of the atmos working directory? I suppose when @Andrew Ochsner is trying to access this config in the context of their module, they are in the temp module directory that terraform creates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are a few possible ways of doing this:

  1. Put the policies inside the component (e.g. components/terraform/<my-component>/modules/policy-definitions) and use them from the terraform code directly
  2. Put the policies directly in YAML in Atmos stack manifests as JSON strings in multi-line YAML (prob not a good idea)
  3. Put the policies in some other folder (e.g. stacks/catalog/policy-definnitions). In this case, the terraform code needs to know how to find that folder. This can’t be done directly in TF w/o hardcoding the path from the component to the folder. The only way to do it is to add another variable to the TF code and specify the path to the policies folder (that var can be configured in Atmos manifests).
  4. When you run atmos describe component <component> -s <stack>, you see
    atmos_cli_config:
      base_path: <absolute_path_to_repo>
      stacks:
        base_path: stacks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use that in Atmos manifest for the component using Atmos 1.66.0 and Go templates (https://atmos.tools/core-concepts/stacks/templating/) by joining:

{{ .atmos_cli_config.base_path }}/{{ .atmos_cli_config.stacks.base_path }}/catalog/policy-definnitions
Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, you still need a separate Terraform variable to provide the path to the policies to the Terraform code, but that variable can be set in Atmos manifests using Go templates:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g.

components:
   terraform:
     my-component:
       vars:
         policies_path: "{{ .atmos_cli_config.base_path }}/{{ .atmos_cli_config.stacks.base_path }}/catalog/policy-definnitions"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Andrew Ochsner @Erik Osterman (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since Atmos supports the native Go templates as well as the Sprig functions in Atmos manifests, you can also use any functions (if needed) from https://masterminds.github.io/sprig/paths.html

Path and Filepath Functions

Useful template functions for Go templates.

Andrew Ochsner avatar
Andrew Ochsner

cool thanks for all of these options! had lost some momentum on this and will hopefully have time tomorrow or Friday to pick it back up so will noodle on it when i get there


2024-03-16

Ryan avatar

Hey all - new to the updated Atmos - I’m trying to create a local repo to bounce a few ideas around but I’m unsuccessful thus far getting Atmos to path through a basic name pattern. It works in my work environment but I didn’t do initial config, maybe I’m missing something.

Atmos.yaml -

components:
  terraform:
    base_path: "components/terraform"

stacks:
  base_path: "stacks"
  name_pattern: "{stage}"

schemas:
  jsonschema:
    base_path: "stacks/schemas/jsonschema"
  opa:
    base_path: "stacks/schemas/opa"
  atmos:
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"

stacks\example.yaml -

import: []
vars:
  stage: example


terraform:
  vars: {}

helmfile:
  vars: {}

components:
  terraform:
    fetch-location:
      vars: {}

    fetch-weather:
      vars: {}

    output-results:
      vars:
        print_users_weather_enabled: true

  helmfile: {}

command -> atmos terraform plan fetch-location -s example

Thanks everyone, have a good weekend otherwise.
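For comparison, a quick-start style atmos.yaml also declares which files Atmos should treat as stack manifests via included_paths/excluded_paths under stacks (a sketch; the glob values are illustrative):

stacks:
  base_path: "stacks"
  included_paths:
    - "**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"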

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is what we use to generate the live examples on our atmos.tools landing page

Ryan avatar

Thank you, will give it a review.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it looks pretty similar to your example, so it should be a good starting point

Ryan avatar

Sweet yea now I’m good thank you.

2024-03-19

Selçuk KUBUR avatar
Selçuk KUBUR

Hello everyone, I'm trying to provision the eks component but am getting the issue below when running atmos terraform plan on it: “module.eks.data.utils_component_config.config” says it failed to find a match for the import, etc. Any idea how I can fix that issue?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

Could you share the stack file under orgs/* that has an import for provisioning the eks component? And to be clear, when you say eks, I assume you mean eks/cluster, as linked here

cluster | The Cloud Posse Developer Hub

This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate profiles.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

There are three things I would double check:

• that you have a file under orgs, such as the following:

import:
- stacks/catalog/eks/cluster.yaml

• that the catalog file specifies a component name, or the component in its metadata:

components:
  terraform:
    eks/cluster: {}
Selçuk KUBUR avatar
Selçuk KUBUR

correct , eks/cluster here is under orgs.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

and lastly, I would assume you ran an atmos command such as: atmos terraform plan eks/cluster -s primetech-use2-dev

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

basically, if atmos isn't finding the component, it'll spit out the config for you to verify, but this likely means that the name you used can't be found, or the stack file in orgs doesn't match the expected tenant-environment-stage

Selçuk KUBUR avatar
Selçuk KUBUR

I run command , atmos terraform plan eks --stack primetech-main-primetech-euc1-prod

Selçuk KUBUR avatar
Selçuk KUBUR

I have updated atmos file to find my stage.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

so could you run a couple commands…
• atmos describe stacks --components eks --sections none
• atmos describe stacks --components eks/cluster --sections none

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

that should list out where the components are… if there are in fact any orgs importing them

Selçuk KUBUR avatar
Selçuk KUBUR

for eks, I can see a list of eks components,

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

gotcha… so, do you have any metadata blocks in the stacks/catalog/eks/cluster.yaml file?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

so, I would expect that you’d have something like:

components:
  terraform:
    eks:
      metadata:
        component: eks/cluster
Selçuk KUBUR avatar
Selçuk KUBUR

I didn't have that file; I just now created it under stacks/catalog/eks/

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I admit, there are a lot of ways to use atmos… so forgive me if my conventions seem strange. I’m following the practices from the atmos.tools site: https://atmos.tools/design-patterns/component-catalog

Component Catalog Atmos Design Pattern | atmos

Component Catalog Atmos Design Pattern

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it probably wants the vpc component to read the remote state from. You need to define it in Atmos stacks, or update the EKS component to provide the VPC parameters for the cluster

2024-03-20

Shiv avatar

Hi Team,

Could you explain how changing a component’s location or stacks location in our atmos/Cloud Posse setup affects its state? Are there any recommended practices or considerations we should be aware of when moving components to different locations or environments to ensure state is preserved? If there is documentation that goes into detail, that will do as well.

Thanks for your help.

Shiv avatar

particularly the debate around “one big state of shared resources” versus “tiny states with atmos workflows/CP dependencies”. How do we best balance these approaches, especially considering the overlap between data layer services and storage resources deployed with apps? My understanding is that, apart from core infrastructure components like security, VPC, EKS, and SSM keys, most other elements are app-specific dependencies. These can either be exclusive to an app or shared across several, impacting how we manage their lifecycle. Could we discuss how to efficiently organize and differentiate these resources within our state management strategy?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Shiv, someone will likely get back to you tomorrow. I am afk. TL;DR by default we compute the backend bucket prefix using the component’s relative path. If components change physical disk location, those relative paths change. We support overwriting this key prefix via configuration, if you want to ensure it’s static. Alternatively, files can be renamed via S3 (assuming that backend). Atmos does not automate relocation of state, due to the number of backends out there and the number of different types of potential state move operations.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In terms of backend architecture, there are two approaches we recommend. The one we chose by default is the simplest: a single backend. It simplifies a lot but has limitations. It’s complicated to give fine-grained IAM permissions across accounts and regions. However, when practicing GitOps this limitation is really moot, since it’s up to the CI/CD layer to enforce. From a multi-region DR perspective, replication of the S3 backend is possible, but DynamoDB replication, while possible, does not work. We have investigated this thoroughly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The better pattern is a hierarchical backend architecture. One root bucket keeps the state of all other buckets. Provision one bucket per account per region. This simplifies the DR and permissions story. It complicates the remote state data source story. We have a solution coming out for this in April.
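
As a rough illustration of the one-bucket-per-account-per-region idea, a stack-level backend override might look something like this (bucket, table, and region values are hypothetical):

terraform:
  backend_type: s3
  backend:
    s3:
      bucket: "acme-use2-dev-tfstate"              # hypothetical per-account/per-region bucket
      dynamodb_table: "acme-use2-dev-tfstate-lock" # hypothetical lock table
      region: "us-east-2"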

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When I say “bucket” I am referring to proper tfstate backends, like the ones provisioned by the Cloud Posse modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Shiv if you are asking about what happens if you change the location of the Terraform component in the components/terraform folder or the Atmos component in the stacks folder, here’s what happens:

  1. If you change the component location in components/terraform (e.g. from vpc to vpc2), then nothing will change. You just need to point the Atmos component to the new location of the Terraform component, e.g.
    components:
      terraform:
        vpc:
          metadata:
            # Terraform component. Must exist in `components/terraform` folder.
            component: "vpc2"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. You can change Atmos stack names and locations anytime w/o affecting anything (just update the imports if those imported files change)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. You can rename an Atmos component. For the backends, the Atmos component name is the workspace_key_prefix (for s3) and the prefix (for GCP). If you want to change the Atmos component name but keep the same key_prefix (to not destroy the already deployed component), you can use backend.s3.workspace_key_prefix
    components:
      terraform:
        vpc-new:
          backend:
            s3:
              workspace_key_prefix: vpc

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the example above, the old name of the Atmos component was vpc, and now we changed it to vpc-new but want to keep the same (old) workspace_key_prefix as vpc

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in general, changing stacks (names, locations, etc.) can be done anytime. The file names and folder structure are there for people to organize the stack configurations and keep them DRY

2024-03-21

2024-03-24

Rafael Oliveira avatar
Rafael Oliveira

Hi team, I’m looking to build a single ECS cluster on a multi-tenant setup which has multiple different domains. However, the component appears to be tied to a single DNS zone according to the children (dns-primary and dns-delegated)

As I have these domains on an external registrar, I’ve tried to disable dns-delegated but it throws the error below:

Error: Attempt to get attribute from null value
│ 
│   on main.tf line 9, in locals:
│    9:   acm_certificate_domain = try(length(var.acm_certificate_domain) > 0, false) ? var.acm_certificate_domain : try(length(var.acm_certificate_domain_suffix) > 0, false) ? format("%s.%s.%s", var.acm_certificate_domain_suffix, var.environment, module.dns_delegated.outputs.default_domain_name) : format("%s.%s", var.environment, module.dns_delegated.outputs.default_domain_name)

I’m assuming by checking the code that it’s a 1-1 relationship, but I’m wondering if I’m doing something wrong or there’s an alternative to use a single ECS cluster with multiple (isolated) clients.

References:

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ecs

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dns-primary

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dns-delegated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We distinguish between what we call service discovery domains and vanity domains

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our components predominantly deal with service discovery domains. Then we associate vanity domains with the SD domain.

Rafael Oliveira avatar
Rafael Oliveira

@Erik Osterman (Cloud Posse) thanks for the answer. If I understood correctly, I’ll have a single service discovery domain to communicate between the ECS containers and then I can work through multiple vanity domains, right? I’ll move forward with this setup, thank you!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so each cluster’s ALB gets a DNS name via the service discovery domain. Then how you route it could depend on if you’re using Global Accelerators, CloudFront, API Gateways, or other ALBs, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Because we support all of these various configurations, we’ve needed to be very flexible


2024-03-25

Marat Bakeev avatar
Marat Bakeev

Hi everyone, atmos newbie here. Could anyone explain why I would want to exclude providers.tf in atmos’s vendor.yaml file? The example file excludes it, and I’ve been blindly copying it ever since, but want to understand the why… thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

any examples where some files are excluded are just examples to show how to do it. You would want to exclude a file from vendoring (downloading) if you have the same file with your custom logic/code and you don’t want to override it with the vendored code. You also might want to exclude some other artifacts, for example docs or images, if you don’t want to download them. In short, you would exclude files if:

  1. You already have the same files with custom code and don’t want to override them
  2. You would download the same file from another source (e.g. vendor the entire component from one remote source, but exclude a file, then vendor the file from another remote or local path)
  3. You don’t want to download some files from remote sources (b/c they are not directly related to the code)
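
As a rough sketch of what such an exclusion can look like in vendor.yaml (the source, version, and paths are illustrative, not a specific recommendation):

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendor-config
spec:
  sources:
    - component: "vpc"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}"
      version: "1.398.0"
      targets:
        - "components/terraform/vpc"
      excluded_paths:
        - "**/providers.tf"   # keep the locally customized providers.tf instead of the vendored one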

2024-03-26

Roy avatar

Hey Guys! I’m currently exploring Atmos for our company. I have one question regarding a stack’s vars sections – is there any equivalent of jsonencode (or just the possibility of passing raw YAML)? As we also use TF to configure our SaaS solutions, it is sometimes impossible to fit into the TF type system; in those cases we stringify the input, so this functionality is quite a hard requirement. I couldn’t find any clue on the web. Thanks for any info!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Roy this can be done in a few ways:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Using YAML multi-line strings (https://yaml-multiline.info/)
    components:
      terraform:
        my-component:
          vars:
            var1: |
              <YAML or JSON here>

YAML Multiline Strings

Find the right syntax for your YAML multiline strings.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use raw YAML or raw JSON in a YAML multi-line variable, and it will be sent to the Terraform var as a string (which you can then use verbatim or decode)
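
On the Terraform side, decoding that string back into a structure could look roughly like this (the variable name is just for illustration):

variable "var1" {
  type        = string
  description = "Raw YAML or JSON passed in as a string"
}

locals {
  # try JSON first, fall back to YAML
  var1_decoded = try(jsondecode(var.var1), yamldecode(var.var1))
}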

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Since Atmos supports Sprig functions in Atmos templates, you can use any functions from https://masterminds.github.io/sprig/, including the JSON functions https://masterminds.github.io/sprig/defaults.html
Sprig Function Documentation

Useful template functions for Go templates.

Default Functions

Useful template functions for Go templates.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. You can combine the two methods (using YAML multi-line strings, Go templates, and Sprig functions). For example, something like this
    components:
      terraform:
        my-component:
          settings:
            my-config:
              c1: "a"
              c2: "b"
              c3:
                d1: 1
                d2: 2
          vars:
            var2:
              a: 1
              b: 2
            var1: "{{ toJson .vars.var2 }}"
            var3: "{{ toJson .settings.my-config }}"
            var4: |
              <YAML or JSON here>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if any help is needed, thanks

Roy avatar

hey @Andriy Knysh (Cloud Posse), thanks a lot! I can see some possible disadvantages:

  1. I probably won’t be able to use validation capabilities (at least with jsonschema), will I?
  2. Usage of settings can be tricky for other users of the code.
  3. It cannot work like var2 to var1, as it will still result in a type error for var2. I will explore it further on real-world examples and let you know, probably not sooner than next week. Best!
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

JSON schema validation is based on the final values, so it should work fine
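
For example, component validation against a JSON Schema is configured in the component’s settings; a minimal sketch (the component name and schema path are hypothetical):

components:
  terraform:
    my-component:
      settings:
        validation:
          validate-my-component-vars:
            schema_type: jsonschema
            schema_path: "my-component/validate-my-component.json"
            description: "Validate the component's variables (including the stringified YAML/JSON) against a JSON Schema"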


2024-03-27

2024-03-28

Monish Devendran avatar
Monish Devendran

Is there an atmos docker image?

Chris King-Parra avatar
Chris King-Parra

I’m not sure but there is a Dockerfile in the quick-start repo here: https://github.com/cloudposse/atmos/tree/master/examples/quick-start

2024-03-29

2024-03-31

Chris King-Parra avatar
Chris King-Parra

Where do I configure which accounts correspond to dev/stage/prod?

jose.amengual avatar
jose.amengual

those are variables that are passed to your provider

jose.amengual avatar
jose.amengual

depends on what role_arn you use etc

Chris King-Parra avatar
Chris King-Parra

So right now I’m using a conditional in the provider block to choose a different account profile name based on the input of var.stage.

Does this look right? https://github.com/kingparra/atmos-test

kingparra/atmos-test
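
A conditional provider block like the one described might look roughly like this (the profile names are hypothetical):

provider "aws" {
  region  = var.region
  # pick a different named AWS profile per stage
  profile = var.stage == "prod" ? "acme-prod" : "acme-nonprod"
}
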
jose.amengual avatar
jose.amengual

looks right, you can use profiles or role_arn, whichever is better for you

Chris King-Parra avatar
Chris King-Parra

Thanks for your help. I’m sure I’ll be pestering this channel with a lot of questions as I figure out how to use this thing.

jose.amengual avatar
jose.amengual

no problem
