#atmos (2023-12)

2023-12-04

Leo avatar

Hi folks, I’m trying out atmos (exciting times! :rocket:) and have a few questions that I’m hoping you can help with:

  1. How do I pass the outputs of one component as inputs to another? (within the same stack)
  2. Does atmos automatically resolve dependencies? For example: say I have a stack with components A and B, with B taking the outputs of A as inputs. Component A changes and so do its outputs: can atmos redeploy B automatically? Similar question: does atmos have a command to deploy all components in a stack?
  3. Where can I find the full schema for stacks, workflows and the atmos.yaml file?
  4. Do you know where I could find a production-ready repo that uses atmos?
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hey @Leo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


How do I pass the outputs of one component as inputs to another? (within the same stack)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://atmos.tools/core-concepts/components/remote-state - Atmos does not use YAML to define component remote state; it uses the TF remote-state module, and YAML to define the input parameters

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Does atmos automatically resolve dependencies?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos describe dependents | atmos

This command produces a list of Atmos components in Atmos stacks that depend on the provided Atmos component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use settings.depends_on for the components
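
For example, a minimal sketch (component names are hypothetical) of declaring the dependency in a stack manifest:

components:
  terraform:
    component-b:
      settings:
        depends_on:
          1:
            # 'component-b' depends on 'component-a' in the same stack
            component: "component-a"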

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


does atmos have a command to deploy all components in a stack?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use Atmos workflows https://atmos.tools/core-concepts/workflows/, and/or Atmos custom commands https://atmos.tools/core-concepts/subcommands/ - a workflow can call any script, any Atmos native or custom command, etc. Similarly, a custom command can call any script, any workflow, or Atmos native or other custom commands

Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

Atmos Subcommands | atmos

Atmos can be easily extended to support any number of custom commands, what we call “subcommands”.
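
For instance, a minimal workflow sketch (stack and component names are hypothetical) that deploys two components in order:

workflows:
  deploy-plat-ue2-dev:
    description: Deploy the components of the plat-ue2-dev stack, in order
    steps:
      # each step runs an Atmos command by default
      - command: terraform deploy vpc -s plat-ue2-dev
      - command: terraform deploy eks -s plat-ue2-dev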

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Where can I find the full schema for stacks, workflows

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on that (and on supporting the schemas in IDEs and in the https://atmos.tools/cli/commands/validate/stacks command); we’ll have a new release this week with all the support and all the docs

atmos validate stacks | atmos

Use this command to validate all Stack configurations.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Do you know where I could find a production-ready repo that uses atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

an example of such a repo (not a full-blown infra, something to start with) will be in the new release as well. Take a look at https://atmos.tools/category/quick-start which describes the steps to get started

Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Leo avatar

Hi @Andriy Knysh (Cloud Posse), thanks for your help: I’ve got an atmos POC up and running now! :rocket:

I see that settings.depends_on is quite handy when describing components - does it play any role during terraform apply too?

Looking forward to the new release!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

settings.depends_on is not used directly in terraform apply, but it’s used in https://atmos.tools/cli/commands/describe/dependents, so you can create a custom command (and/or workflow) to use describe dependents and then plan/apply all the dependencies

atmos describe dependents | atmos

This command produces a list of Atmos components in Atmos stacks that depend on the provided Atmos component.
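
For example, a sketch of a custom command that applies everything that depends on a given component (a sketch assuming the JSON output of describe dependents includes the component and stack fields, as the docs’ examples show; names are hypothetical):

commands:
  - name: apply-dependents
    description: Apply all components that depend on the given component
    arguments:
      - name: component
        description: Name of the component
    flags:
      - name: stack
        shorthand: s
        description: Name of the stack
        required: true
    steps:
      - |
        atmos describe dependents {{ .Arguments.component }} -s {{ .Flags.stack }} --format json |
          jq -r '.[] | .component + " " + .stack' |
          while read -r component stack; do
            atmos terraform deploy "$component" -s "$stack"
          done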

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

depends_on is also used in CI/CD, e.g. GitHub Actions https://github.com/cloudposse?q=action-atmos&type=all&language=&sort= , or for Spacelift to automatically create stack dependencies https://docs.spacelift.io/concepts/stack/stack-dependencies from Atmos manifests

Stack dependencies - Spacelift Documentation

Collaborative Infrastructure For Modern Software Teams

1
Alcp avatar

Hi, I have a couple of questions

  1. The stack pattern {tenant}-{environment}-{stage}: is it possible to use different delimiters other than "-", because my stage or tenant names have dashes in them?
  2. How do I deploy all components in a stack with atmos command? workflow is the only option?
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because my stage or tenant names have dashes in them

that will work with the dash in the stack name pattern
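
For reference, the stack name pattern is configured in atmos.yaml, e.g.:

stacks:
  # the context delimiter is part of the pattern itself
  name_pattern: "{tenant}-{environment}-{stage}"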

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


How do I deploy all components in a stack with atmos command? workflow is the only option?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

workflows or custom commands. We could add a native atmos command terraform deploy all or similar, but since it could be very different for diff use-cases, you can do the same with custom commands according to your needs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


deploy all components in a stack

that would be very different for diff use cases even if you used plain terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with custom commands, you can do something like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

add these commands to atmos.yaml:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# Custom CLI commands
commands:

  - name: list
    description: Execute 'atmos list' commands
    # subcommands
    commands:
      - name: stacks
        description: |
          List all Atmos stacks.
        steps:
          - >
            atmos describe stacks --sections none | grep -e "^\S" | sed s/://g
      - name: components
        description: |
          List all Atmos components in all stacks or in a single stack.

          Example usage:
            atmos list components
            atmos list components -s plat-ue2-dev
            atmos list components --stack plat-uw2-prod
            atmos list components -s plat-ue2-dev --type abstract
            atmos list components -s plat-ue2-dev -t enabled
            atmos list components -s plat-ue2-dev -t disabled
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: false
          - name: type
            shorthand: t
            description: Component types - abstract, enabled, or disabled
            required: false
        steps:
          - >
            {{ if .Flags.stack }}
              {{ if eq .Flags.type "enabled" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == true)' | jq -r .key
              {{ else if eq .Flags.type "disabled" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == false)' | jq -r .key
              {{ else if eq .Flags.type "abstract" }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json | jq '.[].components.terraform | to_entries[] | select(.value.metadata.type == "abstract")' | jq -r .key
              {{ else }}
                atmos describe stacks --stack {{ .Flags.stack }} --format json --sections none | jq ".[].components.terraform" | jq -s add | jq -r "keys[]"
              {{ end }}
            {{ else }}
              {{ if eq .Flags.type "enabled" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == true)' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else if eq .Flags.type "disabled" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.vars.enabled == false)' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else if eq .Flags.type "abstract" }}
                atmos describe stacks --format json | jq '.[].components.terraform | to_entries[] | select(.value.metadata.type == "abstract")' | jq -r '[.key]' | jq -s 'add' | jq 'unique | sort' | jq -r "values[]"
              {{ else }}
                atmos describe stacks --format json --sections none | jq ".[].components.terraform" | jq -s add | jq -r "keys[]"
              {{ end }}
            {{ end }}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then implement another custom command to use these custom commands:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: deploy-all
        description: This command deploys all components in a stack
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        steps:
          - |
            # a runnable sketch: loop through all components in the given stack
            # (via the 'atmos list components' custom command above) and deploy each
            for component in $(atmos list components -s {{ .Flags.stack }} -t enabled); do
              atmos terraform deploy "$component" -s {{ .Flags.stack }}
            done
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can modify and improve this command anytime (e.g. execute each step sequentially or concurrently, outputting debug/trace messages) w/o waiting on native implementation in Atmos (which is not easy to do to cover all possible use cases)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then execute atmos terraform deploy-all -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can combine custom commands with workflows, or workflows with custom commands in any combination
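
For example, a sketch of a workflow (names are hypothetical) that calls the terraform deploy-all custom command defined above, followed by a plain shell step:

workflows:
  deploy-and-report:
    description: Call a custom command from a workflow
    steps:
      - command: terraform deploy-all -s plat-ue2-dev   # Atmos custom command
      - command: echo "deploy-all finished"
        type: shell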

Alcp avatar

Thanks for the details…will get back after trying it

2023-12-06

2023-12-08

Alcp avatar

Hi few more questions

  1. Can Atmos stacks be used with Terraform Enterprise? Is there any documentation about that integration?
  2. Could templates be used in stack configuration files?
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Could templates be used in stack configuration files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Can Atmos stacks be used with Terraform Enterprise? Is there any documentation about that integration?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what is special about Terraform Enterprise? The TF backend? Atmos can auto-generate the TF backend from the stack manifests. But you can disable the auto backend generation and add a separate backend config file to each Terraform component (so you can use any backend besides those directly supported by Atmos)
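
For reference, a minimal sketch of that toggle in atmos.yaml:

components:
  terraform:
    # set to false to manage backend.tf (or backend.tf.json) in each component yourself
    auto_generate_backend_file: false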

Alcp avatar

@Andriy Knysh (Cloud Posse) thanks for this feedback. My follow-up question is: let’s say we develop catalogs & stacks based on atmos configuration... in the future, would we be able to use the same with Terraform Enterprise? Hopefully this question makes some sense, I am totally ignorant of Terraform Enterprise capabilities. I am trying to figure out, if we go down the path with atmos and Terraform Enterprise becomes an option in the future, whether we would have to start over or could reuse any of the existing work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Alcp the short answer to “if we develop catalogs & stacks based on atmos configuration..in future would we be able to use the same with Terraform enterprise” is yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the long answer:

Terraform Enterprise is a self-hosted version of Terraform Cloud.

With regular OSS Terraform, you specify the backend, for example s3, in a file similar to this (in the file backend.tf.json):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
{
  "terraform": {
    "backend": {
      "s3": {
        "acl": "bucket-owner-full-control",
        "bucket": "xxx-ue2-root-tfstate",
        "dynamodb_table": "xxx-ue2-root-tfstate-lock",
        "encrypt": true,
        "key": "terraform.tfstate",
        "region": "us-east-2",
        "role_arn": "arn:aws:iam::xxxxx:role/xxx-gbl-root-terraform",
        "workspace_key_prefix": "my-component"
      }
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or, you add a setting to atmos.yaml to auto-generate the backend files automatically for each component (so there is no need to deal with the backend files)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now, with Terraform Cloud (and I guess with Terraform Enterprise), they have introduced the cloud block to be used instead of the backend block, similar to:

terraform {
  cloud {
    organization = "example_corp"
    ## Required for Terraform Enterprise; Defaults to app.terraform.io for Terraform Cloud
    hostname = "app.terraform.io"

    workspaces {
      tags = ["app"]
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can define it in the file backend.tf (or backend.tf.json) in each terraform component folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently, Atmos does not support autogeneration of TF cloud blocks (so it has to be added manually), but we are planning to add it to Atmos soon

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to summarize:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. When using the s3 backend, you can 1) provide the backend file for each component; or 2) let Atmos auto-generate it for each component. The s3 backend can be used with the regular OSS Terraform, on the command line, or in CI/CD systems like GitHub Actions, Spacelift etc.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. When using Terraform Cloud or Terraform Enterprise, you will probably use the cloud block. Right now, you can provide the cloud block for each component in a separate file, and we’ll add the auto-generation of the cloud block to Atmos soon
Alcp avatar

Ok cool, thanks again for answering. So with the backend change, the way we deploy stacks with atmos stays the same?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, I think the same (the only diff would be the backend config, but it’s not related to components and stacks)

Alcp avatar

Okay nice, I like the atmos way of organizing components/catalog/stacks, so it makes perfect sense to go with the monorepo setup with atmos; in the future the main impact would be the backend, if Enterprise comes into the picture

Alcp avatar

Once again thanks for the help and support.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, when running in SaaS systems like Spacelift or Terraform Cloud, they don’t know anything about Atmos (nor terragrunt or other wrappers), they just execute terraform commands. But we use the hooks that they provide to execute the atmos terraform generate varfile (https://atmos.tools/cli/commands/terraform/generate-varfile) and atmos terraform generate backend (https://atmos.tools/cli/commands/terraform/generate-backend) commands before they execute the terraform plan/apply commands

atmos terraform generate varfile | atmos

Use this command to generate a varfile (.tfvar ) for an Atmos terraform component in a stack.

atmos terraform generate backend | atmos

Use this command to generate a Terraform backend config file for an Atmos terraform component in a stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Hooks - Terraform Cloud Agents - Terraform Cloud and Terraform Enterprise | Terraform | HashiCorp Developer

Terraform Cloud Agent hooks are custom programs that run at strategic points during Terraform runs.

2023-12-09

Release notes from atmos avatar
Release notes from atmos
03:14:34 PM

v1.51.0 what

Add examples/quick-start folder with an example infrastructure configuration

Update Quick Start docs

Add JSON Schema for Atmos manifests validation. Update atmos validate stacks to use the Atmos manifests validation JSON Schema. Update docs:

https://atmos.tools/reference/schemas/ https://atmos.tools/cli/commands/validate/stacks/…

Release v1.51.0 · cloudposse/atmos

what

Add examples/quick-start folder with an example infrastructure configuration

Update Quick Start docs

Add JSON Schema for Atmos manifests validation. Update atmos validate stacks to use t…

Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

atmos validate stacks | atmos

Use this command to validate all Stack configurations.

RB avatar

Will root components ever be available for brownfield environments?

RB avatar

Thinking specifically about the account component so the account-map and aws-team* components can be used without a new org

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Well, those are precisely the components that won’t work well in brownfield, at least the brownfield we plan to address.

However, our plan for next quarter (2024) is to refactor our components for à la carte deployment in brownfield settings. E.g. an enterprise account management team issues your product team 3 accounts from a centrally managed Control Tower. You want to deploy Cloud Posse components and solutions in those accounts, without any sort of shared s3 bucket for state, no account map, no account management.

1
RB avatar

Oh very cool! Looking forward to it

RB avatar

Would it be a difficult transition to migrate from cloudposse atmos the way it’s set up now to the new landing zone setup?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It can be done in varying degrees of adoption. We will be updating all components to write to SSM, and have set up a mechanism to replicate designated shared SSM parameters across all accounts and regions

2
jose.amengual avatar
jose.amengual

Keep in mind that some enterprises do allow people to create roles (not users), so whatever solution you guys create should allow creating the roles and should support permission boundaries

2
RB avatar

Hmm if everything is saved to ssm does that mean the components will rely less on remote state so the remote-state.tf files will be replaced by data sources and ssm params?

1
RB avatar

One gap I’ve noticed in the readmes is that the outputs (and in the future SSM params), which are used by the remote-state modules in descendant components, are not documented in the readme.

It’s true that the outputs.tf is documented and can be looked at, but that only helps if the values are single scalar values. If they are maps, then it’s far harder to see what the output looks like unless you have access to an environment where it’s already deployed.

It would be nice to have a separate section in the readme to show the full sample output structure somehow.

I’m trying to rework the components for the brownfield env, and one of the hardest things I’m trying to figure out is how to recreate the outputs for the account component so the descendant components will be able to read its remote state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It will all be wrapped in reusable modules

2
jose.amengual avatar
jose.amengual

none of the outputs of the account-map component are really secrets, so it could be possible to just output them to SSM in all accounts and then use that info to feed back into the existing modules without many changes

1
RB avatar

for me, i’m trying to do it for the account component since i do not know the format of the account-map output

unless there is an easy way to construct the account-map without the account component ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) do you recall how we could add existing accounts to the account component? I think we did that for one customer that was using Control Tower.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This was without importing.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think we had to specify the account_id in the YAML or something

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos supports a backend of type static where we just define the values as if they were provisioned by other components

this1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it’s possible to “model” the entire account-map by using the static backend (as if the account-map was a real provisioned component)
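
A sketch of that pattern in a stack manifest (the outputs shown are hypothetical placeholders):

components:
  terraform:
    account-map:
      # serve hardcoded values as if they were outputs of a provisioned component
      remote_state_backend_type: static
      remote_state_backend:
        static:
          full_account_map:
            core-root: "111111111111"
            plat-dev: "222222222222"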

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s not documented in https://atmos.tools/, need to add a section about it)

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

RB avatar

Thats nice! I didn’t know about that type

The root issue is that we don’t know what the values need to look like for the component output. If we had that, then constructing a local account or account-map replacement would be straightforward

RB avatar

The other alternative, probably faster, is to add a way to NOT create an org or an account, for the account component

          organizational_units:
            - name: core
              enabled: false              # this will prevent creation of the OU
              accounts:
                - name: core-artifacts
                  enabled: false          # this will prevent creation of the account
                  tenant: core
                  stage: artifacts
                  tags:
                    eks: false
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "organization_enabled" {
RB avatar

thats only for organization tho, not all accounts? or maybe im mistaken? ill give it a try! ty andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, that’s for Org only (to not create it if you want to use an existing one)

RB avatar

ah ok so ill need to do it for accounts too. ill play with it, if i can get it to work, ill upstream my changes

1
jose.amengual avatar
jose.amengual

yesterday I imported 2 existing accounts into the account component

jose.amengual avatar
jose.amengual

I’m about to apply soon, so I can keep you posted

RB avatar

Did the emails have to align to import them without recreating?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual did you fix the issues you were having with the import?

jose.amengual avatar
jose.amengual

yes, I posted it here

jose.amengual avatar
jose.amengual

pretty sure it’s a TF bug

jose.amengual avatar
jose.amengual

yes they have to

jose.amengual avatar
jose.amengual

example :

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_organizations_account.organizational_units_accounts["pepe-nonprod"] must be replaced
-/+ resource "aws_organizations_account" "organizational_units_accounts" {
      ~ arn                        = "arn:aws:organizations::111111111:account/o-za5r5sz4rw/22222222222" -> (known after apply)
      + close_on_deletion          = false
      + create_govcloud            = false
      ~ email                      = "[email protected]" -> "[email protected]" # forces replacement
      + govcloud_id                = (known after apply)
      + iam_user_access_to_billing = "ALLOW"
      ~ id                         = "22222222222" -> (known after apply)
      ~ joined_method              = "INVITED" -> (known after apply)
      ~ joined_timestamp           = "2023-11-09T20:34:30Z" -> (known after apply)
      ~ name                       = "pepe-Non Prod" -> "pepe-nonprod" # forces replacement
      ~ status                     = "ACTIVE" -> (known after apply)
      ~ tags                       = {
          + "Environment"   = "global"
          + "Name"          = "pepe-nonprod"
          + "Namespace"     = "pepe"
          + "Stage"         = "root"
          + "component"     = ""
          + "contact"       = "[email protected]"
          + "expense-class" = ""
        }
      ~ tags_all                   = {} -> (known after apply)
        # (1 unchanged attribute hidden)
    }
jose.amengual avatar
jose.amengual

I’m changing the account name (because it has a space) and the email

RB avatar

Ah so the emails did change

RB avatar

And that’s causing the accounts to recreate…

jose.amengual avatar
jose.amengual

these accounts have emails that do not match the new standard

jose.amengual avatar
jose.amengual

so that needs to be changed

RB avatar

I wonder if adding a new key to the yaml to override the email convention to no-op the tf would do the trick

RB avatar

I think an HCL change would be needed too, but I think that would do it

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

by the way @RB the import worked after matching email and account name and I was able to add it to account map and sso and such

RB avatar

right but you had to match the email which means you had to update the email in the account itself to no-op it ?

RB avatar

I don’t want to change the email to the cp convention. I just want to reuse the existing accounts

jose.amengual avatar
jose.amengual

I think you can add to the account component an email attribute to override

RB avatar

beautiful! exactly what i wanted

jose.amengual avatar
jose.amengual

you will just need to change this:

# Provision Accounts for Organization (not connected to OUs)
resource "aws_organizations_account" "organization_accounts" {
  for_each                   = local.organization_accounts_map
  name                       = each.value.name
  email                      = format(each.value.account_email_format, each.value.name)
  iam_user_access_to_billing = var.account_iam_user_access_to_billing
  tags                       = merge(module.this.tags, try(each.value.tags, {}), { Name : each.value.name })

  lifecycle {
    ignore_changes = [iam_user_access_to_billing]
  }
}
RB avatar

hmm I don’t think that will work unless the email contains the full name of the account

RB avatar

it will take some tweaking but i can make it work

RB avatar

ill also need to change the other resources too like the OUs

jose.amengual avatar
jose.amengual

why can’t it just be read from the organizational_units if defined?

jose.amengual avatar
jose.amengual
- name: security
  accounts:
    - name: log-archive
      email: [email protected]
      stage: log-archive
    - name: sec-tooling
      stage: sec-tooling
    - name: audit
      stage: audit
jose.amengual avatar
jose.amengual

I did something similar for the Child OUs support I added

jose.amengual avatar
jose.amengual
# Provision Accounts connected to Organizational Units
resource "aws_organizations_account" "organizational_units_accounts" {
  for_each                   = local.organizational_units_accounts_map
  name                       = each.value.name
  parent_id                  = each.value.parent_ou != "none" ? aws_organizations_organizational_unit.child[each.value.ou].id : aws_organizations_organizational_unit.this[local.account_names_organizational_unit_names_map[each.value.name]].id
  email                      = try(format(each.value.account_email_format, each.value.name), each.value.account_email_format)
  iam_user_access_to_billing = var.account_iam_user_access_to_billing
  tags                       = merge(module.this.tags, try(each.value.tags, {}), { Name : each.value.name })

  lifecycle {
    ignore_changes = [iam_user_access_to_billing]
  }
}
jose.amengual avatar
jose.amengual

look at parent_id

1
jose.amengual avatar
jose.amengual

if you add the email to local.organizational_units_accounts_map when it’s defined in the account, you could just pass that or default to each.value.account_email_format

jose.amengual avatar
jose.amengual

right here:

# Organizational Units' Accounts list and map configuration
organizational_units_accounts = flatten([
  for ou in local.organizational_units : [
    for account in lookup(ou, "accounts", []) : merge({ "ou" = ou.name, "account_email_format" = lookup(ou, "account_email_format", var.account_email_format), parent_ou = contains(keys(ou), "parent_ou") ? ou.parent_ou : "none" }, account)
  ]
])
jose.amengual avatar
jose.amengual

line 16 of the component

jose.amengual avatar
jose.amengual

sorry, I would send you the link to the code, but I’m doing something else

RB avatar

Looks promising. I’ll dig more into that soon. Thanks Pepe! I’m hoping to put in a pr for it all this week and that should unlock all the other components

jose.amengual avatar
jose.amengual

definitely, and if you feel like it: I think that local could use some better formatting, and some of those lookups could be moved to other locals to make it easier to read

RB avatar

I have account and account-map working in a brownfield env. Could someone give me a review on the upstreamed changes?

https://github.com/cloudposse/terraform-aws-components/pull/943

#943 feat: use account component for brownfield env

what

• feat: use account component for brownfield env

why

• Allow brownfield environments to use the account component without managing org, org unit, or account creation

• This will allow adding to the account outputs so account-map can ingest it without any changes

references

• Slack conversations in sweetops https://sweetops.slack.com/archives/C031919U8A0/p1702135734967949

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB thanks for the PR, looks good, a few comments

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB can you update the README with an example of how to configure brownfield accounts in the YAML?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(otherwise this will be lost)

RB avatar

Thanks @Andriy Knysh (Cloud Posse) and @Erik Osterman (Cloud Posse) for reviewing. I made the necessary changes, tested in a few cases, and updated the readme to reflect the additions and how to use them.

Please re-review.

2
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@RB cc @Andriy Knysh (Cloud Posse) I’m very wary of modifying account like this, because it is extremely difficult to test with greenfield deployments, and we have seen plenty of issues with mixing data sources and resources, even when we try to be careful with enabling/disabling them via count.

Can we resolve this instead by better documenting the required output of account and showing people how to supply that to remote-state as input to account-map?

We can modify account-map to remove the need for anything but account_info_map from accounts.

RB avatar

Thanks for reviewing Jeremy.

Could the PR’ed version be tested with current greenfield environments by cloudposse? I’m almost positive it would result in no changes.

However, if you folks don’t want to risk it: even if the account component output were documented, we’d still need a component to produce output in that format, which would end up being a different flavor of the component.

Would you folks then be willing to allow an upstream of a separate component (a brownfield flavor of account) instead of modifying the existing one to work with both existing and new accounts?

RB avatar

The hope is that we could upstream this to unlock a new suite of users for atmos and the cloudposse refarch. Up until now, brownfield users, the majority of cloud users, have been excluded, finding it difficult to adopt the components without creating their own due to the dependence on the core components

this2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So the changes in this PR are directly in line with our stated objectives to support brownfield environments. Our medium-term objective (big project) is to eliminate reliance on the account and account-map components. However, in order to accommodate brownfield environments in the shorter term, I am amenable to this type of change by @RB. @Jeremy G (Cloud Posse) brings up good points about the challenges of working with count, but I wonder if that matters a whole lot less, taking into account that destruction of accounts barely happens.

I think @Jeremy G (Cloud Posse) brings up a good point about:

Can we resolve this instead by better documenting the required output of account and showing people how to supply that to remote-state as input to account-map?

This is how we previously solved it. Although keeping it in account makes it a bit more consistent from a DX perspective.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe @Ben Smith (Cloud Posse) @Dan Miller (Cloud Posse) @Jeremy White (Cloud Posse) can test this PR in one of our upcoming implementations.

2
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

My concern with testing account is that we have to test that it works to create accounts where none but the root exist, and if we find a problem, we don’t have an easy way to create a new test environment. This has made it the most difficult component to get right. Adding data sources that reference the same resources that the component creates is asking for hard-to-find and hard-to-fix bugs, because data sources fail when they reference resources that do not exist, which is only going to be the case when we are first provisioning accounts; it will not suffice to ensure the component works after accounts have already been created. Because of these things, I am very reluctant to make substantial changes to account.

If you wanted to create a separate component, say account-data, and build it to support brownfield only, I would be more open to that. You deploy account to create and manage accounts and organizations, and account-data to use existing ones. While it would mean maintaining 2 components in sync, account is very stable, so it should not be a big job to keep them in sync, and we can and will resolve this issue differently in the v2 architecture.

I also note that the PR regarding changes to account-map is unacceptable as-is because it removes support for the existing root account having a different name in AWS than it does in Atmos stacks. It’s kind of subtle, but it points to the kind of trouble we can be getting ourselves into without noticing.


2023-12-10

2023-12-11

Amit avatar

Hi everyone, can someone help me with the account-map config or provide a good tutorial? I am new here, trying to test the atmos tool locally with my AWS account. I was following the tutorial https://atmos.tools/category/quick-start but now I am getting the below error

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in https://atmos.tools/category/quick-start, we used just a few TF components to show how to configure Atmos. The quick start does not show a full infrastructure b/c it would be too big for a quick start

Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the account-map component is used to store information about all the accounts https://atmos.tools/category/quick-start

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

before account-map, the account component needs to be provisioned https://github.com/cloudposse/terraform-aws-components/tree/main/modules/account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, the account-map component is used in all other components to get the terraform IAM roles, for example https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/providers.tf#L17

  source  = "../account-map/modules/iam-roles"
Amit avatar

Hi @Andriy Knysh (Cloud Posse), thanks for sharing this information. Does this mean it’s mandatory to use the account module and provision the whole AWS Organization, AWS accounts, roles, etc.?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use your own terraform modules

Amit avatar

So if I already have an AWS account, how do I proceed in that case?

Amit avatar

Can I still use account-map to store the information about accounts and roles?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are many diff things here. I’ll try to explain:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Atmos is not involved here, it’s all about terraform modules and components (root-modules)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. If you are using Cloud Posse terraform components https://github.com/cloudposse/terraform-aws-components/tree/main/modules, then all of them use the account and account-map modules to provision the accounts and store info about accounts
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. The components use the information about the accounts and roles from the account-map component, for example https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/providers.tf
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. If you are using your own terraform modules/components, then it’s not required to use account-map, but in this case you need to provide a role for Terraform to assume to be able to access the resources in the account, as done here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc/providers.tf#L7
  dynamic "assume_role" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  5. If you want to use the caller role (the role you assume when executing terraform plan/apply) as the Terraform role, then you don’t need to specify a separate role for the aws provider, and the code becomes
    provider "aws" {
      region = var.region
    }
    
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so here you need to decide which IAM roles you want to use and how to configure all of them. Do you want to use the caller role, or do you want to specify separate Terraform and Terraform backend roles to access the resources in different accounts and to access the TF backend?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so each component then just gets the required IAM role for Terraform to assume when accessing the resources in the corresponding account (e.g. https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ecr/providers.tf)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is definitely a complicated topic. If you use or want to use the Cloud Posse components, then yes, you should use the account-map component to store the info b/c all other components use it to get the IAM roles (they usually differ across accounts, e.g. dev, staging, prod)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using your own Terraform modules and components, then it’s not necessary to use account-map, and as mentioned above, you can provide your own roles in

provider "aws" {
  region = var.region

  assume_role {
      role_arn = <role to access the AWS resources in the account>
    }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or, if you want to use the caller role (the role that YOU assume when executing terraform plan/apply), then the provider code becomes just

provider "aws" {
  region = var.region
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in other words, if you already have an account, you just want to provision a VPC as shown in the Atmos quick start, and you want to use your caller role, then you don’t need the account-map module. This is simple and you can play with it, but obviously it’s not a production-ready configuration (a production-ready configuration is all about roles, permissions and access control)

Amit avatar

Thank you @Andriy Knysh (Cloud Posse) for this information, now I get it. I will play with it more and hopefully I will go to production with roles and permissions

Amit avatar

Do you have an article that can help with setting up roles and permissions using account-map?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no such public docs, but that involves not only account-map, but also other components like aws-teams and aws-team-roles if you are planning to use the Cloud Posse components https://github.com/cloudposse/terraform-aws-components/tree/main/modules. Or you can implement the roles yourself (starting with a simpler setup, e.g. diff terraform roles for dev, prod and staging)

2023-12-12

jose.amengual avatar
jose.amengual

using atmos one of my coworkers got this:

 Error: Plugin did not respond
│ 
│   with module.account_map.data.utils_component_config.config[0],
│   on .terraform/modules/account_map/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
1
jose.amengual avatar
jose.amengual

@marcelo.eguino

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i answered you in DM what could cause it

jose.amengual avatar
jose.amengual

atmos validate stacks comes out clean

jose.amengual avatar
jose.amengual

we are using the same repo

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@jose.amengual are you still blocked?

jose.amengual avatar
jose.amengual

no, this is related to my coworkers using a Mac M2 CPU

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

I meant if Andriy’s answer via DM solved the issue?

jose.amengual avatar
jose.amengual

yes

1
jose.amengual avatar
jose.amengual

FYI

jose.amengual avatar
jose.amengual

in docker on a Mac M2/M1 that fixes that issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice, thank you, we need to document this

kevcube avatar
kevcube

Docker recently published an update saying it resolved most Rosetta/x86 emulation issues. Hopefully this issue is included

2023-12-13

2023-12-14

2023-12-19

kevcube avatar
kevcube

~Terraform is failing under atmos because a vars file is missing, but it is a vars file that atmos doesn’t mention.. see snippet~

kevcube avatar
kevcube

oh sorry, just noticed I was inside an atmos terraform shell and trying to run atmos against a different stack

RB avatar

ah that will do it

RB avatar

maybe the PS1 can be modified when in the atmos terraform shell so it’s more obvious that you’re in a subshell for a specific region dash component (stack)

1
RB avatar
#495 Make it more obvious in the terminal when launching the subshell via `atmos terraform shell`

Describe the Feature

Sometimes we’re in the subshell and forget that the subshell is specific to a stack and then try to plan/apply a different stack which leads to errors.

If the PS1 or something can be modified to show that atmos is in the atmos terraform shell mode, it would be very helpful.

Expected Behavior

For example, if you’re in the shell stack ue1-eks, then atmos should throw a warning if you’re trying to run atmos within the shell against another stack.

Use Case

Forgetting that you’re in a subshell

Describe Ideal Solution

Maybe a warning or a PS1 change to show you’re in a subshell

Alternatives Considered

No response

Additional Context

No response

2

2023-12-20

shmileee avatar
shmileee

Guys, I’m trying to follow this tutorial and immediately after cloning [email protected]:cloudposse/tutorials.git, running atmos terraform plan fetch-location --stack=example gives:

invalid 'import' section in the file 'catalog/example.yaml' 
shmileee avatar
shmileee

Removing import: [] from 02-atmos/stacks/catalog/example.yaml solved it
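
In other words, that version of Atmos rejects an empty import list; either omit the key entirely or list real imports (the import path below is hypothetical):

# invalid:
# import: []

# valid:
import:
  - catalog/some-defaults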

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a supported doc, it gets reviewed and updated often (and will be improved as Atmos evolves), and it points to the repo which you can start using

Zain Zahran avatar
Zain Zahran

Hello. I’m wondering what is the best way to version Atmos terraform components? …

Zain Zahran avatar
Zain Zahran

We have different releases of infrastructure environments which means we’re not JUST doing the standard dev/test/prod

The current set up is something like this: atmos/components/terraform/{{release_version}}/vpc

Where release_version could be something like v1.0.0 or v2.0.0

The reason for doing this is that it allows me to specify which version of the component I need in my atmos configuration, like below:

import:
 - path: catalog/interfaces/components/vpc-interface
components:
 terraform:
   vpc:
     metadata:
       component: "{{ .release_version }}/vpc"
       inherits:
         - vpc-interface

The drawback is that I would have to duplicate files across new versions, and that would mean duplicating code and not keeping things DRY.

Is there a simple solution to this that I’m not seeing and do you all have any suggestions/ideas around that? Seems like things point to vendoring, but I’m not sure if that would suffice as it would require constant vendor pulls and actual copies of the components which need to be kept updated..

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what you have described is a correct way of doing it (we do the same as well). There are a few notes:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• If you can add some flags to the component to make it support different functionality, that would be better since you have only one version of TF code to support. Then use the flag in vars to select the different functionality

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• If you can’t modify the component, then yes, you can have multiple folders with different versions. You can use components/terraform/{{release_version}}/vpc or components/terraform/vpc/{{release_version}} or any other variants

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Or, you can use Atmos vendoring to vendor the core of the component, then add some mixins to add custom files (the mixins can be remote or local). See https://atmos.tools/core-concepts/components/vendoring and https://atmos.tools/core-concepts/vendoring/. This way, you can vendor diff files into diff folders. This is similar to #2, but you don’t maintain all the component’s code, you maintain just the mixins (see the sketch after the links below)

Component Vendoring | atmos

Use Component Vendoring to make copies of 3rd-party components in your own repo.

Vendoring | atmos

Use Atmos vendoring to make copies of 3rd-party components, stacks, and other artifacts in your own repo.
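
A sketch of such a vendoring manifest (a component.yaml in the component folder; URIs and versions are illustrative):

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-vendor-config
spec:
  source:
    # vendor the core of the component from the upstream repo
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
    version: 1.300.0
    included_paths:
      - "**/*.tf"
  mixins:
    # overlay custom files on top of the vendored component
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf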

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are using 3rd-party components (e.g. https://github.com/cloudposse/terraform-aws-components/tree/main/modules), then #2 (which you explained) is the best way to do it (you just consider the diff versions of the component as diff components)

Zain Zahran avatar
Zain Zahran

Ah I see. I like the possibility of flags to support different functionalities and even a slightly different variation of the folder structure. Also, it helps to know it’s a valid approach to this. This is all helpful thanks!

2023-12-21

Mark avatar

Are there any configs for tuning the auto-generated backend?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:

    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure CLI | atmos

In the previous step, we’ve decided on the following:

Mark avatar

Yah, I saw that, but I was curious if it could be tuned in any way. How does it auto-generate the backend? Does it, or can it be told, to pick S3? Can a specific s3 bucket be specified while the key and everything else is dynamic?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, the whole thing consists of three steps (we’ll document it in https://atmos.tools/ in the next release):

Introduction to Atmos | atmos

Atmos is a workflow automation tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Add auto_generate_backend_file: true to atmos.yaml in components.terraform section
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Configure the backend in one of the _defaults.yaml manifests. You can configure it for the entire Org, or per OU/tenant, or per region, or per account. See https://github.com/cloudposse/atmos/blob/master/examples/tests/stacks/orgs/cp/_defaults.yaml#L7 as an example
  backend_type: s3 # s3, remote, vault, static, azurerm, etc.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform:
  backend_type: s3
  backend:
    s3:
      encrypt: true
      bucket: "cp-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "cp-ue2-root-tfstate-lock"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
      role_arn: xxxxxx
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. (This is optional) For each component, you can add workspace_key_prefix similar to https://github.com/cloudposse/atmos/blob/master/examples/tests/stacks/catalog/terraform/vpc.yaml
components:
  terraform:
    infra/vpc:
      metadata:
        component: infra/vpc
      backend:
        s3:
          workspace_key_prefix: infra-vpc
      settings:
        spacelift:
          workspace_enabled: true
        # Validation
        # Supports JSON Schema and OPA policies
        # All validation steps must succeed to allow the component to be provisioned
        validation:
          validate-infra-vpc-component-with-jsonschema:
            schema_type: jsonschema
            # 'schema_path' can be an absolute path or a path relative to 'schemas.jsonschema.base_path' defined in `atmos.yaml`
            schema_path: "vpc/validate-infra-vpc-component.json"
            description: Validate 'infra/vpc' component variables using JSON Schema
          check-infra-vpc-component-config-with-opa-policy:
            schema_type: opa
            # 'schema_path' can be an absolute path or a path relative to 'schemas.opa.base_path' defined in `atmos.yaml`
            schema_path: "vpc/validate-infra-vpc-component.rego"
            # An array of filesystem paths (folders or individual files) to the additional modules for schema validation
            # Each path can be an absolute path or a path relative to `schemas.opa.base_path` defined in `atmos.yaml`
            # In this example, we have the additional Rego modules in `stacks/schemas/opa/catalog/constants`
            module_paths:
              - "catalog/constants"
            description: Check 'infra/vpc' component configuration using OPA policy
            # Set `disabled` to `true` to skip the validation step
            # `disabled` is set to `false` by default, the step is allowed if `disabled` is not declared
            disabled: false
            # Validation timeout in seconds
            timeout: 10
      vars:
        enabled: true
        name: "common"
        nat_gateway_enabled: true
        nat_instance_enabled: false
        max_subnet_count: 3
        map_public_ip_on_launch: true
        dns_hostnames_enabled: true

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    infra/vpc:
      metadata:
        component: infra/vpc
      backend:
        s3:
          workspace_key_prefix: infra-vpc
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that this is optional - if you don’t add backend.s3.workspace_key_prefix to the component, the Atmos component name will be used automatically (which in this example is infra/vpc, but / will get replaced with -, so workspace_key_prefix will still be infra-vpc)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we usually don’t specify workspace_key_prefix for each component and let Atmos use the component name as workspace_key_prefix.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that these sections

  backend_type: s3
  backend:
    s3:
Mark avatar

Ahh .. thank you, this is exactly what I needed to know.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be defined at any level: Org, OU, region, account (and even component). They participate in all the inheritance https://atmos.tools/core-concepts/components/inheritance

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

meaning that part of the config can be defined at Org level, part at the OU level, part at the account level, etc. - then all the pieces will be deep-merged together into the final backend config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’m mentioning that b/c you will want to define different IAM roles to assume for each account (dev, staging, prod) in role_arn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so this part

terraform:
  backend_type: s3
  backend:
    s3:
      encrypt: true
      bucket: "cp-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "cp-ue2-root-tfstate-lock"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be defined at the Org level

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but this part

terraform:
  backend:
    s3:
      role_arn: xxxxxx
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be defined at the OU or account level with diff IAM roles

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos will deep-merge everything together and generate the final backend.tf.json file for each component taking the pieces from diff levels of the config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, you might want to have diff S3 buckets for the backend for diff Orgs (if you have multiple Orgs), or different accounts or OUs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in which case, this part goes to the Org level (_defaults.yaml)

terraform:
  backend_type: s3
  backend:
    s3:
      encrypt: true
      key: "terraform.tfstate"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and this part goes to the OU and/or account level

terraform:
  backend:
    s3:
      bucket: "xxxxxx"
      dynamodb_table: "xxxxxx"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so diff parts can be configured in diff manifests (Org, OU, region, account) depending on your needs
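
As a sketch of how the pieces connect, an account-level manifest might import the Org-level defaults so the two get deep-merged (the file paths, bucket names, and role ARN here are hypothetical):

# stacks/orgs/acme/plat/dev/_defaults.yaml
import:
  - orgs/acme/_defaults

terraform:
  backend:
    s3:
      bucket: "acme-dev-tfstate"
      dynamodb_table: "acme-dev-tfstate-lock"
      role_arn: "arn:aws:iam::111111111111:role/acme-dev-terraform"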

Mark avatar

Wonderful. Thank you for the thorough explanation. This has been a huge help.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Mark also note that while auto-backend generation is a nice feature to have/use (it saves you from creating those backend.tf files in each component), it actually becomes a requirement when you want to provision multiple Atmos component instances using the same Terraform component (e.g. many VPCs into the same account/region with diff names and settings)
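
A minimal sketch of what “multiple instances” means here: two Atmos components pointing at the same Terraform component via metadata.component (names and CIDRs are hypothetical). With auto-generation, each instance gets its own generated backend configuration and workspace instead of a single hand-written, hardcoded one:

components:
  terraform:
    vpc-main:
      metadata:
        component: infra/vpc
      vars:
        name: main
        cidr_block: "10.0.0.0/16"
    vpc-data:
      metadata:
        component: infra/vpc
      vars:
        name: data
        cidr_block: "10.1.0.0/16"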

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Final Notes | atmos

Atmos provides unlimited flexibility in defining and configuring stacks and components in the stacks.

Mark avatar

Does it auto generate the buckets as well?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you create the backend file manually, you’ll hardcode workspace_key_prefix for the component. But if you provision multiple components using the same TF code, one of them will not work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Does it auto generate the buckets as well?

Mark avatar

Yah, trying to sort of wrap my head around the total scope of how much it can do unattended. I am, in part, trying to understand if there is some behind-the-scenes magic for bootstrapping a bucket into IaC that is defined by the IaC for use by the IaC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

buckets, no. Neither Atmos nor TF generates anything in AWS; you need to provision it yourself. See https://github.com/cloudposse/terraform-aws-components/tree/main/modules/tfstate-backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-tfstate-backend

Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

tfstate-backend is another Atmos component, one that uses that Terraform module
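
For illustration, configuring it in a stack manifest might look like this (the variable names follow the cloudposse component, but treat the values as hypothetical):

components:
  terraform:
    tfstate-backend:
      vars:
        name: tfstate
        enable_server_side_encryption: true
        force_destroy: false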

Mark avatar

Interesting

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have all components in components/terraform including tfstate-backend - this is the code (logic) that can be deployed anywhere (any Org, any OU, any region, any account)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have stacks - this is the configuration of the components for diff environments (regions, Orgs, OUs, accounts) - managed by Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so all of that is about the separation of the code (logic) from the configuration, so the code is generic and can be deployed anywhere

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in many cases, with enterprise-grade infras (multi-org, multi-tenant, multi-account, multi-region, multi-team), the configuration is much more complicated than the code. That’s what Atmos is trying to solve - to make the configuration manageable, reusable (by using https://atmos.tools/core-concepts/stacks/imports and https://atmos.tools/core-concepts/components/inheritance) and DRY, and to make the code completely generic

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Mark avatar

Yah, I am looking at solving some multi account/team/tenant issues, which is pretty much exactly how I found atmos. Trying to conceptualize a best-practices starting point to capture all of my goals. Quite a bit of research and thinking and drawing things out.

Mark avatar

Also, that PR safe-guards the change to only affect Debian-based systems.

Mark avatar

I am having an interesting issue with uid:gid mappings between the Geodesic container and the host. The status message at startup makes it look like everything will be fine.

# BindFS mapping of /localhost.bindfs to /localhost enabled.
# Files created under /localhost will have UID:GID 1047026499:1047000513 on host.

But doing anything w/ git results in errors:

│ Error: Failed to download module
│ 
│   on context.tf line 21:
│   21: module "this" {
│ 
│ Could not download module "this" (context.tf:21) source code from "git::https://github.com/cloudposse/terraform-null-label?ref=0.24.1": error downloading 'https://github.com/cloudposse/terraform-null-label?ref=0.24.1': /usr/bin/git exited with
│ 128: fatal: detected dubious ownership in repository at '/localhost/Development/atmos/tutorials/03-first-aws-environment/components/terraform/tfstate-backend/.terraform/modules/this'
│ To add an exception for this directory, call:
│ 
│ 	git config --global --add safe.directory /localhost/Development/atmos/tutorials/03-first-aws-environment/components/terraform/tfstate-backend/.terraform/modules/this
│ .
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# Geodesic: https://github.com/cloudposse/geodesic/
ARG GEODESIC_VERSION=2.8.0
ARG GEODESIC_OS=debian

# atmos: https://github.com/cloudposse/atmos
ARG ATMOS_VERSION=1.51.0

# Terraform: https://github.com/hashicorp/terraform/releases
ARG TF_VERSION=1.6.5

FROM cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}

# Geodesic message of the day
ENV MOTD_URL="https://geodesic.sh/motd"

# Geodesic banner message
ENV BANNER="atmos"

ENV DOCKER_IMAGE="cloudposse/atmos"
ENV DOCKER_TAG="latest"

# Some configuration options for Geodesic
ENV AWS_SAML2AWS_ENABLED=false
ENV AWS_VAULT_ENABLED=false
ENV AWS_VAULT_SERVER_ENABLED=false
ENV GEODESIC_TF_PROMPT_ACTIVE=false
ENV DIRENV_ENABLED=false
ENV NAMESPACE="acme"

# Enable advanced AWS assume role chaining for tools using AWS SDK
# https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
ENV AWS_SDK_LOAD_CONFIG=1
ENV AWS_DEFAULT_REGION=us-east-2

# Install specific version of Terraform
ARG TF_VERSION
RUN apt-get update && apt-get install -y -u --allow-downgrades \
  terraform-1="${TF_VERSION}-*" && \
  update-alternatives --set terraform /usr/share/terraform/1/bin/terraform

# Install atmos
ARG ATMOS_VERSION
RUN apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"

COPY rootfs/ /

WORKDIR /

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

Mark avatar

Yah, same problem

Mark avatar
$ docker pull cloudposse/geodesic:latest
...
$ docker run -it --rm --volume "${PWD}:/workdir" --volume "${HOME}:/localhost" cloudposse/geodesic:latest init | bash
...
$ geodesic
Mark avatar
# Host filesystem device detection failed. Falling back to "path starts with /localhost".           

?

Mark avatar

@Andriy Knysh (Cloud Posse) any suggestions?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I don’t know what the error is about. Try not to run geodesic directly, but instead copy this Dockerfile (https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Dockerfile) and Makefile (https://github.com/cloudposse/atmos/blob/master/examples/quick-start/Makefile) and run make all

Mark avatar

Yah, I have kind of tried both, it breaks either way.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you run git config --global --add safe.directory as the message above suggests?

Mark avatar

yah, and that works, but it has to be done every time I jump into the container.

Mark avatar

like, out of the box it doesn’t work, and if I exit the container and restart it then I have to re-run that config
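
One possible workaround, assuming Geodesic still sources a preferences file from GEODESIC_CONFIG_HOME (/localhost/.geodesic in this setup) at startup, is to persist the setting on the host side; a hypothetical sketch:

# on the host; ~/.geodesic is mounted into the container as /localhost/.geodesic
mkdir -p ~/.geodesic
echo 'git config --global --add safe.directory "*"' >> ~/.geodesic/preferences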

Mark avatar

which feels like something is broken

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Git submodule update failed with 'fatal: detected dubious ownership in repository at'

I mounted a new hard disk drive in my Linux workstation. It looks like it is working well. I want to download some repository in the new disk. So I execute git clone XXX, and it works well. But whe…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are many suggestions in there

Mark avatar

Yah, we don’t have any submodules or anything. This happens out of the box w/out me doing anything w/ the container.

Mark avatar

In fact, this happens w/ the atmos repo out-of-the-box.

Mark avatar

and it doesn’t happen outside of the atmos container

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you mean geodesic?

Mark avatar

yah, that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jeremy G (Cloud Posse) do you have any insight on the error above?

Mark avatar

so yah… did …

$ git clone ssh://git@github.com/cloudposse/atmos.git atmos.git
$ cd atmos.git/examples/quick-start
$ make all
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when I start the same way, I don’t see the messages

# BindFS mapping of /localhost.bindfs to /localhost enabled.
# Files created under /localhost will have UID:GID 1047026499:1047000513 on host.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something is wrong with permissions?

Mark avatar

I am not certain there. This is a default Ubuntu install, nothing really special in the end.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#594 Files Written to Mounted Linux Home Directory Owned by Root User

what

• The user’s shell inside Geodesic runs as root
• The script that launches Geodesic bind-mounts the host user’s $HOME to /localhost to provide access to configuration files and allow for editing of host files
• Depending on the way Docker is set up, it is possible that files created under /localhost from within Geodesic will be set to the same owner UID and GID (that is, owned by root) on the host as they have within Geodesic.
• This appears to affect *only* users running the Docker daemon as root under Linux. It does not affect Docker for Mac or Docker for Windows, nor does it affect Docker for Linux when run in “rootless” mode.

Resolution

The recommended solution for Linux users is to run Docker in “rootless” mode. In this mode, the Docker daemon runs as the host user (rather than as root) and files created by the root user in Geodesic are owned by the host user on the host. Not only does this configuration solve this issue, but it provides much better system security overall.

Geodesic, as of v0.151.0, provides an alternative solution: BindFS mapping of file owner and group IDs. To enable this solution, either set (and export) the shell environment variable GEODESIC_HOST_BINDFS_ENABLED=true or launch Geodesic with the command line option --geodesic-host-bindfs-enabled. When this option is enabled, Geodesic will output

# Enabling BindFS mapping of file system owner and group ID.

among its startup messages. Note that if you enable BindFS mapping while running in “rootless” mode, it will actually cause files on the host to be created with a different owner and group, not root and not the host user. If you see this behavior, do not use BindFS mapping.

Mark avatar

I think that “Host filesystem device detection failed.” is the only thing that stands out

Mark avatar

Which sort of makes sense as this is a btrfs subvolume

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on what OS are you running the container?

Mark avatar
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.3 LTS
Release:	22.04
Codename:	jammy
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so your host OS is Ubuntu?

Mark avatar

yes

Mark avatar

I have pretty much only run Linux since 1993

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hmm, we usually run on MacOS. Let’s ask @Jeremy G (Cloud Posse) again if he has anything to say about the issue

Mark avatar

trying to figure out what is doing the rootfs probe

Mark avatar

oh .. found that …

Mark avatar

interesting

Mark avatar

uhhhh .. funny

Mark avatar
if [[ $GEODESIC_LOCALHOST_DEVICE == "disabled" ]]; then
  red "# Host filesystem device detection disabled."
elif df -a | grep -q " ${GEODESIC_LOCALHOST:-/localhost}\$"; then
  export GEODESIC_LOCALHOST_DEVICE=$(_file_device "${GEODESIC_LOCALHOST:-/localhost}")
  if [[ $GEODESIC_LOCALHOST_DEVICE == $(_file_device /) ]]; then
    red "# Host filesystem device detection failed. Falling back to \"path starts with /localhost\"."
    GEODESIC_LOCALHOST_DEVICE="same-as-root"
  fi
else
  export GEODESIC_LOCALHOST_DEVICE="missing"
fi
Mark avatar
$ df --output=source /
Filesystem
/dev/nvme0n1p2
$ df --output=source /localhost
Filesystem
/localhost.bindfs

Which makes sense .. the startup scripts are kicking out the bindfs mount

Mark avatar

does OSX not return the bind point? that seems an odd expectation

Mark avatar

I am not super tickled with this test .. I have no way to manually specify same-as-root and bypass the check

Mark avatar
function file_on_host() {
  if [[ $GEODESIC_LOCALHOST_DEVICE =~ ^(disabled|missing)$ ]]; then
    return 1
  elif [[ $GEODESIC_LOCALHOST_DEVICE == "same-as-root" ]]; then
    [[ $(readlink -e "$1") =~ ^/localhost ]]
  else
    local dev="$(_file_device "$1")"
    [[ $dev == $GEODESIC_LOCALHOST_DEVICE ]] || [[ $dev == $GEODESIC_LOCALHOST_MAPPED_DEVICE ]]
  fi
}
Mark avatar

Okay.. so if GEODESIC_LOCALHOST_DEVICE == disabled .. then we never check GEODESIC_LOCALHOST_MAPPED_DEVICE ?

Mark avatar

that seems like a bug…

Mark avatar

trying to understand why that test excludes a variable set in a different script

Mark avatar

oh .. even weirder…

env | grep ^GEO
GEODESIC_LOCALHOST=/localhost.bindfs
GEODESIC_LOCALHOST_DEVICE=same-as-root
GEODESIC_PORT=49406
GEODESIC_WORKDIR=/localhost/Development/atmos/atmos.git/examples/quick-start
GEODESIC_AWS_HOME=/localhost/.aws
GEODESIC_SHELL=true
GEODESIC_LOCALHOST_MAPPED_DEVICE=/localhost.bindfs
GEODESIC_TF_PROMPT_ACTIVE=false
GEODESIC_HOST_CWD=/home/mark.ferrell/Development/atmos/atmos.git/examples/quick-start
GEODESIC_HOST_UID=1047026499
GEODESIC_DEV_VERSION=
GEODESIC_CONFIG_HOME=/localhost/.geodesic
GEODESIC_OS=debian
GEODESIC_HOST_GID=1047000513
GEODESIC_VERSION=2.8.0
Mark avatar

oh .. nope .. that makes sense… disregard the previous

Mark avatar

oh .. the container isn’t mapping the getent data from the host

Mark avatar

found it

Mark avatar

Sooo .. yah .. this breaks w/ anything using nis, nis+, ldap, sssd, activedirectory, etc..

Mark avatar

/etc/profile.d/user.sh has a test that makes no sense

Mark avatar
id "${USER}" &>/dev/null
if [[ "$?" -ne 0 ]]; then
  if [[ -n "${USER_ID}" ]] && [[ -n "${GROUP_ID}" ]]; then
    adduser -D -u ${USER_ID} -g ${GROUP_ID} -h ${HOME} ${USER} &>/dev/null
  fi
fi
Mark avatar

that id "${USER}" is bad

Mark avatar

that block will work if this is done as:

if [[ -n "${USER}" ]] && [[ -n "${USER_ID}" ]] && [[ -n "${GROUP_ID}" ]]; then  
  adduser -D -u ${USER_ID} -g ${GROUP_ID} -h ${HOME} ${USER} &>/dev/null  
fi  
Mark avatar

or .. no .. hold it .. the test was right, but it isn’t being executed on my container?

Mark avatar
$ id "${USER}"
id: 'mark.ferrell': no such user
$ echo $?
1
Mark avatar
cat<<EOF
> user: ${USER}
> uid: ${USER_ID}
> gid: ${GROUP_ID}
> home: ${HOME}
> EOF
user: mark.ferrell
uid: 1047026499
gid: 1047000513
home: /conf
Mark avatar
adduser -D -u ${USER_ID} -g ${GROUP_ID} -h ${HOME} ${USER}
Option d is ambiguous (debug, disabled-login, disabled-password)
Option g is ambiguous (gecos, gid, group)
Mark avatar

oh funny .. the code doesn’t grab the group name… useful…

Mark avatar

yah .. so that is a problem…

Mark avatar

okay .. I think I have a fix for this .. but it requires patching geodesic and atmos

Mark avatar

so … there are 2 bugs…

  1. this particular use of adduser is not really portable
  2. the code expects users to be using existing groups for some reason, so if the group doesn’t already exist then adduser fails (particularly for nis/nis+/ldap/ad/sssd groups)
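
A rough sketch of the direction such a fix could take (Debian’s adduser/addgroup want unambiguous long options; this is illustrative, not the actual patch):

# create the group first if it doesn't already exist
getent group "${GROUP_ID}" >/dev/null 2>&1 ||
  addgroup --gid "${GROUP_ID}" "${GROUP:-${USER}}"

# then create the user with Debian-safe long options
id "${USER}" >/dev/null 2>&1 ||
  adduser --disabled-password --gecos "" --uid "${USER_ID}" --gid "${GROUP_ID}" --home "${HOME}" "${USER}"
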
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if that’s a solution, we can update geodesic (@Jeremy G (Cloud Posse) is a maintainer). (not related to Atmos)

Mark avatar

yah, I am trying to work through the build process so I can test this locally

Mark avatar

for some odd reason running make build in the geodesic directory ends up installing packages for the wrong target..

Mark avatar

$ curl -L "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_amd64/session-manager-plugin.deb"
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>TDHJEKZTRN72G72Q</RequestId><HostId>zqm91cObWUa/44r7iWWmFvBlg9UzRXpLHxylkTzNF8+rNFho/PtyLVoxeIitp0PIbrkJFZ98Vnw=</HostId></Error>
Mark avatar

oh .. I misread part of the output ..

Mark avatar

I hate being sick

Mark avatar
#900 fix: handling of user+group creation on Debian

The current handling of user, uid, and gid are incomplete and subject to failure.

• The adduser/addgroup commands on Debian based systems are much more pedantic than other platforms and will error out on some surprising conditions.
• We want to support situations in which the username or groupname is pulled from systems which support characters such as spaces in the names which are prohibited in the POSIX passwd/group files.
• We need to handle the condition in which the group the user belongs to does not already exist.

what

• Pass the groupname via the GROUP env variable in the wrapper script
• Create the group if it does not already exist
• Properly create the user on Debian based systems

why

Screenshot from 2024-01-10 12-05-57

Mark avatar

I am sort of wondering how this worked at all on a rather large stack of systems..

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Mark There are a few things going on here, none related to Atmos, but rather Geodesic, of which I am the lead maintainer.

First of all, the bindfs mount and uid and gid mappings are all workarounds for having Docker installed as a daemon running as root, which is not recommended and has been discouraged for a long time. I think all of these problems (except for the AWS session manager plugin) will go away for you if you install Docker in “rootless” mode.

What I’d like to know is what user and group names you had that were giving you problems, so I can try to reproduce it.

Run the Docker daemon as a non-root user (Rootless mode)

Run the Docker daemon as a non-root user (Rootless mode)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Mark You point out

the container isn’t mapping the getent data from the host

That is correct. It is not intended to.

Geodesic is 8 years old and still has a lot of outdated cruft that we leave around just in case someone is relying on it, as long as it isn’t interfering with anything. The user.sh script you updated was a hack for ansible that complained if id "${USER}" failed. It was written for Alpine and never updated for Debian (and never worked on Debian). Its point was to register the username as an alias for user root so that id "${USER}" would return 0 and not an error. In general it should not be important anymore. While your PR may get this working on Debian, it will break it on Alpine and I’d rather remove the whole thing. Why did you think this was needed? Is 1047026499:1047000513 the right UID:GID for your host?

As for why the “Host filesystem detection” fails, I’m at a loss. I’m guessing it has something to do with Ubuntu 22, which we have not tested yet. I cannot reproduce it on Ubuntu 20. From within Geodesic what does

df --output=source /localhost.bindfs

output?

Mark avatar

Soo .. the problem here is that git is refusing to work because the host rootfs UID and GID file ownership is not being mapped into the container.

Mark avatar

Updating the user.sh fixes the mapping and allows git to work from inside the container.

Mark avatar

And .. bindfs is running as a user, not as root

Mark avatar

I suspect the filesystem detection failed because the behavior you are expecting is not universal to all filesystems; it isn’t related to Ubuntu, or Linux actually.

Mark avatar

but it is also a red herring. The larger issue here is that:

  1. there was no user mapping created for the UID of the files which git was looking at. So git won’t work sanely unless the UID:GID can be looked up.
  2. the adduser command was invalid for debian based systems
  3. the user.sh script wasn’t attempting to create group mappings
Mark avatar

and yes, that UID:GID are the correct mappings

Mark avatar

and yes, you will need to deal with getent if you want any sane software to work from w/in a container such that it can trust the UID:GID mappings

Mark avatar
bindfs -o nonempty --create-for-user=1047026499 --create-for-group=1047000513 /localhost.bindfs /localhost
Mark avatar

W/out the above changes, using the default atmos geodesic:

$ atmos
... clipped all the cruft

 ⧉  geodesic 
 ✗ . [none] (HOST) quick-start ⨠ git status
fatal: detected dubious ownership in repository at '/localhost/Development/atmos/atmos.git'
To add an exception for this directory, call:

	git config --global --add safe.directory /localhost/Development/atmos/atmos.git
Mark avatar

Has anyone tried removing user.sh to see if git still works for them?

Mark avatar

My expectation is that it shouldn’t work if user.sh is removed, and even if it was added as an ansible hack at some point, it has likely kept this working accidentally for some time.

Mark avatar

But, whichever

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Mark Not to discount your problem, but to answer your question, we have many users successfully using Geodesic and Atmos and git on Ubuntu, WSL, and macOS, both ARM64 and AMD64 (Apple M*). All our engineers and many of our customers’ engineers use it every workday. Our standard operating procedure is to run Atmos from within Geodesic against a git repository hosted on the host system and mounted into the Docker container.

What you are experiencing is an issue with:

• Running Docker as root
• Requiring bindfs to do ownership translation between the owner in Geodesic (which must be root) and the owner of the files on the host system
• A relatively new addition to git where it checks ownership of directories where it reads configuration
• And an issue where we (I) used the wrong options to set up the bindfs mount.

All the behavior makes sense to me except for why you see “Host filesystem device detection failed.” And since I can’t reproduce your problem exactly, I’m asking again, what do you get as the output of

df --output=source /localhost.bindfs
Git security vulnerability announced

Upgrade your local installation of Git, especially if you are using Git for Windows, or you use Git on a multi-user machine.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Mark We have released Geodesic v2.8.1 which should fix this issue for you. LMK if you still have issues after updating.

Mark avatar

Honestly, I am very thankful for the information you have provided and all of the feedback. Based on information you previously provided I did find information on installing the rootless docker packages for Ubuntu 22.04.

Mark avatar

As far as the _file_device() oddity goes, when I was stepping through the code I thought I found that the code was doing the following:

$ df --output=source /
Filesystem
/dev/nvme0n1p2
$ df --output=source /localhost
Filesystem
/localhost.bindfs

But after looking at my own log more I realized that it wasn’t the case.

Mark avatar

My brain has been particularly broken this week. Anyway, my mis-identifying the second step led me astray.

Mark avatar
$ df --output=source /localhost.bindfs
Filesystem
/dev/nvme0n1p2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you try the new geodesic release 2.8.1?

Mark avatar

Not yet, been juggling a few things.

ballew avatar

Are there any other atmos examples out there other than what’s in the quick-start in the repo? I see there are a lot more validation tests for atmos and I’m wondering if anyone has some to share

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all these Terraform components (Terraform root modules) are actually configured with Atmos (see the READMEs) https://github.com/cloudposse/terraform-aws-components/tree/main/modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that those are “real” production-ready components used for many deployments, and they are all configured with Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

adding something more complicated to https://atmos.tools/category/quick-start would make it more difficult for new people to understand the basics

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I see there’s a lot more validate tests for atmos

ballew avatar

I’ll take a look, it’s been a while since I’ve used atmos and I’m going to rebuild this site around it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@ballew what exactly are you referring to? Maybe we can provide you with more information

ballew avatar
{
  "$id": "vpc-component",
  "$schema": "<https://json-schema.org/draft/2020-12/schema>",
  "title": "vpc component validation",
  "description": "JSON Schema for the 'vpc' Atmos component.",
  "type": "object",
  "properties": {
    "vars": {
      "type": "object",
      "properties": {
        "region": {
          "type": "string"
        },
        "cidr_block": {
          "type": "string",
          "pattern": "^([0-9]{1,3}\\.){3}[0-9]{1,3}(/([0-9]|[1-2][0-9]|3[0-2]))?$"
        },
        "map_public_ip_on_launch": {
          "type": "boolean"
        }
      },
      "additionalProperties": true,
      "required": [
        "region",
        "cidr_block",
        "map_public_ip_on_launch"
      ]
    }
  }
}
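
For reference, a schema like this gets attached to a component via settings.validation, in the same shape as the validation block shown earlier in this channel (the schema_path is illustrative):

components:
  terraform:
    vpc:
      settings:
        validation:
          validate-vpc-component-with-jsonschema:
            schema_type: jsonschema
            schema_path: "vpc-component.json"
            description: Validate the 'vpc' component variables using JSON Schema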

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you are referring to https://github.com/cloudposse/atmos/tree/master/examples/tests, then yes, that’s for ALL Atmos tests, and there is much more to Atmos than explained in the Quick Start

ballew avatar

Working off the quick-start there’s some stuff that doesn’t work out of the box, but it’s been a good way for me to figure out how to label things and separate out prod and dev

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configure Validation | atmos

Atmos supports Atmos Manifests Validation and Atmos Components Validation

ballew avatar

perfect, I’ll dig in from there

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.

2023-12-22

asierra avatar
asierra

hey there, I’m using atmos with terraform 1.6.6 on darwin_arm64, and I’m getting this error every time I run my plan:

╷
│ Error: Failed to install provider
│ 
│ Error while installing hashicorp/template v2.2.0: the local package for registry.terraform.io/hashicorp/template 2.2.0 doesn't match any of the checksums previously recorded in the dependency
│ lock file (this might be because the available checksums are for packages targeting different platforms); for more information: https://www.terraform.io/language/provider-checksum-verification
╵

In the past when I got this error in terraform (alone) I just went to the terraform folder and used https://github.com/kreuzwerker/m1-terraform-provider-helper to fix it, but this doesn’t work with atmos. Do you know what I can do to fix it on atmos?

kreuzwerker/m1-terraform-provider-helper

CLI to support with downloading and compiling terraform providers for Mac with M1 chip

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not related to Atmos, it’s a terraform issue. Try to delete the .terraform folder and the lock file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and maybe try to delete the $HOME/.terraform.d folder where terraform stores all the cached providers
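
Put together, the manual cleanup amounts to something like this sketch (the component path is a placeholder; see the atmos terraform clean command below for the built-in equivalent):

# run from the repo root; <component> is a placeholder
rm -rf components/terraform/<component>/.terraform
rm -f components/terraform/<component>/.terraform.lock.hcl
# optionally, the cached providers (note this folder can also hold credentials)
rm -rf "${HOME}/.terraform.d"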

asierra avatar
asierra

that did it! thank you Andriy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also use this command https://atmos.tools/cli/commands/terraform/clean (instead of going to the component folder and deleting the .terraform folder and the lock file manually)

atmos terraform clean | atmos

Use this command to delete the .terraform folder, the folder that the TF_DATA_DIR ENV var points to, the .terraform.lock.hcl file, varfile

2023-12-27

Guus avatar

Hi, we’re currently using atmos & cloudposse components to set up most of our infrastructure. We’ve set up an AWS Org with multiple accounts and two of those accounts have a VPC configured using the Cloudposse vpc component. I would now like to add vpc peering between these VPCs (which are in different regions). I am looking at the vpc-peering component documentation and see two use cases, peering v1 accounts to v2 accounts and v2 accounts to v2 accounts. What is the difference here and how do I know if my accounts are v1 or v2?

jose.amengual avatar
jose.amengual

could you share a transit gateway instead of peering?

Guus avatar

We probably could, would that be the better option? I want to allow connections from a lambda function in one account to a VPC connected Aurora (RDS) cluster in another account.

Guus avatar

It’s only 2 VPCs so I think vpc peering should suffice for this use-case.

jose.amengual avatar
jose.amengual

vpc peering does not have the same capabilities as a transit gateway

jose.amengual avatar
jose.amengual

if it is only two vpcs maybe it does not matter, but if you think about maybe having 3 or 4 in the future then the TGW is a better choice

jose.amengual avatar
jose.amengual

price-wise I don’t know if there is much difference

RB avatar

v1 is legacy aws accounts that were not created with atmos or cp refarch; v2 is aws accounts created with atmos and cp refarch

RB avatar

If you created all your accounts using the cp/atmos method then you should use the v2 method

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for multi-account and/or cross-region VPC peering, consider this module https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/vpc-peering uses the module and can create cross-account and cross-region VPC peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Use case: Peering v1 accounts to v2
Use case: Peering v2 accounts to v2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this def needs to be improved and described in more detail

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I suppose v1 means you specify accepter_aws_assume_role_arn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and v2 means you specify accepter_stage_name and the role ARN is determined by

  accepter_aws_assume_role_arn = var.accepter_stage_name != null ? module.iam_roles.terraform_role_arns[var.accepter_stage_name] : var.accepter_aws_assume_role_arn
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t use module.iam_roles (which uses the account-map component)

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

then provide the accepter_aws_assume_role_arn var
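
For illustration, the two options side by side in stack config; only accepter_stage_name and accepter_aws_assume_role_arn come from this thread, the surrounding names and the role ARN are hypothetical:

components:
  terraform:
    vpc-peering:
      vars:
        # v2: the role ARN is looked up via account-map
        accepter_stage_name: "prod"
        # v1: or provide the role explicitly instead
        # accepter_aws_assume_role_arn: "arn:aws:iam::222222222222:role/terraform"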

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(you def don’t need a TGW when peering just 2 VPCs, especially if you don’t need transitive routing and don’t have CIDR overlapping)

RB avatar

Is it possible to generate a dependency tree using atmos?

RB avatar

E.g. how vpc may depend on account-map, which depends on account, etc.?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It is

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
atmos describe dependents | atmos

This command produces a list of Atmos components in Atmos stacks that depend on the provided Atmos component.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Affected Stacks | atmos

Streamline Your Change Management Process

RB avatar

Very nice.

This would only work if depends_on is explicitly set for each component, right?

Is there a way to implicitly detect it by analyzing the terraform, perhaps by checking for usage of the remote state modules?

Or would it be better to generate the component default yaml with depends_on using a separate script to analyze the component tf code?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It requires explicit depends_on
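
For reference, a minimal sketch of such an explicit dependency in stack config, per the settings.depends_on schema (the component names are illustrative):

components:
  terraform:
    vpc:
      settings:
        depends_on:
          1:
            component: "account-map"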

2023-12-28

Release notes from atmos avatar
Release notes from atmos
08:04:34 PM

v1.52.0

what

Add additional and updated documentation for GitHub Actions with Atmos

why

GHA documentation was out-of-date

Release v1.52.0 · cloudposse/atmos

2023-12-29
