#atmos (2023-02)

2023-02-01

Release notes from atmos avatar
Release notes from atmos
04:34:36 PM

v1.26.0 what Update atmos describe affected command Add --repo-path flag to atmos describe affected command Update docs https://atmos.tools/cli/commands/describe/affected why The --repo-path flag specifies the filesystem path to the already cloned target repository with which to compare the current branch, and it can be used when working with private repositories Working with Private Repositories There are a few ways to…

Release v1.26.0 · cloudposse/atmos

what

Update atmos describe affected command Add --repo-path flag to atmos describe affected command Update docs https://atmos.tools/cli/commands/describe/affected

why

The --repo-path flag specif…

atmos describe affected | atmos

This command produces a list of the affected Atmos components and stacks given two Git commits.

2023-02-03

Release notes from atmos avatar
Release notes from atmos
11:34:39 PM

v1.26.1 what Update Go templates in stack configs why If a context is not provided (as if using import with context), don’t process Go templates in the imported configurations (templating can be used for Datadog, ArgoCD etc. w/o requiring Atmos to process them)

Release v1.26.1 · cloudposse/atmos

what

Update Go templates in stack configs

why

If a context is not provided (as if using import with context), don’t process Go templates in the imported configurations (templating can be used fo…

2023-02-09

Release notes from atmos avatar
Release notes from atmos
03:24:44 PM

v1.27.0 what && why Algolia search integration for Atmos docs

Release v1.27.0 · cloudposse/atmos

what && why

Algolia search integration for Atmos docs

Matthew Reggler avatar
Matthew Reggler

Hi, I’m using Atmos in a test deployment to improve my understanding of the Cloud Posse Terraform modules, and I’ve run into a problem when backfilling the access_roles for the tfstate-backend after the bucket has been created (and post-deployment of account + account-map).

Although I can view the outputs of the account-map module from a local command (atmos terraform output account-map -s core-gbl-root), attempts to access this data via component remote state are failing.

atmos terraform plan tfstate-backend -s core-ue1-root

...

│ Error: Attempt to get attribute from null value
│ 
│   on ../account-map/modules/roles-to-principals/outputs.tf line 12, in output "full_account_map":
│   12:   value       = module.account_map.outputs.full_account_map
│     ├────────────────
│     │ module.account_map.outputs is null
│ 
│ This value is null, so it does not have any attributes.

I can’t quite see how the tenant, env, and stage passed to the remote state module in roles-to-principals differ from those used on the command line, so is there configuration elsewhere that I’ve missed?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

show how you use the remote-state module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

context must be provided to it

context = module.this.context
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.

Matthew Reggler avatar
Matthew Reggler
module "account_map" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.3.1"

  component   = "account-map"
  privileged  = var.privileged
  tenant      = var.global_tenant_name
  environment = var.global_environment_name
  stage       = var.global_stage_name

  context = module.always.context
}

The remote state is the one defined in a vendor component, the child module of account-map

Matthew Reggler avatar
Matthew Reggler

I have a remote_state_backend configured in a _defaults.yaml (for now)

  remote_state_backend:
    s3:
      bucket: "pd-core-ue1-root-tfstate"
      dynamodb_table: "pd-core-ue1-root-tfstate-lock"
      key: "terraform.tfstate"
      role_arn: "arn:aws:iam::root_acc_id:role/seed_role"

Thought that was the sufficient condition to hook things up

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are these set correctly?

tenant      = var.global_tenant_name
environment = var.global_environment_name
stage       = var.global_stage_name
Matthew Reggler avatar
Matthew Reggler

roles-to-principals defaults them to core, gbl, and root, and I’m not deviating from the standard, so that should work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not easy to figure out what’s wrong; you can zip the project up and DM me

1
Matthew Reggler avatar
Matthew Reggler

Seem to have solved this; it was a problem with the s3 remote state of the account-map component. I made a stub component to test the remote state modules of various deployed components, and each had non-null outputs, except account-map

A complete delete and clean of the directory and a redeploy fixed the problem. Cheers for the help yesterday

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

super, glad you solved it. I looked at your code yesterday but didn’t find anything obvious, that’s why I wanted to recheck it today

2023-02-10

Release notes from atmos avatar
Release notes from atmos
04:14:49 PM

v1.28.0 what Add component_path to the outputs of atmos describe affected command why Output the path to the terraform or helmfile component Useful when we want to run certain workflows (e.g. in GitHub actions) for specific components and need to know what the component path is for each

Release v1.28.0 · cloudposse/atmos

what

Add component_path to the outputs of atmos describe affected command

why

Output the path to the terraform or helmfile component Useful when we want to run certain workflows (e.g. in GitHub …
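To illustrate how the new component_path field can drive per-component workflows: the JSON below is hypothetical, sketched from the release note rather than captured from a real run, so treat the field values as made up.

```python
import json

# Hypothetical `atmos describe affected` JSON: per the v1.28.0 release note,
# each affected entry now carries a `component_path`
affected_json = """[
  {"component": "vpc", "stack": "plat-ue2-dev",  "component_path": "components/terraform/vpc"},
  {"component": "vpc", "stack": "plat-ue2-prod", "component_path": "components/terraform/vpc"}
]"""

# Collect the unique component paths, e.g. to fan out one CI job per directory
paths = sorted({entry["component_path"] for entry in json.loads(affected_json)})
print(paths)  # ['components/terraform/vpc']
```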

2023-02-13

Viacheslav avatar
Viacheslav

Hi,

I want to extract some JSON properties from the terraform output, but I’m having some trouble with it. When I try to save the terraform output to a file:

atmos terraform "output -json > output.json" main -s security --skip-init

I get an error message, because Atmos recognizes it as a single terraform command, not as a command plus arguments:

│ Error: Unexpected argument
│ 
│ The output command expects exactly one argument with the name of an output variable or no arguments to show all outputs.

The alternative doesn’t fit my requirements either:

atmos terraform "output -json" main -s security --skip-init > output.json

because the output contains not only the terraform outputs but also atmos logs like

...
Executing command:
/usr/bin/terraform output -json
...

Is there a way to pass the > output.json argument through to terraform in Atmos, or maybe turn off Atmos’s stdout for a specific workflow step? Does Atmos have a native way to allow this? The final goal is to read the service principal password created in terraform and call az login to switch the user before running the next step.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) all atmos output should be to stderr IMO

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe atmos output can be squelched with log levels, but we should still handle this use case, as pipelining commands is common Unix behavior

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


all atmos output should be to stderr IMO

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think no, b/c we have many situations where we don’t want anything in stderr, e.g. in GH actions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and we even have a PR to disable stderr completely

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#322 Add `--redirect-stderr` flag to `atmos` commands

what

• Add --redirect-stderr flag to atmos commands

why

• The flag specifies the file descriptor to redirect stderr to when atmos executes subcommands like atmos terraform and atmos helmfile • Errors can be redirected to any file or any standard file descriptor (including /dev/null and /dev/stdout) • Useful in GitHub actions to prevent failures when any error sent to stderr makes the action fail

examples

atmos terraform plan test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform plan test/test-component-override -s tenant1-ue2-dev --redirect-stderr ./errors.txt

atmos terraform apply test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform apply test/test-component-override -s tenant1-ue2-dev --redirect-stderr ./errors.txt

atmos terraform destroy test/test-component-override-2 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform destroy test/test-component-override -s tenant1-ue2-dev --redirect-stderr /dev/null

atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/null
atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/stdout
atmos terraform workspace test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr ./errors.txt

test

atmos terraform plan test/test-component-override-3 -s tenant1-ue2-dev
Executing command:
/usr/local/bin/terraform workspace select test-component-override-3-workspace

# This is an error generated by terraform and sent to `stderr`
Workspace "test-component-override-3-workspace" doesn't exist.

You can create this workspace with the "new" subcommand.

Executing command:
/usr/local/bin/terraform workspace new test-component-override-3-workspace
Created and switched to workspace "test-component-override-3-workspace"!
atmos terraform plan test/test-component-override-3 -s tenant1-ue2-dev --redirect-stderr /dev/null
Executing command:
/usr/local/bin/terraform workspace select test-component-override-3-workspace

Executing command:
/usr/local/bin/terraform workspace new test-component-override-3-workspace
Created and switched to workspace "test-component-override-3-workspace"!
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c we had issues with GH actions seeing errors in stderr

Viacheslav avatar
Viacheslav

Disabling stderr is a good feature when we want to show only relevant information about a process and handle its errors separately, or even ignore them. But currently Atmos logs all service information to stdout along with the terraform output, and there is no way to separate the terraform output from the atmos output (without writing an ugly wrapper to find the first curly brace, etc.), am I right?

I see these kinds of entries in my file:

Found stack config files:
Variables for the component 'main' in the stack 'security':
Writing the variables to file:
Command info:
Executing command:

along with the output from terraform when I redirect stdout to the file, but stderr is empty. So disabling stderr is not a magic bullet in this case
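For what it’s worth, the “find the first curly brace” wrapper mentioned above can be a few lines of throwaway code. This sketch assumes the standard `terraform output -json` shape (a map of output name to sensitive/type/value); the sp_password output name and value are made up:

```python
import json

def extract_terraform_json(mixed: str) -> dict:
    """Drop everything before the first '{' (the atmos log lines) and parse the rest."""
    start = mixed.find("{")
    if start == -1:
        raise ValueError("no JSON object found in the captured output")
    return json.loads(mixed[start:])

# atmos log lines followed by `terraform output -json` content (values made up)
mixed = """Found stack config files:
Executing command:
{"sp_password": {"sensitive": true, "type": "string", "value": "example"}}"""

outputs = extract_terraform_json(mixed)
print(outputs["sp_password"]["value"])  # example
```

This only works when the first `{` in the captured stream really is the start of the JSON document, which holds for the atmos log lines quoted above but is fragile in general.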

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Viacheslav atmos terraform ... currently does not support piping only the terraform output into a file, b/c atmos always outputs some info. We need to add LOG levels, with one level that completely suppresses any output from atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently, you can do the following:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform shell main -s security 
terraform output > file.json
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


So stderr disabling is not a magic bullet in this case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, this is not for this case; it’s mostly for GitHub actions, to redirect stderr from other binaries when we don’t want errors in stderr in GH actions. I was trying to point out that we don’t need to redirect all atmos output to stderr, we need to introduce LOG levels

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Viacheslav for manual one-time execution, this should work for you

atmos terraform shell main -s security 
terraform output > file.json
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will not work in some cases in CICD. We’ll try to implement log levels in atmos soon

Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) Thank you! atmos terraform shell is enough for testing purposes right now. Log levels would be a very welcome feature; we hope to see it in upcoming releases :)

1
kevcube avatar
kevcube

I’m experiencing this problem today, hoping to use terraform output > out.json, but it has atmos output in it. I have used --redirect-stderr but still have some atmos output in my out.json. I will try the shell command above.

kevcube avatar
kevcube

Also, Terragrunt outputs only to stderr but can be run in CI; I agree with @Erik Osterman (Cloud Posse) that it makes sense to send all atmos output to stderr

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it will not work in all cases, e.g. some GH actions)

kevcube avatar
kevcube

I am getting a really strange error here

kevcube avatar
kevcube

and when the process exited, I got kicked out into a subshell inside the component directory. @Andriy Knysh (Cloud Posse) do you see what I did wrong?

kevcube avatar
kevcube

reordered the command to more closely match the above suggestion and still get the same error

@Viacheslav for manual one-time execution, this should work for you

atmos terraform shell main -s security 
terraform output > file.json
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those are two diff commands

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

first, you need to execute

atmos terraform shell <component> -s <stack>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in the new shell you can execute any native TF command w/o using atmos syntax

kevcube avatar
kevcube

Ohhh I see

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform shell | atmos

This command starts a new SHELL configured with the environment for an Atmos component in a stack, to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Viacheslav @kevcube please see this release https://github.com/cloudposse/atmos/releases/tag/v1.33.0

1
Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) Thanks, looks good!

1
Release notes from atmos avatar
Release notes from atmos
03:54:39 PM

v1.29.0 what Add --redirect-stderr flag to atmos commands If the --redirect-stderr flag is not passed, always redirect stderr to stdout for the terraform workspace select command why The flag specifies the file descriptor to redirect stderr to when atmos executes subcommands like atmos terraform and atmos helmfile Errors can be redirected to any file or any standard file descriptor (including /dev/null and /dev/stdout) Useful in GitHub actions to prevent failures when any error sent to stderr makes the…

Release v1.29.0 · cloudposse/atmos

what

Add --redirect-stderr flag to atmos commands If the --redirect-stderr flag is not passed, always redirect stderr to stdout for the terraform workspace select command

why

The flag specifies the file…

2023-02-16

Matthew Reggler avatar
Matthew Reggler

I’ve gotten a bit stuck on how the aws-sso component provisions identity account permission sets; can anyone spot what I’ve missed here:

Following earlier conversations in this channel, for my atmos experiments I’m using the {tenant}-{stage} format for account names in account, along with a global descriptor_formats addition (described here link to comment), resulting in account_map outputs like the following:

terraform_roles = {
  "core-auto" = "arn:aws:iam::{acc_id}:role/eg-core-gbl-auto-terraform"
  "core-identity" = "arn:aws:iam::{acc_id}:role/eg-core-gbl-identity-admin"
  ...
}

While this has worked well elsewhere, in aws-sso the policy-Identity-role-RoleAccess.tf uses stage alone to perform a full_account_map lookup, which is an invalid index (“identity” vs “core-identity”)

My solution was to implement a lookup like the one described by @Jeremy G (Cloud Posse) in the linked comment above:

locals {
  # add this local to convert from "identity" to "core-identity"
  account_name = lookup(module.this.descriptors, "account_name", var.iam_primary_roles_stage_name)

  # edit this lookup to use my new local
  identity_account = module.account_map.outputs.full_account_map[local.account_name]
}

module "role_prefix" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  stage = var.iam_primary_roles_stage_name

  context = module.this.context
}

The alternative, setting var.iam_primary_roles_stage_name to the full “core-identity” string, results in a bad role name later on, as it is passed to “stage” in the role_prefix null label module (you end up with ARNs like arn:aws:iam::{acc_id}:role/eg-core-gbl-core-identity-admin)

Is adding module.this.descriptors lookups to solve these kinds of issues the correct move here?


# This file generates a permission set for each role specified in var.target_identity_roles
# which is named "Identity<Role>RoleAccess" and grants access to only that role,
# plus ViewOnly access because it is difficult to navigate without any access at all.

locals {
  identity_account = module.account_map.outputs.full_account_map[var.iam_primary_roles_stage_name]
}

module "role_prefix" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  stage = var.iam_primary_roles_stage_name

  context = module.this.context
}

data "aws_iam_policy_document" "assume_identity_role" {
  for_each = local.enabled ? var.identity_roles_accessible : []

  statement {
    sid = "RoleAssumeRole"

    effect = "Allow"
    actions = [
      "sts:AssumeRole",
      "sts:TagSession",
    ]

    resources = [
      format("arn:${local.aws_partition}:iam::%s:role/%s-%s", local.identity_account, module.role_prefix.id, each.value)
    ]

    /* For future reference, this tag-based restriction also works, based on
       the fact that we always tag our IAM roles with the "Name" tag.
       This could be used to control access based on some other tag, like "Category",
       so is left here as an example.

    condition {
      test     = "ForAllValues:StringEquals"
      variable = "iam:ResourceTag/Name"  # "Name" is the Tag Key
      values   = [format("%s-%s", module.role_prefix.id, each.value)]
    }
    resources = [
      # This allows/restricts access to only IAM roles, not users or SSO roles
      format("arn:aws:iam::%s:role/*", local.identity_account)
    ]

    */

  }
}

locals {
  identity_access_permission_sets = [for role in var.identity_roles_accessible : {
    name               = format("Identity%sRoleAccess", title(role)),
    description        = "Allow user to assume %s role in Identity account, which allows access to other accounts",
    relay_state        = "",
    session_duration   = "",
    tags               = {},
    inline_policy      = data.aws_iam_policy_document.assume_identity_role[role].json
    policy_attachments = ["arn:${local.aws_partition}:iam::aws:policy/job-function/ViewOnlyAccess"]
  }]
}

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Sorry @Matthew Reggler (cc @Erik Osterman (Cloud Posse)), the published aws-sso component is old and has not been updated. The good news is that yes,

account_name = lookup(module.this.descriptors, "account_name", var.iam_primary_roles_stage_name)

is the way to move forward.
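The semantics of that expression (Terraform’s lookup(map, key, default)) can be mimicked in a few lines to show why the descriptor wins when present and the bare stage name is used otherwise; the values below mirror the “identity”/“core-identity” example from this thread:

```python
def lookup(mapping: dict, key: str, default: str) -> str:
    # Mimics Terraform's lookup(map, key, default)
    return mapping.get(key, default)

# With the descriptor set, the full "core-identity" account key is used
descriptors = {"account_name": "core-identity"}
print(lookup(descriptors, "account_name", "identity"))  # core-identity

# Without the descriptor, the lookup falls back to the bare stage name
print(lookup({}, "account_name", "identity"))  # identity
```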


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Linda Pham (Cloud Posse) we should get this component updated this week then.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Heads up, some more updates went out to aws-sso

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think there may be more coming

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, we created refarch so we can move these conversations out of atmos

Igor avatar

Hi everyone. Are there any examples of a directory structure that would be recommended for multi-cloud, multi-tenant environments using Atmos + Terragrunt + Terraform? I am interested in learning more about using Atmos to deploy multi-tenant, multi-cloud environments across AWS and GCP. Dev, Test, Prod can have multiple tenants that are almost identical but run in separate cloud accounts across clouds. For example, for Prod we may have 2 tenants on AWS and 5 tenants on GCP. The number of tenants is under 20, not in the 100s. (We are already using Terragrunt and are looking for a better way to refactor our code to keep it DRY and easy to maintain across multiple environments/tenants.)

jose.amengual avatar
jose.amengual

I will say terragrunt is not needed in this case if you use atmos, and I do not think it will even be compatible

jose.amengual avatar
jose.amengual

managing remote state with Atmos is trivial so no need for it

RB avatar

Here is the refarch repo structure for atmos

https://github.com/cloudposse/atmos/tree/master/examples/complete

And atmos.tools documentation.

The stacks/orgs directory contains the root stacks, stacks/catalogs contains per-terraform-root-dir yaml that gets turned into tfvars, and the components/terraform directory houses all of the terraform root dirs

Pepe is right. Both atmos and terragrunt are terraform wrappers and result in different code structures. I do not think it’s possible to combine them.

Why are you trying to combine them? What’s your use case?

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There are other alternative layouts proposed here https://atmos.tools/core-concepts/stacks/catalogs

Stack Catalogs | atmos

Catalogs are how to organize all Stack configurations for easy imports.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Igor I’d be happy to do a screenshare and give you some more ideas

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman

Welcome to my scheduling page. Please follow the instructions to add an event to my calendar.

Igor avatar

Great, this is what I was looking for! We are already using Terragrunt, and we like it. However, over time the directory structure has gotten messy and inconsistent across teams/clouds, so we are in the process of refactoring it. I was hoping to introduce Atmos as an additional layer without removing Terragrunt.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It should be possible. That’s probably the right move as a first step to reduce risk.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Part of our motivation in writing atmos though was to ensure it works with vanilla terraform. I can see why you might want to keep using terragrunt if you’re using a lot of complex terragrunt.hcl files.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s a way to specify the terraform command, which you would want to override, in the atmos.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) seems like we didn’t document it here: https://atmos.tools/cli/configuration#components

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe you would add something like this to the atmos.yaml

components:
  terraform:
    command: /usr/bin/terragrunt
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, since you are using terragrunt, you’ll probably want to disable the backend autogeneration in atmos

Igor avatar

That would certainly simplify the migration! I would like to reduce the initial refactoring effort by not having to refactor all the Terragrunt code, and instead just move the directories, update the links, etc.

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
components:
  terraform:
    auto_generate_backend_file: false
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, agree

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep us posted on your progress. If you run into any gotchas, let us know or open an issue. Chances are there’s an easy fix on our side.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’d love to write up a page for “How to migrate from Terragrunt to Atmos”

Igor avatar

Thank you Erik, I will pick a time slot for us to connect next week.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Igor, we need to write more documentation on different atmos topics. If you want us to review your work, you can add us to your repo or DM the repo structure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ve designed infra using atmos for many different use cases, and would be happy to help review your design as well

Igor avatar

Sounds good! I would love to see some documentation about how to migrate from Terragrunt to Atmos, especially if it’s possible to keep the original Terragrunt code and even allow adding to it. Over time DevOps engineers will get more comfortable with the new Atmos approach, but keeping the old approach (Terragrunt) would simplify the migration. Again, only if it makes sense, since we don’t want to create a monster. Our life is already complicated by the fact that we need to work across aws/gcp/azure.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let’s start with your repo. I’m not super familiar with Terragrunt and will need something real to look at and start with; then we can write a doc on migrating from Terragrunt to Atmos, or on how to use them together

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, just a few notes on multi-cloud, multi-org design (I’m planning to write a doc about that as well)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

components folder:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components/
   terraform/
     aws/
       vpc/
       vpc-log-bucket/
     gcp/
       component-1/
       component-2/
     azure/
       component-1/
       component-2/ 
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

separate the logic (code) by cloud, since the code is different

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stacks folder:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
stacks/
   catalog/
     aws/
       vpc/
         base.yaml
       vpc-log-bucket/
         base.yaml
     gcp/
       component-1/
         base.yaml
       component-2/
         base.yaml
     azure/
       component-1/
         base.yaml
       component-2/ 
         base.yaml
   mixins/
     aws/
      region/
      stage/
     gcp/
       region/
       stage/
     azure/
       region/
       stage/
   orgs/
     org1/
       core/
         dev/
         prod/
         staging/   
       plat/
         dev/
         prod/
         staging/   
     org2/
       core/
         dev/
         prod/
         staging/   
       plat/
         dev/
         prod/
         staging/   
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then let’s take, for example, the org1-core-ue1-dev stack:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the file stacks/orgs/org1/core/dev/us-east-1.yaml, add this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:
  # <Import the components from the catalog required for this stack>
  - catalog/aws/vpc/base
  - catalog/gcp/component-1/base

components:
  terraform:
    vpc:
      metadata:
        component: aws/vpc  # Point to the AWS VPC Terraform component
      vars: {} # Override the default vars in `base.yaml` if needed

    gcp-component-1:
      metadata:
        component: gcp/component-1 # Point to the GCP Terraform component
      vars: {} # Override the default vars in `base.yaml` if needed
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just an idea ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this shows you how you can create different Atmos components and point them to the TF code for diff clouds

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

of course, you can create different orgs or different tenants dedicated only to a specific cloud. For example, your org1 could work only with AWS, while org2 could work only with GCP, etc.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in which case, you would import the components and create Atmos components in the root stacks only for that specific cloud

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

similar with tenants inside one org

Igor avatar

Yes, that would be the more common use case - tenants that are limited to one cloud. The main reason for multi-cloud on our side is that companies have a strong preference for working with a specific cloud provider. I also see it as an industry trend. So usually tenants are not replicated across clouds.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is completely flexible and you can structure it in different folders and sub-folders (in both the components and stacks folders)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anything can be in any subfolder at any level

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

just be consistent

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ve accumulated a lot of patterns and best practices already, just need to document all of that

Igor avatar

Yes the structure of stacks above looks great, it can consistently scale across many tenants/environments.

Igor avatar

Just one more question - how do Terraform workspaces relate to this structure? Is it in separate folder per tenant or per environment?

jose.amengual avatar
jose.amengual

I stand corrected, after reading the posts I think it is definitely possible

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


how do Terraform workspaces relate to this structure? Is it in separate folder per tenant or per environment?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF workspaces are handled automatically by Atmos, taking into account the stack name (e.g. org1-core-ue1-dev) and the Atmos component name (e.g. vpc or gcp-component-1)
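To make that concrete, here is a hypothetical sketch of the workspace-name derivation. The function name and the exact rule are illustrative assumptions inferred from the convention described in this thread, not the actual Atmos source:

```python
# Hypothetical sketch of Atmos's Terraform workspace naming convention.
# The rule below is an illustrative assumption, not the actual Atmos code.
def terraform_workspace(stack: str, atmos_component: str, terraform_component: str) -> str:
    # If the Atmos component name matches the Terraform component folder name,
    # the workspace is simply the stack name; otherwise the Atmos component
    # name is appended so multiple instances of the same code stay distinct.
    if atmos_component == terraform_component.split("/")[-1]:
        return stack
    return f"{stack}-{atmos_component}"

print(terraform_workspace("org1-core-ue1-dev", "vpc", "aws/vpc"))
# org1-core-ue1-dev
print(terraform_workspace("org1-core-ue1-dev", "gcp-component-1", "gcp/component-1"))
# org1-core-ue1-dev-gcp-component-1
```

Under this assumed rule, the two components from the example stack above would land in separate workspaces of the same stack.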

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note, that @Igor already has tfstate. so atmos handles workspaces, but won’t know what to do about the state you have managed by terragrunt. So either you’ll want to disable atmos usage of workspaces for state, or migrate the state. We do not have a process to migrate that state.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Igor let me know when you get to workspaces, I’ll show you how to override them per stack/component to keep the current ones

1

2023-02-17

2023-02-19

2023-02-20

Viacheslav avatar
Viacheslav

Hi everyone, can somebody advise me please?

I have a Helmfile component helmfile in the ./components/helmfile/ directory. The component contains a helmfile.d folder as well as environments and bases directories; there is no helmfile.yaml, and this structure works well with standalone Helmfile execution.

In Atmos I have a stack mid with some variables. When I execute atmos helmfile template helmfile -s mid, the varfile ./components/helmfile/mid-helmfile.helmfile.vars.yaml is created, but I get the following error:

Executing command:
/usr/local/bin/helmfile --state-values-file mid-helmfile.helmfile.vars.yaml template
in helmfile.d/00-cluster-addons.yaml: environment values file matching "mid-helmfile.helmfile.vars.yaml" does not exist in "."

When I move this file from the root of the component to the ./components/helmfile/helmfile/helmfile.d/ directory, it works well. But how do I tell Atmos to use the state-values file located in the root of the component folder, where it was generated, rather than look for it in helmfile.d/?

I know that I can run the separate command atmos helmfile generate varfile with the --file option, but it would be nice if I could set the default destination of the state-values file, or change the path where Helmfile looks for it (the root of the component folder, not helmfile.d). Thanks!

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please show the Atmos component YAML definition in your stack

Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) Sure, this is my mid.yaml stack file:

components:
  helmfile:
    helmfile: 
      vars:
        provider: azure

vars:
  stage: mid

also atmos.yaml contains this component configuration:

base_path: "."

components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: true
  helmfile:
    base_path: "components/helmfile"
    kubeconfig_path: "~/.kube"
    use_eks: false
    cluster_name_pattern: "{stage}-cluster"

stacks:
  base_path: "stacks"
  included_paths:
    - "**/*"
  name_pattern: "{stage}"

workflows:
  base_path: "workflows"

logs:
  verbose: true
  colors: true
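As an aside, the stacks.name_pattern in this atmos.yaml is what turns context variables into a stack name. A minimal, hypothetical sketch of that substitution (not the actual Atmos implementation):

```python
# Minimal, hypothetical sketch of how a stacks.name_pattern such as "{stage}"
# could be resolved from context variables (not the actual Atmos code).
def resolve_stack_name(pattern: str, context: dict) -> str:
    name = pattern
    for key, value in context.items():
        name = name.replace("{" + key + "}", value)
    return name

# With name_pattern "{stage}" and vars.stage "mid", the stack name is "mid",
# which is why the component is targeted with `-s mid` in this thread.
print(resolve_stack_name("{stage}", {"stage": "mid"}))  # mid
```

A richer pattern such as "{tenant}-{environment}-{stage}" would resolve the same way, producing names like the org1-core-ue1-dev example earlier in the thread.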
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the name helmfile is not a good name for the component, try to name it something meaningful (e.g. what the component does)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  helmfile:
    my-component:
      metadata:
        component: my-component/helmfile.d # Point to the helmfile component code (on disk)
      vars:
        provider: azure
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the metadata.component attribute tells Atmos where to find the component’s folder. Atmos will generate the varfile in this folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so if your helmfile component works from my-component/helmfile.d, you can point Atmos to it (note that for Helmfile to work, a helmfile.yaml should be in the same folder, or a similar file which Helmfile expects as the starting point)
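Putting that together, here is a hypothetical sketch of where the varfile would land under this configuration. The helper name and the filename rule are assumptions inferred from the varfile name seen earlier (mid-helmfile.helmfile.vars.yaml, i.e. {stack}-{component}.helmfile.vars.yaml), not the actual Atmos source:

```python
import os

# Hypothetical sketch of the varfile location, inferred from the filename
# pattern seen above ({stack}-{component}.helmfile.vars.yaml); the helper
# name and rule are illustrative assumptions, not the actual Atmos source.
def helmfile_varfile_path(base_path: str, component_path: str, stack: str, component: str) -> str:
    # Atmos writes the varfile into the folder that metadata.component points
    # to, which is also the working directory when Helmfile runs.
    filename = f"{stack}-{component}.helmfile.vars.yaml"
    return os.path.join(base_path, component_path, filename)

print(helmfile_varfile_path("components/helmfile", "my-component/helmfile.d", "mid", "my-component"))
# components/helmfile/my-component/helmfile.d/mid-my-component.helmfile.vars.yaml
```

Because the varfile is generated in the same folder Helmfile runs from, the "environment values file does not exist" error from earlier should not occur with this layout.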

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Viacheslav

Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) mm, got it! Thanks :)

Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) I have an additional question - is it possible to use an absolute path when running atmos helmfile apply? So that under the hood, instead of

helmfile -e asa -l app=kubeflow-core --state-values-file=somestatefile.yaml apply

be able to run:

helmfile -e asa -l app=kubeflow-core --state-values-file=${PWD}/somestatefile.yaml apply

The idea is to keep the root of the repo as the working directory, because with helmfile.d, Helmfile goes into that folder and looks for values there instead of in the root folder. It is easy to fix by adding the ${PWD} prefix, but is there something in Atmos that helps with this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not currently. atmos always uses the component’s folder to write files to and as the working dir

1

2023-02-21

2023-02-23

2023-02-24

Release notes from atmos avatar
Release notes from atmos
07:04:35 PM

v1.30.0 what Update and enhance Atlantis Integration Update and overhaul the Atlantis Integration docs https://atmos.tools/integrations/atlantis/ Add --format and --file flags to atmos describe component command why

Allow configuring Atlantis Integration in the settings.atlantis section in the YAML stack configs (instead, or in addition to, configuring it in integrations.atlantis in atmos.yaml) Configuring the Atlantis…

Release v1.30.0 · cloudposse/atmosattachment image

what

Update and enhance Atlantis Integration Update and overhaul the Atlantis Integration docs https://atmos.tools/integrations/atlantis/ Add --format and --file flags to atmos describe component …

Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

1