#atmos (2022-10)

2022-10-02

Release notes from atmos avatar
Release notes from atmos
01:14:34 AM

v1.9.0 what Add atmos components validation using JSON Schema and OPA policies why Validate component config (vars, settings, backend, and other sections) using JSON Schema Check if the component config (including relations between different component variables) is correct to allow or deny component provisioning using OPA/Rego policies Implement validation by atmos validate component command, and by adding a new section settings.validation to the component stack config to be used in other atmos…
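
For reference, the new `settings.validation` section looks roughly like this in a component's stack config (a minimal sketch; the component name and schema paths are hypothetical):

components:
  terraform:
    vpc:
      settings:
        validation:
          validate-vpc-component-with-jsonschema:
            schema_type: jsonschema
            schema_path: vpc/validate-vpc-component.json
            description: Validate the 'vpc' component vars using JSON Schema
          check-vpc-component-config-with-opa-policy:
            schema_type: opa
            schema_path: vpc/validate-vpc-component.rego
            description: Check the 'vpc' component config using an OPA/Rego policy

The component can then be validated with `atmos validate component vpc -s <stack>`.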


2022-10-03

Brian Ojeda avatar
Brian Ojeda

@Andriy Knysh (Cloud Posse) - You’re killing it! The Atmos validation feature added in v1.9 is better than I hoped! I loved that the validation JSON schemas are per component! Even better, I didn’t even think about the OPA support, but it makes perfect sense!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you


Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the validation is per component, and it also participates in the whole inheritance chain. You can define settings.validation at any level (component, base component, catalog, stack, region, stage, etc.) and then import it. It will be deep-merged into the final config. This way, you can define it once and then reuse it w/o repeating the same config in all components

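For example, a minimal sketch of defining the validation once in a catalog entry and importing it (file names and paths are hypothetical):

# stacks/catalog/vpc.yaml: define validation once for the base component
components:
  terraform:
    vpc:
      settings:
        validation:
          validate-vpc:
            schema_type: jsonschema
            schema_path: vpc/validate-vpc-component.json

# stacks/tenant1/ue2/dev.yaml: import it; 'settings.validation' is deep-merged into the final config
import:
  - catalog/vpc
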
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ve added a preliminary quick start here https://github.com/cloudposse/atmos#validation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(still working on a docs site; there are a lot of atmos features that need to be described)

Brian Ojeda avatar
Brian Ojeda

Just want to say thank you again. This validation feature adds a ton of value to atmos.

We have already started using both json-schema and OPA methods. It has already helped us find several issues with existing stacks. Additionally, I know without any doubt it will save time because we validate all stacks and respective components during the PR workflow.

It’s nice this feature came about at the same time TF finalized the optional object-property type constraints in v1.3. These features complement each other.

Release notes from atmos avatar
Release notes from atmos
10:34:37 PM

v1.9.1 what Add atmos CLI config path and atmos base path parameters to the component processor to support components remote state from remote repos (Note that this does not affect atmos functionality, this is to be used in the utils provider which calls into the atmos code) why The component processor’s code is used by the utils provider to get the component’s remote state We already supported the ATMOS_CLI_CONFIG_PATH and ATMOS_BASE_PATH ENV vars to specify the CLI config file (atmos.yaml) path…


2022-10-04

el avatar

Is there a way to get atmos to use a previously generated Terraform plan? e.g. something like:

atmos terraform plan <component> --stack=uw2-sandbox -out planfile
atmos terraform apply planfile
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, 1 sec

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

run

atmos terraform --help
el avatar

oooh I see, so I’d use the deploy command?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Additions and differences from native terraform:
 - before executing other 'terraform' commands, 'atmos' calls 'terraform init'
 - you can skip over atmos calling 'terraform init' if you know your project is already in a good working state by using the '--skip-init' flag like so: 'atmos terraform <command> <component> -s <stack> --skip-init'
 - 'atmos terraform deploy' command executes 'terraform plan' and then 'terraform apply'
 - 'atmos terraform deploy' command supports '--deploy-run-init=true/false' flag to enable/disable running 'terraform init' before executing the command
 - 'atmos terraform deploy' command sets '-auto-approve' flag before running 'terraform apply'
 - 'atmos terraform apply' and 'atmos terraform deploy' commands support '--from-plan' flag. If the flag is specified, the commands will use the previously generated 'planfile' instead of generating a new 'varfile'
 - 'atmos terraform clean' command deletes the '.terraform' folder, '.terraform.lock.hcl' lock file, and the previously generated 'planfile' and 'varfile' for the specified component and stack
 - 'atmos terraform workspace' command first calls 'terraform init -reconfigure', then 'terraform workspace select', and if the workspace was not created before, it then calls 'terraform workspace new'
 - 'atmos terraform import' command searches for 'region' in the variables for the specified component and stack, and if it finds it, sets 'AWS_REGION=<region>' ENV var before executing the command
 - 'atmos terraform generate backend' command generates the backend file for the component in the stack
 - 'atmos terraform generate varfile' command generates a varfile for the component in the stack
 - 'atmos terraform shell' command configures an environment for the component in the stack and starts a new shell allowing executing all native terraform commands
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
 - 'atmos terraform apply' and 'atmos terraform deploy' commands support '--from-plan' flag. If the flag is specified, the commands will use the previously generated 'planfile' instead of generating a new 'varfile'
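
So, for the example above, something like this should work (component and stack names are illustrative):

atmos terraform plan vpc -s uw2-sandbox
atmos terraform apply vpc -s uw2-sandbox --from-plan

'plan' writes the planfile for the component in the stack, and 'apply' (or 'deploy') with '--from-plan' picks it up instead of generating a new varfile.
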
el avatar

awesome. thank you, Andriy!

2022-10-09

Release notes from atmos avatar
Release notes from atmos
10:44:35 PM

v1.10.0 what Fix remote state for Terraform utils provider Remove all global vars from Go code Implement Logs.Verbose Update terraform commands Refactor why

Remove all global vars from Go code - this fixes remote state for Terraform utils provider Terraform executes a provider data source in a separate process and calls it using RPC But this separate process is only one per provider, so if we call the code to get the remote state of two different components, the same process will be called In the…


2022-10-11

Release notes from atmos avatar
Release notes from atmos
03:44:39 PM

v1.10.1 what Fix atmos CLI config processing Improve logs.verbose why Fix issues with CLI config processing introduced in #210 In Go, a struct is passed by value…


2022-10-12

Release notes from atmos avatar
Release notes from atmos
01:04:36 PM

v1.10.2 what Update atmos describe stacks command why Output atmos stack names (logical stacks derived from the context variables) instead of stack file names In the -s (--stack) filter, support both 1) atmos stack names (logical stacks derived from the context variables); 2) stack file names test

atmos describe stacks --sections none --components vpc

tenant1-ue2-dev:
  components:
    terraform:
      vpc: {}
tenant1-ue2-prod:
  components:
    terraform:
      vpc: {}
tenant1-ue2-staging:…


2022-10-13

Release notes from atmos avatar
Release notes from atmos
01:54:38 AM

v1.10.3 what Update atmos.yaml initialization why For some atmos commands (e.g. atmos version and atmos vendor), don’t process stacks b/c stacks folder might not be present (e.g. during cold-start when using atmos vendor and atmos version in CI/CD systems)


2022-10-19

Roman Orlovskiy avatar
Roman Orlovskiy

Hi all. I would appreciate it if someone could help me understand the logic behind the dns-primary and dns-delegated components when it comes to the branded zones, as mentioned here: https://github.com/cloudposse/terraform-aws-components/tree/master/modules/dns-primary#component-dns-primary
This component is responsible for provisioning the primary DNS zones into an AWS account. By convention, we typically provision the primary DNS zones in the dns account. The primary account for branded zones (e.g. example.com), however, would be in the prod account, while the staging zone (e.g. example.qa) might be in the staging account.
A couple of questions:
• What is the difference between the “primary” and “branded” zones?
• If a branded zone like example.com has to be in the prod AWS account, not the dns account, then which “primary” zones would one need to create in the dns account at all?
  ◦ Does the purpose of the dns account only come down to the domain registration in this scenario?
• Is this the preferred approach compared to managing example.com in the dns account and then using cross-account access to create additional records there?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here we have a few diff things:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Branded/vanity domain vs. service-discovery domain. A branded domain is what your business uses on the internet. A service-discovery domain is what you use in your infra to provision services. Let’s say you have a service-discovery domain for prod, e.g. prod.mycompany.io, and your brand domain mycompany.com. In the DNS zone mycompany.com you add a record pointing it to the service-discovery domain `prod.mycompany.io`
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Primary DNS zone vs. delegated DNS zone. The primary zone is where you provision the service-discovery domains. For example, you can provision the mycompany.io service-discovery domain in the dns account, and then provision prod.mycompany.io in the prod account, delegating DNS to the primary DNS zone
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
What is DNS Delegation?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so in the dns account, you can manage both branded/vanity domains and service-discovery domains, but they serve diff purposes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also provision service-discovery domains in the corresponding account, e.g. provision the mycompany.io domain for prod in the prod account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it all depends on your architecture and requirements

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Roman Orlovskiy

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this is not related to atmos (which is a CLI tool) so should be in a diff channel)

Roman Orlovskiy avatar
Roman Orlovskiy

@Andriy Knysh (Cloud Posse), my apologies for using the wrong chat and thanks a lot for the detailed response, appreciate it!

I think I get it, but I have one more question. At which point exactly does it make sense to have a separate domain for service discovery?

In my case, I don’t plan on having one currently, only a branded domain example.com. So based on what you mentioned, it would make more sense to deploy the primary zone for it in the prod account instead of the dns account, and then use it with the dns-delegated terraform component to create dev.example.com / test.example.com delegated subdomains in separate AWS accounts, and prod.example.com in the prod AWS account.

I will still be able to use terraform/ExternalDNS to automate creation of Route53 records in the delegated zones dev/test/prod, and then just point example.com to prod.example.com in the prod account.

What would be the benefits of the service-discovery domain? I assume security would be the main one, but I am failing to find anything on the web regarding this.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whether to have a separate domain for service discovery, and to have a separate dns account to manage all TLDs, depends on the organization and the level of access you want to provide to diff people

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you create a separate dns account if you want to completely separate the TLDs management, and want to give a separate group of people that can manage domains access to only the dns account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

similar with branded vs. service-discovery domains: branded domains can be managed only by a separate group of people (ideally in a separate dns account). Service-discovery domains are a DevOps playground, and the DevOps engineers manage them. So this is a separation of concerns and access levels

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if your scale is small (a few people managing all of that), then for sure you can use just one TLD (e.g. example.com): provision the zone in prod, provision all prod subdomains in prod, provision dev subdomains in dev, and delegate the dev zone to the prod TLD zone (by updating the name servers)
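
With the Cloud Posse components, that setup might look roughly like this (a sketch; the domain names and values are illustrative, and the two components would live in the prod and dev stacks respectively):

components:
  terraform:
    # in the prod account's stack: the example.com primary zone
    dns-primary:
      vars:
        domain_names:
          - example.com
    # in the dev account's stack: dev.example.com, delegated to the primary zone
    dns-delegated:
      vars:
        zone_config:
          - subdomain: dev
            zone_name: example.com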

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so it all depends on the org scale, security and level of access you want to give to diff people

Roman Orlovskiy avatar
Roman Orlovskiy

That makes a lot of sense, thanks.

Btw, what is the recommended approach for pointing example.com to prod.example.com? A CNAME can’t be created for the apex domain, and a Route53 Alias record can only point to a record in the same hosted zone, while the prod.example.com Alias record will be created using ExternalDNS in the prod.example.com hosted zone.

Roman Orlovskiy avatar
Roman Orlovskiy

I wanted to avoid having to create additional resources in between, like ALBs or maybe AWS Global Accelerator, and then just create A records with static IPs, but this might be overkill in my case. I am considering just giving ExternalDNS access to the example.com DNS zone for it to create the Alias record pointing to the k8s ALB by itself, but that is not ideal as I understand.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your case yes, having just one zone and allowing external-dns to control it is the easiest solution. And you don’t have branded vs. service-discovery domains; it’s just one domain with subdomains (so don’t complicate it with DNS delegation and pointing the domain to one of its subdomains, for which CNAME will not work and you’d need something like Global Accelerator with a static IP)

Roman Orlovskiy avatar
Roman Orlovskiy

Thank you! Really appreciate your help. You guys are doing amazing work. Cheers!

OliverS avatar
OliverS

One of my customers has the following architecture and workflow, and I’m wondering if atmos could be a good way to automate it:

• there is a set of “component” git repos:
  ◦ each repo is a standalone “capability” tailored to the business, such as “provides a datalake”, “provides a static website”, or “provides a VPC for all the other components”
  ◦ the git repo contains service code (e.g. a Dockerfile that creates a .NET Core service), PLUS infra code that provisions the resources required to run that component/module (e.g. an EC2 instance, an RDS db, etc.)

• there is a set of stack git repos:
  ◦ each stack git repo is a unique combination of the above “modules” and is standalone, i.e. it has its own VPC, domain names, certs, etc.; it may be in a separate or the same AWS account as another stack, depending on the situation
  ◦ the stack is created by pulling code from the individual component git repos, creating VM and docker images, and terraform-applying each component’s infra code to provision infra that will use the VM/docker images
  ◦ a stack for a specific environment (staging, prod, etc.) therefore consists of multiple tfstates (each component is a separate “terraform apply”), all stored in s3
  ◦ each stack git repo has a separate folder for each “instance” (i.e. environment) of that stack

• there is a set of aws account git repos:
  ◦ each account git repo specifies which stacks it contains, and has infra code to deploy basics like IAM roles for specific stacks and a bucket for the stack provisioning

The git repos are not public; they are private repos in GitHub. There are no definite plans to open-source any of these.

Based on what I’m seeing about atmos, it is plausible that atmos could be used to bring together the various pieces:

• yaml file for a stack would specify which modules a stack is made of, and their versions (eg git commit sha) for each environment (stack instance)

• “applying” atmos on that yaml for a specific environment would fetch the code from the component git repos into the stack git repo clone folder for the env, and run the right commands (packer, terraform etc) in the right order

• there would be a separate atmos yaml file for account-level setup

Any gotchas that could prevent me from using atmos to do this?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i guess something like that can be done with atmos. Many of those items we already implement using root components (which use modules, remote or local) and stacks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, this could be brought together.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I would suggest reducing the number of repos; at a minimum, combining repos for some group of accounts, if not all of them. The stack git repos are really just YAML stacks in our model. We would avoid the terraliths if possible.
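
For instance, one “stack instance” folder could collapse into a single atmos stack manifest along these lines (a rough sketch; all names are hypothetical):

# stacks/stack-1/dev.yaml
import:
  - catalog/module-1
  - catalog/module-2

vars:
  stage: dev

components:
  terraform:
    module-1:
      vars:
        enabled: true
    module-2:
      vars:
        enabled: true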

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The aws account git repos are just another component that provisions and manages the accounts. We used to have a separate git repo per account (~4 years ago), but we no longer recommend it; it’s not a gitops-friendly solution.

OliverS avatar
OliverS

My client’s customers each have access to the git repo for their aws account and stack instances, so we can’t combine them (git does not support that level of access control, and clients must only have access to their own stuff).

Example to illustrate my original post:

• terraform root modules, 1 git repo per module:
  ◦ module-1
  ◦ module-2
  ◦ module-3

• stacks (1 git repo per stack, all stacks have their tfstate in s3, each stack is for a separate customer of my client)
  ◦ stack-1
    ▪︎ dev folder (dev “instance” for this stack)
      • copies from HEAD of modules 1 and 2
      • references remote state of account 1
      • uses aws creds for the role created for and by account 1
    ▪︎ staging folder (staging “instance” for this stack)
      • copies from release candidate commits/tags of modules 1 and 2
      • references remote state of account 2
      • uses aws creds for the role created for and by account 2
  ◦ stack-2
    ▪︎ dev folder (dev “instance” for this stack)
      • copies from HEAD of modules 2 and 3
      • references remote state of account 3
      • uses aws creds for the role created for and by account 3
  ◦ stack-3 (has dev, staging and prod folders like the other stacks, with different combinations of modules)

• aws accounts (1 git repo per aws account, all have their tfstate in s3, each account is for a separate customer of my client; account tf requires significantly more sensitive permissions than stacks, such as IAM role create/delete):
  ◦ aws-account-1 (customer 1)
  ◦ aws-account-2 (customer 2)
  ◦ aws-account-3 (in-house)

OliverS avatar
OliverS

If we cannot combine the above into a smaller number of git repos due to customer privacy concerns, what friction do you foresee if I try using atmos to drive this with the above set of git repos?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

better to schedule a call with me and review

2022-10-20

Release notes from atmos avatar
Release notes from atmos
02:44:40 PM

v1.10.4 what Parse atmos.yaml CLI config when executing atmos vendor command Improve OPA policy evaluation and error handling why When executing atmos vendor pull command, we need to parse atmos.yaml and calculate the paths to stacks and components folders to write the vendored files into the correct component folder Add timeout to OPA policy evaluation (it will show a descriptive error message instead of hanging forever if Rego policy is not correctly defined/formatted or Regex in Rego is not…


2022-10-21

el avatar

hey all :wave: I have an S3 backend already provisioned (using the CloudPosse tfstate-backend module), but I’m struggling to get atmos to use it with the following configuration:

import:
  - test/shared/automation/_defaults

vars:
  environment: uw2
  namespace: test
  region: us-west-2

terraform:
  vars: {}

  backend_type: s3
  backend:
    s3:
      encrypt: true
      bucket: "test-uw2-automation"
      key: "terraform.tfstate"
      dynamodb_table: "test-uw2-automation-lock"
      acl: "bucket-owner-full-control"
      region: "us-west-2"
      profile: "automation"
      role_arn: null

components:
  terraform:
    tfstate-backend:
      vars:
        profile: "automation"
        logging_bucket_enabled: true
    vpc:
      vars:
        enabled: true
        ipv4_primary_cidr_block: 10.5.0.0/16
        subnet_type_tag_key: test/subnet/type
        nat_gateway_enabled: true
        nat_instance_enabled: false
        max_subnet_count: 3

Running atmos terraform plan vpc -s <stack> doesn’t seem to pick up the S3 backend configuration; I’m running the plan after assuming a different role that doesn’t even have access to the specified backend bucket and DynamoDB table.

Any idea what I might be missing here? Thanks in advance for the help

el avatar

ah I think it’s because I had auto_generate_backend_file set to false, since I copied it from the example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, set it to true
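
i.e. in atmos.yaml:

components:
  terraform:
    # Can also be set using the 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var
    auto_generate_backend_file: true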

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you set it to false you can run `atmos terraform generate backend vpc -s xxx`

el avatar

thanks! I’m working through deploying the vpc component in two separate accounts (authenticating with AWS SSO) and I’m trying to get atmos to switch between them automatically. I think it should work if the backend file is generated automatically each time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, if you configure diff backend.s3 sections in diff accounts (diff YAML files). Also, the aws provider needs to use a dynamic role or profile to assume into the diff accounts
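
e.g. each account’s stack defaults can override the backend section, which is deep-merged like the rest of the config (a sketch with made-up values):

# stacks/orgs/acme/prod/_defaults.yaml (hypothetical layout)
terraform:
  backend:
    s3:
      bucket: "acme-prod-tfstate"
      dynamodb_table: "acme-prod-tfstate-lock"
      profile: "acme-prod-terraform"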

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

usually we do it like this

provider "aws" {
  region = var.region

  assume_role {
    role_arn = module.iam_roles.terraform_role_arn
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the role_arn is dynamic as well

el avatar

gotcha, so does that require an identity role that you assume first that has permissions to assume the roles specified in module.iam_roles in each account?

also, does this setup play nicely with Leapp?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we assume identity role first, and we use Leapp. You can def do this, or use separate roles without going thru primary-delegated roles

el avatar

gotcha, that makes sense. thanks! The other piece I was missing was passing the profile variable into the aws provider to make sure it’s always using the same profile as the backend.

2022-10-26

Release notes from atmos avatar
Release notes from atmos
01:54:41 PM

v1.10.5 what In atmos helmfile commands, first check if the context ENV vars are already defined. If they are not, set them in the code why Some users of atmos define the context ENV vars (e.g. REGION) in the caller scripts, and atmos overrides them. This fix will first check if the ENV vars are not defined by the parent process before setting them


2022-10-31

gabe avatar

Working through the tutorials for atmos - https://docs.cloudposse.com/tutorials/atmos-getting-started/
Has anybody come across this issue now that it appears an atmos.yaml is required for running atmos?

Sample components (fetch-location, fetch-weather, etc.) are not validating. @Erik Osterman (Cloud Posse) - I think I can work through it, but wanted to inform your team that the tutorials will probably need some updates.

 ✗ . [none] (HOST) 02-atmos ⨠ atmos validate component fetch-location --stack example                                                                     

Searched all stack YAML files, but could not find config for the component 'fetch-weather' in the stack 'example'.
Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos.yaml is required for running atmos for the following reasons:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. It contains the paths to components and stacks, and all companies have diff config for that, so we can’t hardcode it in Go code
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. It contains definitions for atmos custom commands, which get parsed before atmos executes them (and they are available in atmos help)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thank you for pointing that out, the tutorial needs to be updated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@gabe are you using the tutorial docker image?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Dan Miller (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) just updated them, so they should be working.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it would be rad to add a GHA to validate them.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

looks like the latest version of atmos has a requirement for the atmos.yaml config file, which wasn’t required when the tutorial was updated. We should be pinning the atmos version in the tutorial’s Dockerfile, so that updates to atmos can’t break them / add additional steps. I’ll add a PR shortly for that

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

updated now. The latest version of the repo and the image should have the atmos version pinned to 1.4.20

gabe avatar

Thanks Dan. Let me run through it.

gabe avatar

Running through the tutorials, it looks like there are still some updates that need to be made. Atmos v1.4.20 requires a name_pattern for stacks, which the example.yaml stack does not adhere to; this results in the following error when running through the Getting Started with Atmos tutorial (/tutorials/02-atmos):

stack name pattern must be provided in 'stacks.name_pattern' config or 'ATMOS_STACKS_NAME_PATTERN' ENV variable
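
A minimal sketch of the fix is to set the pattern in atmos.yaml (or via the ENV var):

# atmos.yaml
stacks:
  # Can also be set using the 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"
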
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use https://github.com/cloudposse/atmos/blob/master/atmos.yaml - this is the latest CLI config

```
# CLI config is loaded from the following locations (from lowest to highest priority):
#   system dir ('/usr/local/etc/atmos' on Linux, '%LOCALAPPDATA%/atmos' on Windows)
#   home dir (~/.atmos)
#   current directory
#   ENV vars
#   Command-line arguments
#
# It supports POSIX-style Globs for file names/paths (double-star '**' is supported)
# https://en.wikipedia.org/wiki/Glob_(programming)

# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are considered paths relative to 'base_path'.
base_path: "./examples/complete"

components:
  terraform:
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/terraform"
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
    apply_auto_approve: false
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
    deploy_run_init: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
    init_run_reconfigure: true
    # Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
    auto_generate_backend_file: false
  helmfile:
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_BASE_PATH' ENV var, or '--helmfile-dir' command-line argument
    # Supports both absolute and relative paths
    base_path: "components/helmfile"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_KUBECONFIG_PATH' ENV var
    kubeconfig_path: "/dev/shm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_HELM_AWS_PROFILE_PATTERN' ENV var
    helm_aws_profile_pattern: "{namespace}-{tenant}-gbl-{stage}-helm"
    # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN' ENV var
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

stacks:
  # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
  included_paths:
    - "orgs/**/*"
  # Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  name_pattern: "{tenant}-{environment}-{stage}"

workflows:
  # Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks/workflows"

logs:
  verbose: false
  colors: true

# Custom CLI commands
commands:
  - name: tf
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: plan
        description: This command plans terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        env:
          - key: ENV_VAR_1
            value: ENV_VAR_1_value
          - key: ENV_VAR_2
            # 'valueCommand' is an external command to execute to get the value for the ENV var
            # Either 'value' or 'valueCommand' can be specified for the ENV var, but not both
            valueCommand: echo ENV_VAR_2_value
        # steps support Go templates
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}
  - name: terraform
    description: Execute 'terraform' commands
    # subcommands
    commands:
      - name: provision
        description: This command provisions terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # ENV var values support Go templates
        env:
          - key: ATMOS_COMPONENT
            value: "{{ .Arguments.component }}"
          - key: ATMOS_STACK
            value: "{{ .Flags.stack }}"
        steps:
          - atmos terraform plan $ATMOS_COMPONENT -s $ATMOS_STACK
          - atmos terraform apply $ATMOS_COMPONENT -s $ATMOS_STACK
  - name: play
    description: This command plays games
    steps:
      - echo Playing...
    # subcommands
    commands:
      - name: hello
        description: This command says Hello world
        steps:
          - echo Hello world
      - name: ping
        description: This command plays ping-pong
        # If 'verbose' is set to 'true', atmos will output some info messages to the console before executing the command's steps
        # If 'verbose' is not defined, it implicitly defaults to 'false'
        verbose: true
        steps:
          - echo Playing ping-pong...
          - echo pong
  - name: show
    description: Execute 'show' commands
    # subcommands
    commands:
      - name: component
        description: Execute 'show component' command
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        # If a custom command defines 'component_config' section with 'component' and 'stack',
        # 'atmos' generates the config for the component in the stack
        # and makes it available in {{ .ComponentConfig.xxx.yyy.zzz }} Go template variables,
        # exposing all the component sections (which are also shown by 'atmos describe component' command)
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        # steps support Go templates
        steps:
          - 'echo Atmos component: {{ .Arguments.component }}'
          - 'echo Atmos stack: {{ .Flags.stack }}'
          - 'echo Terraform component: {{ .ComponentConfig.component }}'
          - 'echo Backend S3 bucket: {{ .ComponentConfig.backend.bucket }}'
          - 'echo Terraform workspace: {{ .ComponentConfig.workspace }}'
          - 'echo Namespace: {{ .ComponentConfig.vars.namespace }}'
          - 'echo Tenant: {{ .ComponentConfig.vars.tenant }}'
          - 'echo Environment: {{ .ComponentConfig.vars.environment }}'
          - 'echo Stage: {{ .ComponentConfig.vars.stage }}'
          - 'echo settings.spacelift.workspace_enabled: {{ .ComponentConfig.settings.spacelift.workspace_enabled }}'
          - 'echo Dependencies: {{ .ComponentConfig.deps }}'
          - 'echo settings.config.is_prod: {{ .ComponentConfig.settings.config.is_prod }}'

# Integrations
integrations:
  # Atlantis integration
  # https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html
  atlantis:
    # Path and name of the Atlantis config file 'atlantis.yaml'
    # Supports absolute and relative paths
    # All the intermediate folders will be created automatically (e.g. 'path: /config/atlantis/atlantis.yaml')
    # Can be overridden on the command line by using '--output-path' command-line argument in 'atmos atlantis generate repo-config' command
    # If not specified (set to an empty string/omitted here, and set to …
```

gabe avatar

Yes, we can use an atmos configuration file, but the example stack in the tutorial is not currently set up with a tenant, environment, or stage, so the tutorial example needs to be updated.

The stack name pattern '{environment}-{stage}' specifies 'environment', but the stack 'example' does not have an environment defined
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, the tutorial needs to be updated

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

fyi, this is a complete working example, it’s used in all atmos tests https://github.com/cloudposse/atmos/tree/master/examples/complete

gabe avatar

agreed, the tutorial still needs to be updated. I just wanted to raise that with @Dan Miller (Cloud Posse) and @Erik Osterman (Cloud Posse), as Dan updated the tutorial to pin the version of atmos, but there are additional items that need to be addressed and corrected within the tutorial.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

ty. I’ll confirm later today and update if necessary

gabe avatar

let me know if it makes better sense to log an issue on GitHub - happy to do that

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

~~I’m not able to reproduce the error when running from Geodesic with atmos version 1.4.20. I’ve verified that no atmos.yaml exists on the image as well, yet I’m not getting any errors.~~ ~~@gabe when you’re in the Geodesic container, what does `atmos version` return?~~

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I can reproduce it now. Will update shortly
