#atmos (2022-03)

2022-03-05

Release notes from atmos
05:14:17 AM

v1.3.30

what

- Add atmos terraform generate varfile command
- Add atmos helmfile generate varfile command
- The commands support the -f argument (--file for the long version) to write to a specific file instead of generating the file name from the context
- General cleanup

why

- The commands allow generating varfiles for terraform and helmfile components and writing them to files in the component folders, or to any file name specified by -f
- Use atmos patterns of generate and describe to…
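For reference, the new commands from this release can be invoked along these lines (the component and stack names below are placeholders, not taken from the release note):

```
# Generate a varfile for a terraform component in a stack
atmos terraform generate varfile <component> -s <stack>

# Generate a varfile for a helmfile component, writing to a specific file
atmos helmfile generate varfile <component> -s <stack> -f /path/to/varfile.yaml
```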


2022-03-11

Release notes from atmos
03:54:21 PM

v1.4.0

what

- Add workflows
- Define workflows in YAML
- Support two types of workflows: atmos and shell
- Allow specifying the atmos stack in 3 different ways:
  - on the command line in the command attribute in the workflow steps
  - as the stack attribute in each workflow step
  - as the stack attribute in the workflow itself

Update atmos terraform shell command

why

- Allow sequential execution of atmos and shell commands
- Update atmos terraform shell command to use the bash shell by default on Unix systems if the SHELL ENV…



2022-03-30

Josh Holloway

:wave: I’m just getting started with Atmos - I’m wondering if there’s a way to essentially do atmos terraform deploy-all -s <stack> without manually curating a workflow for each stack

RB

I don’t believe this is possible at the moment

Erik Osterman (Cloud Posse)

No, because it’s totally ambiguous.

I wish we could do that, but in the real world, order matters.

There’s no easy way to determine that order without being explicit about it.

It’s on our roadmap to support something like this, but nothing is scheduled.

The reason we implemented workflows is because of that order. The destroy order matters too, and may require other operations (e.g. stopping DMS tasks) before destroying.

Josh Holloway avatar
Josh Holloway

Yeah that’s fair enough… I guess I was thinking more along the lines of something like:

workflows:
  deploy-all-iam:
    steps:
      - command: terraform deploy iam-resource-a
      - command: terraform deploy iam-resource-b

And then inferring the stack from the -s param.

atmos workflow deploy-all-iam -s ue1-dev

Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

yep, agree - that would make sense. So I believe we support that.

Andriy Knysh (Cloud Posse)
workflows:

  terraform-plan-test-component-override-2-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-2' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-dev
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-staging
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-prod
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-dev
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-staging
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-prod

  terraform-plan-test-component-override-3-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-3' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-3
        # The step `stack` attribute overrides any stack in the `command` (if specified)
        stack: tenant1-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-prod
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-prod

  terraform-plan-all-tenant1-ue2-dev:
    description: Run 'terraform plan' on all components in the 'tenant1-ue2-dev' stack
    # The workflow `stack` attribute overrides any stack in the `command` (if specified)
    # The step `stack` attribute overrides any stack in the `command` (if specified) and the workflow `stack` attribute
    stack: tenant1-ue2-dev
    steps:
      - command: echo Running terraform plan on the component 'test/test-component' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component
        # Type `atmos` is implicit, you don't have to specify it
        # Other supported types: 'shell'
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-2' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-2
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-3' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-3
        type: atmos

  test-1:
    description: Test workflow
    steps:
      - command: echo Command 1
        type: shell
      - command: echo Command 2
        type: shell
      - command: echo Command 3
        type: shell
      - command: echo Command 4
        type: shell
      - command: echo Command 5
        type: shell

Andriy Knysh (Cloud Posse)

we support defining the stack inside workflows in different places, but not as a -s command line param

Andriy Knysh (Cloud Posse)

We can add that

Erik Osterman (Cloud Posse)

I think this should be supported so the workflows can be DRY across stacks

Josh Holloway

Yeah so my “logic” behind this was a reusable workflow for bootstrapping a new account… Essentially, we use a deployer-role from our identity account and have the whole :chicken:/:egg: problem when first running.

It would be cool to be able to do something like: atmos workflow deployer-role-bootstrap -f bootstrap -s gbl-dev

# workflows/bootstrap.yaml
workflows:
  deployer-role-bootstrap:
    steps:
      # We need to disable role assumption since the role doesn't exist
      - command: ./some-script-disable-role-assumption.sh
        type: shell
      - command: terraform deploy iam/deployer-role
      - command: ./some-script-to-remove-bootstrap-vars.sh
        type: shell
      # re-init to migrate state to the s3 backend
      - command: terraform init -reconfigure
Josh Holloway

For additional context, I’ve got a deployer role component that has a boolean var: assume_role that will control whether the provider assumes a role based on other vars / context

For the initial bootstrap workflow, atmos / TF would be run using the target account creds; for all other runs it would use the identity account and assume the role

Josh Holloway

Updating this with what I currently have working:

# workflows/bootstrap.yaml

workflows:
  plan:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform plan iam/deployer-role
        stack: gbl-dev
  apply:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform apply iam/deployer-role
        stack: gbl-dev
  complete:
    steps:
      - command: ./scripts/bootstrap-stack.sh disable -s gbl-dev
        type: shell
      - command: terraform init iam/deployer-role -reconfigure
        stack: gbl-dev
      - command: terraform deploy iam/deployer-role
        stack: gbl-dev

Goes something like this:

  1. With creds for gbl-dev : atmos workflow -f bootstrap apply
  2. With creds for gbl-identity: atmos workflow -f bootstrap complete
Josh Holloway

scripts/bootstrap-stack.sh loads the target stack YAML, appends the new import catalog/bootstrap, and looks something like:

#!/usr/bin/env python3

import argparse
import yaml

parser = argparse.ArgumentParser(description='Configure the stack for bootstrapping.')
parser.add_argument('action', choices=['enable','disable'])
parser.add_argument('-s', '--stack', help='The name of the stack to bootstrap')

args = parser.parse_args()

stack_file = 'stacks/{}.yaml'.format(args.stack)

print('Preparing {} for bootstrapping'.format(stack_file))

f = open(stack_file)
y = yaml.safe_load(f)
f.close()

bootstrap_exists = 'catalog/bootstrap' in y['import']

if args.action == 'enable':
    if bootstrap_exists:
        print('catalog/bootstrap already exists within import block. Skipping')
        exit()

    y['import'].append('catalog/bootstrap')

elif args.action == 'disable':
    if not bootstrap_exists:
        print('catalog/bootstrap does not exist within import block. Skipping')
        exit()

    y['import'].remove('catalog/bootstrap')



with open(stack_file, 'w') as f:
    print('Updating {}'.format(stack_file))
    yaml.dump(y, f, default_flow_style=False)
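The enable/disable logic in that script boils down to an idempotent toggle on the stack's import list. A minimal sketch as a pure function (the function name and signature are mine, not from the script above):

```python
def toggle_bootstrap(imports, action, entry="catalog/bootstrap"):
    """Return a copy of the stack's import list with `entry` appended
    (action == 'enable') or removed (action == 'disable').
    Idempotent: enabling twice or disabling a missing entry is a no-op."""
    imports = list(imports)  # don't mutate the caller's list
    if action == "enable" and entry not in imports:
        imports.append(entry)
    elif action == "disable" and entry in imports:
        imports.remove(entry)
    return imports
```

Keeping the toggle pure makes it trivial to unit test, with the YAML load/dump handled separately as in the script.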
Josh Holloway

Anyway.. enjoy my cruft

Andriy Knysh (Cloud Posse)

we’ll add -s to the workflow command soon

Soren Jensen
09:27:38 PM

@Soren Jensen has joined the channel
