#atmos (2022-03)
2022-03-05
v1.3.30

what

- Add `atmos terraform generate varfile` command
- Add `atmos helmfile generate varfile` command
- The commands support the `-f` argument to write to a specific file instead of generating the file name from the context
- General cleanup

why

- The commands allow generating a varfile for terraform and helmfile components and writing it to files in the components folders, or to any file name specified by the `-f` command line argument (`--file` for the long version)
- Use atmos patterns of generate and describe to…
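For example (the component and stack names below, and the output file names, are hypothetical):

```
atmos terraform generate varfile test/test-component -s tenant1-ue2-dev
atmos terraform generate varfile test/test-component -s tenant1-ue2-dev -f ./test-component.tfvars.json
atmos helmfile generate varfile echo-server -s tenant1-ue2-dev -f ./echo-server.vars.yaml
```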
2022-03-11
v1.4.0

what

- Add workflows
- Define workflows in YAML
- Support two types of workflows: `atmos` and `shell`
- Allow specifying the atmos stack in 3 different ways:
  - on the command line in the `command` attribute in the workflow steps
  - as the `stack` attribute in each workflow step
  - as the `stack` attribute in the workflow itself
- Update `atmos terraform shell` command

why

- Allow sequential execution of atmos and shell commands
- Update `atmos terraform shell` command to use the bash shell by default on Unix systems if the SHELL ENV…
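A minimal sketch of the three options (the `vpc` component and the stack name are made up):

```yaml
workflows:
  plan-vpc:
    # 3) stack defined on the workflow itself
    stack: tenant1-ue2-dev
    steps:
      # 1) stack passed on the command line in the `command` attribute
      - command: terraform plan vpc -s tenant1-ue2-dev
      # 2) stack defined on the individual step
      - command: terraform plan vpc
        stack: tenant1-ue2-dev
      # `type: atmos` is the default; `shell` runs arbitrary commands
      - command: echo done
        type: shell
```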
2022-03-30
:wave: I’m just getting started with Atmos. I’m wondering if there’s a way to essentially do `atmos terraform deploy-all -s <stack>` without manually curating a workflow for each stack
I don’t believe this is possible at the moment
No, because it’s totally ambiguous
I wish we could do that, but in the real world, order matters
there’s no easy way to determine that order without being explicit about it
It’s on our roadmap to support something like this, but nothing scheduled
So that ordering is exactly why we implemented workflows
and the destroy order matters too; it may require other operations (e.g. stopping DMS tasks) before destroying
Yeah that’s fair enough… I guess I was thinking more along the lines of something like:

```yaml
workflows:
  deploy-all-iam:
    steps:
      - command: terraform deploy iam-resource-a
      - command: terraform deploy iam-resource-b
```

And then inferring the stack from the `-s` param:

```
atmos workflow deploy-all-iam -s ue1-dev
```
@Andriy Knysh (Cloud Posse)
yep, agree - that would make sense.
So I believe we support that.
```yaml
workflows:
  terraform-plan-test-component-override-2-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-2' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-dev
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-staging
      - command: terraform plan test/test-component-override-2 -s tenant1-ue2-prod
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-dev
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-staging
      - command: terraform plan test/test-component-override-2 -s tenant2-ue2-prod
  terraform-plan-test-component-override-3-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-3' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-3
        # The step `stack` attribute overrides any stack in the `command` (if specified)
        stack: tenant1-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-prod
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-prod
  terraform-plan-all-tenant1-ue2-dev:
    description: Run 'terraform plan' on all components in the 'tenant1-ue2-dev' stack
    # The workflow `stack` attribute overrides any stack in the `command` (if specified)
    # The step `stack` attribute overrides any stack in the `command` (if specified) and the workflow `stack` attribute
    stack: tenant1-ue2-dev
    steps:
      - command: echo Running terraform plan on the component 'test/test-component' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component
        # Type `atmos` is implicit, you don't have to specify it
        # Other supported types: 'shell'
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-2' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-2
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-3' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-3
        type: atmos
  test-1:
    description: Test workflow
    steps:
      - command: echo Command 1
        type: shell
      - command: echo Command 2
        type: shell
      - command: echo Command 3
        type: shell
      - command: echo Command 4
        type: shell
      - command: echo Command 5
        type: shell
```
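For reference, a workflow from a file like this is invoked by name with the `atmos workflow` command; e.g. to run the `test-1` workflow (the `workflow1` file name here is an assumption):

```
atmos workflow test-1 -f workflow1
```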
we support defining `stack` inside workflows in different places, but not as a `-s` command line param
I think this should be supported so the workflows can be DRY across stacks
Yeah so my “logic” behind this was a re-usable workflow for bootstrapping a new account… Essentially, we use a deployer-role from our identity account and have the whole chicken/egg problem when first running.
It would be cool to be able to do something like: atmos workflow deployer-role-bootstrap -f bootstrap -s gbl-dev
```yaml
# workflows/bootstrap.yaml
workflows:
  deployer-role-bootstrap:
    steps:
      # We need to disable role assumption since the role doesn't exist
      - command: ./some-script-disable-role-assumption.sh
        type: shell
      - command: terraform deploy iam/deployer-role
      - command: ./some-script-to-remove-bootstrap-vars.sh
        type: shell
      # re-init to migrate state to the s3 backend
      - command: terraform init -reconfigure
```
For additional context, I’ve got a deployer-role component with a boolean var, `assume_role`, that controls whether the provider assumes a role based on other vars / context.
For the initial bootstrap workflow, atmos / TF would be run using the target account creds; for all other runs it would use the identity account and assume the role.
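A minimal sketch of what the imported `catalog/bootstrap` file (toggled by the script below) might look like; the exact contents here are hypothetical:

```yaml
# stacks/catalog/bootstrap.yaml (hypothetical contents)
components:
  terraform:
    iam/deployer-role:
      vars:
        # Run with the target account's own creds instead of assuming the deployer role
        assume_role: false
```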
Updating this with what I currently have working:
```yaml
# workflows/bootstrap.yaml
workflows:
  plan:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform plan iam/deployer-role
        stack: gbl-dev
  apply:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform apply iam/deployer-role
        stack: gbl-dev
  complete:
    steps:
      - command: ./scripts/bootstrap-stack.sh disable -s gbl-dev
        type: shell
      - command: terraform init iam/deployer-role -reconfigure
        stack: gbl-dev
      - command: terraform deploy iam/deployer-role
        stack: gbl-dev
```
Goes something like this:
- With creds for `gbl-dev`: `atmos workflow -f bootstrap apply`
- With creds for `gbl-identity`: `atmos workflow -f bootstrap complete`
`scripts/bootstrap-stack.sh` loads the target stack YAML and appends a new import, `catalog/bootstrap`. It looks something like:
```python
#!/usr/bin/env python3
import argparse
import yaml

parser = argparse.ArgumentParser(description='Configure the stack for bootstrapping.')
parser.add_argument('action', choices=['enable', 'disable'])
parser.add_argument('-s', '--stack', help='The name of the stack to bootstrap')
args = parser.parse_args()

stack_file = 'stacks/{}.yaml'.format(args.stack)
print('Preparing {} for bootstrapping'.format(stack_file))

with open(stack_file) as f:
    y = yaml.safe_load(f)

bootstrap_exists = 'catalog/bootstrap' in y['import']

if args.action == 'enable':
    if bootstrap_exists:
        print('catalog/bootstrap already exists within import block. Skipping')
        exit()
    y['import'].append('catalog/bootstrap')
elif args.action == 'disable':
    if not bootstrap_exists:
        print('catalog/bootstrap does not exist within import block. Skipping')
        exit()
    y['import'].remove('catalog/bootstrap')

with open(stack_file, 'w') as f:
    print('Updating {}'.format(stack_file))
    yaml.dump(y, f, default_flow_style=False)
```