#atmos (2022-03)
2022-03-02
2022-03-05
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.3.30

**what**
- Add `atmos terraform generate varfile` command
- Add `atmos helmfile generate varfile` command
- The commands support a `-f` argument to write to a specific file instead of generating the file name from the context
- General cleanup

**why**
- The commands allow generating varfiles for terraform and helmfile components and writing them to files in the component folders, or to any file name specified by the `-f` command line argument (`--file` for the long version)
- Use atmos patterns of `generate` and `describe` to…
2022-03-11
![Release notes from atmos avatar](https://a.slack-edge.com/80588/img/services/rss_72.png)
v1.4.0

**what**
- Add workflows
- Define workflows in YAML
- Support two types of workflows: `atmos` and `shell`
- Allow specifying the atmos stack in 3 different ways: on the command line, in the `command` attribute of the workflow steps, as a `stack` attribute in each workflow step, or as a `stack` attribute in the workflow itself
- Update `atmos terraform shell` command

**why**
- Allow sequential execution of `atmos` and `shell` commands
- Update `atmos terraform shell` command to use the bash shell by default on Unix systems if the `SHELL` ENV…
2022-03-30
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
:wave: I’m just getting started with Atmos - I’m wondering if there’s a way to essentially do `atmos terraform deploy-all -s <stack>` without manually curating a workflow for each stack
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
i don’t believe this is possible at the moment
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
No, because it’s totally ambiguous
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
I wish we could do that, but in the real world, order matters
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
there’s no easy way to determine that order without being explicit about it
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
It’s on our roadmap to support something like this, but nothing scheduled
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
So the reason we implemented workflows is because of that order
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
and the destroy order too matters, and may require other operations (e.g. stopping DMS tasks) before destroying
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
Yeah that’s fair enough… I guess I was thinking more along the lines of something like:

```yaml
workflows:
  deploy-all-iam:
    steps:
      - command: terraform deploy iam-resource-a
      - command: terraform deploy iam-resource-b
```
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
And then inferring the stack from the `-s` param: `atmos workflow deploy-all-iam -s ue1-dev`
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Andriy Knysh (Cloud Posse)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
yep, agree - that would make sense.
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
So I believe we support that.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
```yaml
workflows:
  terraform-plan-test-component-override-2-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-2' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-3 -s tenant1-ue2-dev
      - command: terraform plan test/test-component-override-3 -s tenant1-ue2-staging
      - command: terraform plan test/test-component-override-3 -s tenant1-ue2-prod
      - command: terraform plan test/test-component-override-3 -s tenant2-ue2-dev
      - command: terraform plan test/test-component-override-3 -s tenant2-ue2-staging
      - command: terraform plan test/test-component-override-3 -s tenant2-ue2-prod
  terraform-plan-test-component-override-3-all-stacks:
    description: Run 'terraform plan' on 'test/test-component-override-3' component in all stacks
    steps:
      - command: terraform plan test/test-component-override-3
        # The step `stack` attribute overrides any stack in the `command` (if specified)
        stack: tenant1-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant1-ue2-prod
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-dev
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-staging
      - command: terraform plan test/test-component-override-3
        stack: tenant2-ue2-prod
  terraform-plan-all-tenant1-ue2-dev:
    description: Run 'terraform plan' on all components in the 'tenant1-ue2-dev' stack
    # The workflow `stack` attribute overrides any stack in the `command` (if specified)
    # The step `stack` attribute overrides any stack in the `command` (if specified) and the workflow `stack` attribute
    stack: tenant1-ue2-dev
    steps:
      - command: echo Running terraform plan on the component 'test/test-component' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component
        # Type `atmos` is implicit, you don't have to specify it
        # Other supported types: 'shell'
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-2' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-2
        type: atmos
      - command: echo Running terraform plan on the component 'test/test-component-override-3' in the stack 'tenant1-ue2-dev'
        type: shell
      - command: terraform plan test/test-component-override-3
        type: atmos
  test-1:
    description: Test workflow
    steps:
      - command: echo Command 1
        type: shell
      - command: echo Command 2
        type: shell
      - command: echo Command 3
        type: shell
      - command: echo Command 4
        type: shell
      - command: echo Command 5
        type: shell
```
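The stack-precedence rules in the comments of that workflow file (step `stack` beats workflow `stack`, which beats any `-s <stack>` inside the command) can be sketched as a small resolver. This is a hypothetical illustration, not atmos internals; the function names and dict shapes are invented for the example:

```python
# Sketch of the stack-precedence rules described above.
# step stack > workflow stack > `-s` flag inside the command string.
# `resolve_stack`/`stack_from_command` are illustrative names, not atmos code.
import shlex


def stack_from_command(command):
    """Return the value following `-s` in a command string, if any."""
    tokens = shlex.split(command)
    for i, tok in enumerate(tokens):
        if tok == "-s" and i + 1 < len(tokens):
            return tokens[i + 1]
    return None


def resolve_stack(step, workflow):
    """Apply the precedence: step attribute, then workflow attribute, then -s flag."""
    return (
        step.get("stack")
        or workflow.get("stack")
        or stack_from_command(step.get("command", ""))
    )


workflow = {"stack": "tenant1-ue2-dev"}
step = {"command": "terraform plan test/test-component-override-3 -s tenant2-ue2-prod"}
print(resolve_stack(step, workflow))  # tenant1-ue2-dev: the workflow stack wins over -s
```

With no workflow or step `stack` attribute at all, the `-s` embedded in the command is the fallback.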
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we support defining `stack` inside workflows in diff places, but not as a `-s` command line param
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
I think this should be supported so the workflows can be DRY across stacks
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
Yeah so my “logic” behind this was a re-usable workflow for bootstrapping a new account… Essentially, we use a deployer-role from our identity account and have the whole :chicken:/:egg: problem when first running.
It would be cool to be able to do something like: `atmos workflow deployer-role-bootstrap -f bootstrap -s gbl-dev`

```yaml
# workflows/bootstrap.yaml
workflows:
  deployer-role-bootstrap:
    steps:
      # We need to disable role assumption since the role doesn't exist
      - command: ./some-script-disable-role-assumption.sh
        type: shell
      - command: terraform deploy iam/deployer-role
      - command: ./some-script-to-remove-bootstrap-vars.sh
        type: shell
      # re-init to migrate state to the s3 backend
      - command: terraform init -reconfigure
```
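The `-s` inference proposed here could hypothetically work by attaching the CLI-supplied stack to every `atmos`-type step that doesn’t already set one. A sketch under that assumption (the helper name and dict shapes are invented for illustration, not current atmos behavior):

```python
# Hypothetical sketch: inject the stack passed on the workflow command line
# (e.g. `-s gbl-dev`) into each atmos-type step lacking an explicit `stack`.
# `apply_cli_stack` and the dict shapes are illustrative, not atmos internals.
def apply_cli_stack(workflow, cli_stack):
    for step in workflow["steps"]:
        # shell steps run as-is; only atmos steps need a stack
        if step.get("type", "atmos") == "atmos":
            step.setdefault("stack", cli_stack)
    return workflow


wf = {"steps": [
    {"command": "./some-script-disable-role-assumption.sh", "type": "shell"},
    {"command": "terraform deploy iam/deployer-role"},
    {"command": "terraform deploy iam/deployer-role", "stack": "gbl-prod"},
]}
apply_cli_stack(wf, "gbl-dev")
print(wf["steps"][1]["stack"])  # gbl-dev; the explicit gbl-prod step is untouched
```

Using `setdefault` keeps a step-level `stack` attribute authoritative, matching the override order the workflow comments above describe.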
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
For additional context, I’ve got a deployer role component that has a boolean var `assume_role` that will control whether the provider assumes a role based on other vars / context.
For the initial bootstrap workflow atmos / TF would be run using the target account creds… for all other runs it would use the identity account and assume the role
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
Updating this with what I currently have working:
```yaml
# workflows/bootstrap.yaml
workflows:
  plan:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform plan iam/deployer-role
        stack: gbl-dev
  apply:
    steps:
      - command: ./scripts/bootstrap-stack.sh enable -s gbl-dev
        type: shell
      - command: terraform clean iam/deployer-role
        stack: gbl-dev
      - command: terraform apply iam/deployer-role
        stack: gbl-dev
  complete:
    steps:
      - command: ./scripts/bootstrap-stack.sh disable -s gbl-dev
        type: shell
      - command: terraform init iam/deployer-role -reconfigure
        stack: gbl-dev
      - command: terraform deploy iam/deployer-role
        stack: gbl-dev
```
Goes something like this:
- With creds for `gbl-dev`: `atmos workflow -f bootstrap apply`
- With creds for `gbl-identity`: `atmos workflow -f bootstrap complete`
![Josh Holloway avatar](https://avatars.slack-edge.com/2022-03-30/3316196452386_d649b552e955a3153734_72.jpg)
`scripts/bootstrap-stack.sh` loads the target stack yaml, appends the new import `catalog/bootstrap`, and looks something like:
```python
#!/usr/bin/env python3
import argparse
import yaml

parser = argparse.ArgumentParser(description='Configure the stack for bootstrapping.')
parser.add_argument('action', choices=['enable', 'disable'])
parser.add_argument('-s', '--stack', help='The name of the stack to bootstrap')
args = parser.parse_args()

stack_file = 'stacks/{}.yaml'.format(args.stack)
print('Preparing {} for bootstrapping'.format(stack_file))

with open(stack_file) as f:
    y = yaml.safe_load(f)

bootstrap_exists = 'catalog/bootstrap' in y['import']

if args.action == 'enable':
    if bootstrap_exists:
        print('catalog/bootstrap already exists within import block. Skipping')
        exit()
    y['import'].append('catalog/bootstrap')
elif args.action == 'disable':
    if not bootstrap_exists:
        print('catalog/bootstrap does not exist within import block. Skipping')
        exit()
    y['import'].remove('catalog/bootstrap')

with open(stack_file, 'w') as f:
    print('Updating {}'.format(stack_file))
    yaml.dump(y, f, default_flow_style=False)
```
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
@Soren Jensen has joined the channel