#atmos (2022-12)
2022-12-06
What is the difference between `command` and `job` when writing workflows?
Atmos is a workflow automation tool to manage complex configurations with ease. It’s compatible with Terraform and many other tools.
only `command` is supported
oh, i must have seen `job` somewhere else
thank you for the link
is there an ETA for when the documentation will be finished?
How do I configure the planfile?
configure the planfile?
i want the workflow to save a .tfplan file when i run plan. sifting through the code online, i see that there is a “planFile” var that is populated. i see the file get generated but then it is automatically deleted. i’d like this file to persist so that i can use it to run `terraform destroy --auto-approve tfplan`
it gets automatically deleted after a successful apply only
`atmos terraform apply` and `atmos terraform deploy` commands support the `--from-plan` flag. If the flag is specified, the commands will use the previously generated planfile instead of generating a new planfile
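For example, the plan-then-apply flow looks like this (component and stack names here are hypothetical):

```
atmos terraform plan vpc --stack plat-ue2-dev
atmos terraform apply vpc --stack plat-ue2-dev --from-plan
```

`atmos terraform plan` writes the planfile, and `--from-plan` tells apply to reuse it instead of generating a new one.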
@Andriy Knysh (Cloud Posse) is highlighting that all `terraform` arguments are passed through. This means that to configure the planfile, you do it just as you would with plain terraform, by passing the `-out` argument. https://developer.hashicorp.com/terraform/cli/commands/plan#out-filename
The terraform plan command creates an execution plan with a preview of the changes that Terraform will make to your infrastructure.
e.g. atmos terraform plan $component --stack $stack -out=planfile
i see the problem. i was placing any terraform flag options directly after the terraform command. atmos was saying it “didn’t recognize component -out=tfplan”. I didn’t realize that the component needed to immediately follow terraform commands and then flags.
EX:
this works: `terraform plan $component --out=tfplan`
this does not: `terraform plan -out=tfplan $component`
Great! I’m glad you got it working
2022-12-07
2022-12-08
Hey all. Trying to learn Atmos. I think the tutorial here: https://docs.cloudposse.com/tutorials/atmos-getting-started/ is broken. I cloned the latest from https://github.com/cloudposse/tutorials.git and can follow through part of the tutorial (all the way through step 4). However, step 5 seems broken. When I try to run `atmos workflow deploy-all -s example` I get this error:
```
✗ . [none] (HOST) 02-atmos ⨠ atmos workflow deploy-all -s example
Error: required flag(s) "file" not set

Usage:
  atmos workflow [flags]

Flags:
      --dry-run        atmos workflow <name> -f <file> --dry-run
  -f, --file string    atmos workflow <name> -f <file>
  -h, --help           help for workflow
  -s, --stack string   atmos workflow <name> -f <file> -s <stack>

required flag(s) "file" not set
```
If I try to set the file I get this:
```
✗ . [none] (HOST) 02-atmos ⨠ atmos workflow deploy-all -s example -f example.yaml
file 'stacks/workflows/example.yaml' does not exist
```
It looks like atmos is expecting the workflow to be defined in a separate file (not in example.yaml). Is anyone else seeing this? Am I doing something wrong?
the tutorial is out of date and has issues. You need to configure the base path for the workflows in `atmos.yaml`, see https://atmos.tools/cli/configuration#workflows
atmos will join `workflows.base_path` with the file name provided on the command line in the `--file` flag
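For example, the relevant `atmos.yaml` section might look like this (the `stacks/workflows` path matches the error message above; adjust it to your repo layout):

```yaml
# atmos.yaml (sketch)
workflows:
  base_path: "stacks/workflows"
```

With this, `atmos workflow deploy-all -f example.yaml` would resolve to `stacks/workflows/example.yaml`.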
Ok, thank you
Is there some sort of reference for the Atmos yaml files and what they can contain? The stack files, that is (I think, sorry I am new to this).
Thank you so much!
@Dan Miller (Cloud Posse)
2022-12-09
2022-12-15
v1.17.0
what: Add local search to atmos docs; Add `atmos describe affected` CLI command; Update docs website
why: Quickly search through documentation
The command `atmos describe affected` produces a list of the affected Atmos components and stacks given two Git commits. The command compares the final component sections (after all imports and deep-merging) for all Atmos components in all stacks, and produces a list of affected (changed) components in the stacks. The command also checks the changed files…
This is a very exciting announcement for atmos. This makes Atmos much easier to use in a CI/CD context. Using this command, you can easily compute all affected stacks so that they can be planned.
@jose.amengual you wanted to try that for Atlantis projects (let us know how it goes and if we need to improve anything)
yes!!! and I did look at it
I wonder if filters can be used?
so that a specific stack or component can be listed
in the atlantis world when using -p for project I could do: `atlantis plan -p pepe-auto-ue2-vpc`
at this point I know the component name (or I can parse it) and the stack name based on my naming convention, so I could be smart and only describe affected based on a specific stack
and the other useful thing would be if the output showed the atlantis project name so you can match it with the `-p` flag in the atlantis command
like
```json
{
  "stack": "tenant2-ue2-staging",
  "component_type": "terraform",
  "component": "test/test-component-override-3",
  "affected": "stack.vars",
  "atlantis_project": "tenant2-ue2-staging-test-component-override-3"
}
```
Yea, so the ideal (futuristic/non-existent) implementation: atlantis would support running an external command to determine which projects are affected. So instead of relying exclusively on affected files (which is primitive), it could also exec a command which returns a list of affected projects.
If that existed, then this would be extremely powerful as you would get all the power of atlantis with all the power of atmos.
do not call my Atlantis immature Erik!!!!! LOL
when you say “atlantis would support running an external command to determine which projects are affected”, this is already possible if I understand correctly
we might be thinking the same here, I wanted atlantis to avoid having to PLAN all projects and only plan the projects that changed based on the PR trigger
If using the GitHub Action with atmos like I’m doing now, you could do that preprocessing before it hits atlantis and then create an atlantis.yaml file that only contains those affected projects
IF using atmos inside atlantis, you can run a `pre_workflow_hook` to run atmos to generate the atlantis.yaml file before it is parsed, again, with only the projects that changed
@jose.amengual could you explain in a few sentences what you would like to add to `atmos describe affected` to help with atlantis?
sure
• support filtering per stack and component, e.g.:
`atmos describe affected -s prod-ue2`
`atmos describe affected vpc -s prod-ue2`
`atmos describe affected vpc`
• when atlantis integration is enabled, extend the output to contain the project name for the changed component, i.e.:
```json
{
  "stack": "tenant2-ue2-staging",
  "component_type": "terraform",
  "component": "test/test-component-override-3",
  "affected": "stack.vars",
  "atlantis_project": "tenant2-ue2-staging-test-component-override-3"
}
```
v1.18.0
what: Update workflow commands of type shell
why: Workflow commands of type shell could be any complex shell commands (or scripts) in the workflow YAML definition file
2022-12-16
2022-12-19
@Andriy Knysh (Cloud Posse) having some issues with `atmos describe affected`:
```
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
```
my git works fine but I do have `insteadOf` settings
I wonder if there is a problem with that
according to https://github.com/src-d/go-git/issues/550, go-git should use the correct transport depending on the URL. Look at the responses, some other Git settings might be wrong. Other than that, go-git does not have anything special for SSH (well, the `PlainClone` function allows specifying auth credentials, but we don’t want to use it anyway)
can you give me a sample of cloning a private repo using an ssh key? thanks
i did not test it with SSH so can’t say anything other than what the link describes
maybe we can test something like this https://github.com/src-d/go-git/issues/550#issuecomment-323182885
If it helps, I did something like this to create the auth object needed to clone an ssh repo:
```go
s := fmt.Sprintf("%s/.ssh/id_rsa", os.Getenv("HOME"))
sshKey, err := ioutil.ReadFile(s)
if err != nil {
	return err
}
signer, err := ssh.ParsePrivateKey(sshKey)
auth := &gitssh.PublicKeys{User: "git", Signer: signer}
```
something like that is what we’ll need to test
@jose.amengual I’ll test this when I get a moment and will prob add an ENV var and a CLI flag to the `atmos describe affected` command to tell it to find and use the SSH credentials
2022-12-20
how do I deploy a multiregion component in atmos?
in this case, a regional cluster that needs to be created in the main region, and its id is then used for all the other regions
I guess I could do a data lookup to the resources I need to find using the same module…
@jose.amengual you use “Component Remote State”
here is what we added to a PR, not deployed to the main site yet
Component Remote State is used when we need to get the outputs of an Atmos component,
I’m trying not to use that module
What is the reason, @jose.amengual?
the `remote-state` module requires some special atmos configs and I think data lookups keep the state more in line with the actual infrastructure
potentially you can have a bunch of atmos config yaml changes that the remote-state module will consider as actual changes but in reality were never deployed
that is why I prefer the data lookups
by data lookups do you mean using `data terraform_remote_state ...`?
no
or straight `data aws_whatever`, `data.aws_vpc` etc…
yes
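For illustration, such a data lookup might look like the following sketch (the tag value is hypothetical; match on whatever your naming convention provides):

```hcl
# Look up an existing VPC by tag instead of reading another component's remote state
data "aws_vpc" "main" {
  tags = {
    Name = "main-vpc" # hypothetical tag value
  }
}

# then reference it as data.aws_vpc.main.id in the other regions
```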
2022-12-21
2022-12-27
Hi there! I have a terraform project with a complex tfvars variable of type “map of objects”, which also contains nested maps of objects
```json
{
  "complexvar": {
    "toplevel": {
      "objectA": {
        "bottomlevel1": {
          "attribute1": true,
          "attribute2": true
        },
        "bottomlevel2": {
          "attribute1": true,
          "attribute2": true
        }
      },
      "objectB": {
        "bottomlevel3": {
          "attribute1": "abc",
          "attribute2": "abc"
        }
      }
    }
  }
}
```
and I am trying to update attributes of the `bottomlevel*` objects with the “vars” property of the stack:
```yaml
vars:
  complexvar:
    objectA:
      bottomlevel1:
        attribute1: false
```
When I do the following, atmos generates a new tfvars.json only with the bottomlevel1 object and removes bottomlevel2 and objectB instead of deep-merging. Does atmos support deep merge of terraform.tfvars.json and stack vars?
It sounds like you might be mixing your own tfvar files with the ones we generate. Atmos only deep merges what it manages, which are stack configurations. Translate all your tfvars into stack configs and it will definitely handle this use case
@Erik Osterman (Cloud Posse) Thank you! It is an acceptable solution in my case
The key thing here is atmos really knows very little about terraform. It is a general-purpose workflow automation tool. One of the built-in features is that it can generate varfiles.
We designed atmos this way so it can automate all sorts of tools not just terraform
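As a generic sketch of the deep-merge behavior described above (a simplified illustration, not Atmos’s actual implementation), the point is that an override only touches the keys it names, leaving sibling keys intact:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, preserving keys the override doesn't touch."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

base = {
    "objectA": {
        "bottomlevel1": {"attribute1": True, "attribute2": True},
        "bottomlevel2": {"attribute1": True, "attribute2": True},
    }
}
override = {"objectA": {"bottomlevel1": {"attribute1": False}}}

merged = deep_merge(base, override)
# bottomlevel2 and attribute2 are preserved; only attribute1 of bottomlevel1 flips
```

This is why expressing everything as stack configs works: the merge happens on the config layer before the varfile is generated, rather than on raw tfvars files.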
v1.19.0
what: Add Sources of Component Variables to `atmos describe component` command; Update docs
why: The `atmos describe component` command outputs the final deep-merged component configuration in YAML format. The output contains the following sections: `atmos_component` - Atmos component name; `atmos_stack` - Atmos stack name; `backend` - Terraform backend configuration; `backend_type` - Terraform backend type; `command` - the binary to execute when provisioning the component (e.g. terraform, terraform-1, helmfile)…
2022-12-28
Hi all, since `backend.tf.json` files need to be generated when ‘vendoring’ components from `terraform-aws-components`, why would `atmos` users leave `auto_generate_backend_file: false` and generate backend files manually?
if you want atmos to generate those backend.tf files you need that setting set to true
yes
and it’s not related to vendoring, you can use vendoring or not, and you can auto generate the backend files or not
yes, I understood that, I was just asking why users would prefer to generate the backend files manually instead of having atmos do it automatically
if you have a single backend for the entire infrastructure, then it’s your choice to have the backend files committed or auto generated - the result will be the same
but atmos supports multiple backends, e.g. per account, per tenant, per region. In this case the backend file will be generated automatically based on the information in the configuration
and in this case it can’t be one static backend file
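For context, an auto-generated `backend.tf.json` looks roughly like the following sketch (the bucket, region, and prefix values here are hypothetical):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "acme-ue2-root-tfstate",
        "key": "terraform.tfstate",
        "region": "us-east-2",
        "workspace_key_prefix": "vpc",
        "encrypt": true
      }
    }
  }
}
```

With multiple backends, values like the bucket and `workspace_key_prefix` vary per component and stack, which is why a single committed file can’t cover them all.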