#atmos (2023-01)
2023-01-04
v1.19.1
what
- Add Sources of Component Variables to atmos describe component command
- Update docs
why
The atmos describe component command outputs the final deep-merged component configuration in YAML format. The output contains the following sections:
- atmos_component - Atmos component name
- atmos_stack - Atmos stack name
- backend - Terraform backend configuration
- backend_type - Terraform backend type
- command - the binary to execute when provisioning the component (e.g. terraform, terraform-1, helmfile)…
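For orientation, a minimal sketch of the output shape; the component name vpc, the stack name ue2-dev, and all values are illustrative placeholders, and the exact schema of the new sources section is in the atmos docs:
```yaml
# Hypothetical excerpt of `atmos describe component vpc -s ue2-dev` output.
# Component/stack names and values are made up for illustration.
atmos_component: vpc
atmos_stack: ue2-dev
backend_type: s3
backend:
  bucket: my-tfstate-bucket   # illustrative backend config
command: terraform
sources:                      # new in v1.19.1: where each variable value came from
  vars: {}                    # see the atmos docs for the exact schema
```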
2023-01-06
Hi all! I have an atmos workflow that creates two stacks with terraform. Let's suppose stack1 creates an Azure resource group with a randomly generated name, and stack2 creates a storage account that needs to point to that resource group. In order to create the storage account, I need to use a data source for the resource group in stack2, but I don't know the name of that resource group to import.
Is there any native way to read outputs from stack1 and pass them as variables to stack2? I want to avoid using complex bash commands between applying the two stacks. Maybe there are other suitable approaches to solve this problem following the philosophy of atmos?
will this https://pr-280.atmos-docs.ue2.dev.plat.cloudposse.org/core-concepts/components/remote-state/ help?
(not related to workflows, but it’s a way to get the remote state of one component and use it as inputs for another component)
Yes, this is the way you would want to do it
(note this is a link to an open PR updating the documentation; we’re going to merge it soon, and the link above will break)
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) thanks a lot, it looks suitable for my project
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) For some reason it does not work as I expected; I'd be glad to get any advice.
I created resources with component main and stack prerequisites.
In the second stack mid, with a reference to the same component main, I enabled the remote-state module following the guide and passed stack=prerequisites and component=main as inputs, so I expected to get the outputs from the remote backend of the previous stack, but got module.remote_state[0].outputs is null instead.
Investigating the state file, I found that module.remote_state[0] with name "config" contains all input vars of the stack prerequisites, which is correct I think. But "data_source" looks like:
```json
{
  "module": "module.remote_state[0]",
  "mode": "data",
  "type": "terraform_remote_state",
  "name": "data_source",
  "provider": "provider[\"terraform.io/builtin/terraform\"]",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "backend": "local",
        "config": {
          "value": {
            "path": ".terraform/modules/remote_state/modules/remote-state/dummy-remote-state.json"
          },
          "type": [
            "object",
            {
              "path": "string"
            }
          ]
        },
        "defaults": null,
        "outputs": {
          "value": {},
          "type": [
            "object",
            {}
          ]
        },
        "workspace": null
      },
      "sensitive_attributes": []
    }
  ]
},
```
For some reason the outputs of the component created in stack prerequisites can't be read by the remote-state module.
P.S. Component main uses a remote backend in an Azure storage account, which is defined in the backend.tf file, so I do not define the backend in atmos.yaml. But here I see that remote-state supports only s3 and tf cloud. Could that be the reason why the module returns null?
this module https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/modules/remote-state supports only s3 and remote backends
we are not using Azure and can’t currently test it
Thanks for the update, got it. Do you plan to add Azure as a backend for remote state in the near future?
No, this would be a community driven feature. We would accept a PR adding support.
@Erik Osterman (Cloud Posse) Thanks, here is my PR to enable azurerm support: https://github.com/cloudposse/terraform-yaml-stack-config/pull/61
what
• Allow using the azurerm backend to read a state from
why
• Currently only the s3 and remote backends are supported
references
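For readers following along: the remote-state module reads the backend configuration from the stack YAML (note the backend: "local" dummy fallback in the state dump above), so with azurerm support merged, a stack-level backend section might look roughly like this. All names are placeholders, and the exact keys follow the Terraform azurerm backend docs:
```yaml
# Hypothetical sketch: describing the azurerm backend in the Atmos stack
# config instead of only in backend.tf. All values below are placeholders.
terraform:
  backend_type: azurerm
  backend:
    azurerm:
      resource_group_name: my-rg            # placeholder
      storage_account_name: mystorageacct   # placeholder
      container_name: tfstate               # placeholder
      key: terraform.tfstate
```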
2023-01-07
v1.20.0
what & why
- Add “Quick Start” doc https://atmos.tools/category/quick-start/
- Add “Component Remote State” docs https://atmos.tools/core-concepts/components/remote-state/
- Add “CLI Commands Cheat Sheet” doc https://atmos.tools/cli/cheatsheet/
- Update/improve “Atmos…
2023-01-09
Hi, I see some action in this repo https://github.com/cloudposse/github-action-setup-atmos. Is this action ready to use? @matt
This one is ready
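For anyone landing here later, a minimal usage sketch under stated assumptions: the @v1 ref and the atmos-version input are my assumptions, so check the repo README for the exact inputs:
```yaml
# Hypothetical workflow step using cloudposse/github-action-setup-atmos.
name: atmos
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: cloudposse/github-action-setup-atmos@v1   # ref is an assumption
        with:
          atmos-version: 1.21.0   # assumed input name; see the action's README
      - run: atmos version
```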
2023-01-10
v1.21.0
what
- Avoid exiting early when missing configuration file
- Make the --workflow-template flag optional for the atmos atlantis generate repo-config command. Update docs
- Update atmos describe stacks and atmos describe affected commands
why
Do not immediately exit when a configuration file is not found. This allows, for example, the version command to be run without a configuration file. The commands requiring a configuration file still do exit when it is missing.
2023-01-12
Hello all, I have (what I hope is) a fairly basic question about Atmos that I haven't been able to answer with the documentation. Is there a way to configure components in a stack such that you can take outputs from one component and feed them in as inputs to another component?
As an example, let's assume I have a stack that creates a VPC and a set of EC2 instances: could I take a set of subnet ids from the "VPC component" and pass them into the components that create the EC2 instances?
Imports don't seem like the right answer, since those appear to be primarily static configuration
also take a look at https://atmos.tools/category/quick-start/, especially https://atmos.tools/quick-start/create-components
Oh, I see. One thing that's not immediately clear is how you indicate the order in which the components have to run. Is there some way to create a dependency tree out of them?
I see no indication here of how the stack determines which TF modules to run first; is it just based on the definition order?
you decide what to deploy first. In stacks, you just configure the components that CAN be deployed into the stacks
you execute the atmos terraform apply <component> --stack <stack> command
so you deploy the dependencies first
then deploy the components that use the remote state from the dependencies
you can also use workflows if you want to deploy in a particular order https://atmos.tools/core-concepts/workflows/
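A minimal sketch of such a workflow for the VPC/EC2 example discussed above; the workflow name, component names, and stack are placeholders, and it assumes atmos terraform deploy (apply with -auto-approve):
```yaml
# Hypothetical workflow: deploy the vpc component first, then the component
# that reads its remote state. All names below are placeholders.
workflows:
  deploy-network-then-compute:
    description: "Deploy vpc, then the components that depend on it"
    steps:
      - command: terraform deploy vpc -s ue2-dev
      - command: terraform deploy ec2-instances -s ue2-dev
```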
Hmm, that’s unfortunate. I’d really like a “deploy this whole tree of dependencies” where I can clearly tell each component what other components it depends on
that’s what the workflows are for
Workflows appear to require you to create a linear list rather than a dependency tree, where things can work in parallel
oh, graphs are complicated and we don’t support anything like that
it would convolute the stacks and components
In a previous role, I wrote something like that to orchestrate CloudFormation stacks
it’s def possible, but will create a lot of new issues and will make everything much more complicated
Each component would indicate what outputs it requires from each other component and it would automatically resolve the dependency tree
Yeah, I understand
It can be done for a particular case, but to make all of that generic is not an easy task
When you are looking at deploying literally hundreds of components, having things that don’t depend on each other deployed in parallel is a nice benefit
we are planning on extending Atmos workflows to add a parallel section to deploy a collection of components in parallel
this is much easier to understand, reason about and use than some complex dependencies expressed in YAML for diff components in the same or diff stacks
v1.22.0
what
- Update atmos workflow command to allow restarting workflows from a named step
- Update workflow docs https://atmos.tools/core-concepts/workflows/
why
Useful when you want to restart a workflow from a particular step. Each workflow step can be given an arbitrary name (step's identifier) using the name attribute. For example:
```yaml
workflows:
  test-1:
    description: "Test workflow"
    steps:
      - command: echo Command…
```
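To make the truncated example concrete, a hedged reconstruction with named steps; the step names and echo commands are placeholders, and the --from-step flag shown in the comment is my reading of this release, so verify it against the workflow docs:
```yaml
# Hypothetical completion of the release-note example: each step gets a name
# so the workflow can be restarted from it, e.g.:
#   atmos workflow test-1 -f <workflow-file> --from-step step2
workflows:
  test-1:
    description: "Test workflow"
    steps:
      - command: echo Command 1
        name: step1
      - command: echo Command 2
        name: step2
      - command: echo Command 3
        name: step3
```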
2023-01-13
v1.23.0
what
- Update atmos terraform apply and atmos terraform deploy commands
- Add --planfile flag to atmos terraform apply and atmos terraform deploy commands
- Improve docs
why
Support two ways of specifying a planfile for the atmos terraform apply and atmos terraform deploy commands:
The atmos terraform apply and atmos terraform deploy commands support the --planfile flag to specify the path to a planfile. The --planfile flag should be used instead of the planfile argument in the native terraform apply…
2023-01-17
v1.24.0
what & why
- Update atmos describe affected command
- Add the affected Spacelift stack to each item in the output list (if the Spacelift workspace is enabled in settings.spacelift.workspace_enabled for the component in the stack). This takes into account the settings.spacelift.stack_name and settings.spacelift.stack_name_pattern settings, which override the auto-generated Spacelift stack names
atmos describe affected
```json
[
  {
    "component": "test/test-component-override-2",
    "component_type":…
```
2023-01-19
hi! When I try to run atmos terraform shell <component> -s <env> I get: “can’t find a shell to execute” at the very end. I remember this working at some point, but now I’m second-guessing myself
This is all within the docker container. Am I forgetting to set some env?
2023-01-20
it appears that the atmos helmfile implementation does not take into account the possibility of an eks cluster name including attributes
adding attributes to the cluster_name_pattern does not appear to have any effect, and there does not appear to be a way to override or specify the cluster name
am i missing something?
did you set the name pattern in atmos.yaml?
cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"
all these tokens are supported
```go
// ReplaceContextTokens replaces context tokens in the provided pattern and returns a string with all the tokens replaced
func ReplaceContextTokens(context Context, pattern string) string {
	r := strings.NewReplacer(
		"{base-component}", context.BaseComponent,
		"{component}", context.Component,
		"{component-path}", context.ComponentPath,
		"{namespace}", context.Namespace,
		"{environment}", context.Environment,
		"{region}", context.Region,
		"{tenant}", context.Tenant,
		"{stage}", context.Stage,
		"{workspace}", context.Workspace,
		"{attributes}", strings.Join(context.Attributes, "-"),
	)
	return r.Replace(pattern)
}
```
so if you set, for example:
cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
and add
```yaml
attributes:
  - blue
```
to the resources, then the cluster name will be something like this
ns-core-ue2-prod-blue-eks-cluster
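For completeness, a sketch of where that pattern lives in atmos.yaml; the kubeconfig_path value is a placeholder, and the surrounding keys reflect my reading of the atmos docs:
```yaml
# Sketch of the components.helmfile section of atmos.yaml; values are placeholders.
components:
  helmfile:
    use_eks: true
    kubeconfig_path: /dev/shm   # placeholder path
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
```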
using atmos 1.24.0
in atmos.yaml i have
```yaml
helmfile:
  use_eks: true
  cluster_name_pattern: "{namespace}-{stage}-{attributes}"
```
in my stack config i have
```yaml
components:
  helmfile:
    echo-server:
      vars:
        attributes:
          - "20230115"
```
in the execution log i see
```txt
Writing the variables to file:
/conf/components/helmfile/echo-server/palolo-labs-us-west-2-echo-server.helmfile.vars.yaml
Using AWS_PROFILE=default
Downloading kubeconfig from the cluster 'palolo-labs-' and saving it to /dev/shm/palolo-labs-us-west-2-kubecfg
Executing command:
/usr/local/bin/aws --profile default eks update-kubeconfig --name=palolo-labs- --region=us-west-2 --kubeconfig=/dev/shm/palolo-labs-us-west-2-kubecfg
An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: palolo-labs-.
exit status 254
```
here it does not appear that the attribute is being appended to the cluster name, and so the cluster is not found
it’s not easy to understand what’s going on from the output above; you can zip up the project (all files and atmos.yaml) and DM me, I’ll check it
much appreciated.
2023-01-23
2023-01-24
2023-01-25
How do you guys deploy lambdas with atmos? I’m trying to decide on the best approach when I have a monorepo with atmos infra and another repo for the lambda code, and I'm trying to avoid 2 PRs to create/deploy the lambda function
this is not related to atmos; this is about how you handle two repos, it’s a terraform thing
and in general, you can use two (maybe more) diff methods to handle that:
yes, it's not purely atmos-related, but I was wondering if you guys handle any part of a lambda infrastructure with atmos and what the user flow looks like
- For the lambda source repo, create a pipeline (e.g. GH actions) to compile the code, zip it, and store it in, let’s say, an S3 bucket (see the pipeline sketch after this list). Then in the infra repo, use the ZIP artifact URL to download it and use it for the lambda, e.g. https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/blob/master/main.tf#L99
module "artifact" {
- In the infra repo Terraform code, download the Lambda source(s) using, for example, https://github.com/hashicorp/terraform-provider-http. Then zip it up into an archive on disk and use it for the Lambda. Similar to https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/blob/master/examples/complete/fixtures.us-east-2.tfvars#L31 https://github.com/cloudposse/terraform-aws-lambda-function/blob/main/examples/complete/main.tf#L46
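A minimal sketch of the first approach under stated assumptions: the bucket, region, role, and zip layout are placeholders, and it uses the aws-actions/configure-aws-credentials action for AWS auth:
```yaml
# Hypothetical pipeline in the lambda source repo: build, zip, and publish
# the artifact to S3. Bucket, region, role, and paths are placeholders.
name: publish-lambda-artifact
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1                                # placeholder
          role-to-assume: arn:aws:iam::111111111111:role/ci    # placeholder
      - run: zip -r lambda.zip src/
      - run: aws s3 cp lambda.zip s3://my-artifact-bucket/lambda/${{ github.sha }}.zip
```
The infra repo’s component can then reference the artifact key (pinned here by commit SHA), so only the infra side changes when a new artifact should be deployed.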
trying to avoid 2 PRs
in the best case scenario, you will need a PR in the lambda source repo (for the changes), and a pipeline run (e.g. Spacelift or GH action) in the infra repo to detect the source changes (or an artifact changes) and redeploy
yes, I could avoid using 2 PRs, but no matter what, a user will have to interact with two different repos
2023-01-26
2023-01-27
2023-01-31
v1.25.0
what
- Update Atmos Stack imports
- Update docs https://atmos.tools/core-concepts/stacks/imports/
- Add imports schema with context
- Allow using Go templates in imported configurations
why
Auto-generate Atmos components from templates. Using imports with context and parameterized config files will help to make the stack configurations extremely DRY, and is very useful when creating stacks and components for…
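A minimal sketch of the new import schema under stated assumptions; the paths, the context key flavor, and the template usage are placeholders following standard Go template syntax:
```yaml
# Hypothetical stack file using the v1.25.0 imports schema with context.
# Paths and context values are placeholders.
import:
  - path: mixins/region/us-east-2        # plain import, as before
  - path: catalog/terraform/eks/cluster  # parameterized import
    context:
      flavor: blue
```
And the imported file can then use the context via Go templates:
```yaml
# catalog/terraform/eks/cluster (hypothetical): context keys are available
# to Go templates in the imported configuration.
components:
  terraform:
    eks-{{ .flavor }}:
      vars:
        attributes:
          - "{{ .flavor }}"
```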