#atmos (2024-03)
2024-03-01
What additional features does cloudposse/github-action-pre-commit
provide in addition to the pre-commit action
that it forked from?
according to the README: NOTE: This is a fork of pre-commit/action to add additional features.
That needs a bit more TLC I guess (open to PRs on that).
Given the repo description, it should be in the area of allowing you to override the git config user name and email
I forked it originally, and the main reason to fork was exactly that. pre-commit is gravitating more toward the SaaS model, and they removed features from the upstream action, prompting the fork.
The upstream maintainer also refused prs so it was easier to maintain a fork
It essentially wraps the pre commit cli command so it takes advantage of all the pre commit features plus git owner of the pushed commits
What are the features that you care about that were removed?
I see what you are saying
They are good reasons, thanks for clarifying
I am curious how you keep it in sync with upstream while maintaining the features you added or would like to keep?
Good question. I don’t maintain it anymore, but I believe Cloud Posse can answer that better. I recall Dependabot is used to keep up with package updates, and the pre-commit installed is always the latest version.
What’s the primary concern? Have you noticed a feature in the upstream action that the fork doesn’t have?
I do not have a specific concern at the moment; I’m in the process of deciding between this fork and upstream directly, so I’m trying to collect all the information so I can make a decision
noticed that there is quite some divergence between the two now, so that needs some work/contributions. Nothing planned though re: getting this back into a more synced state.
The official action is deprecated. That was the main reason we forked.
Agree the description could be better updated
If there are updates we should pull in, we should do that.
Thanks for the info, they are helpful!
Does atmos use terraform workspaces by default, and what for if not why and how would you use them with atmos?
Yes, atmos does use TF workspaces. Because atmos is used to deploy “components” (aka small reusable Terraform root modules), it uses workspaces to prevent collisions when deploying multiple instances of the same component (e.g., marketing-db, platform-db, etc.).
Yep, so by convention, we recommend using workspaces; that way the backend can be configured once, and then a workspace is used for each stack the component is deployed to.
It’s technically not required, but it’s what we use and have the most experience with.
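To make that concrete, here’s a minimal sketch of a stack manifest (component and variable names are hypothetical) with two Atmos components backed by the same Terraform component; Atmos derives a distinct workspace for each, so their state never collides:

```yaml
components:
  terraform:
    # both instances point at the same root module, components/terraform/rds
    marketing-db:
      metadata:
        component: rds
      vars:
        name: marketing-db
    platform-db:
      metadata:
        component: rds
      vars:
        name: platform-db
```

Each instance gets its own Terraform workspace (derived from the stack and component names), so the shared backend only needs to be configured once.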
@Andriy Knysh (Cloud Posse) I can’t seem to find any docs on workspace configuration
2024-03-04
v1.65.0
Add providers section to Atmos manifests. Update docs @aknysh (#555)
what
Add providers section to Atmos manifests Auto-generate the prefix attribute for Atmos components for Terraform backend gcs for GCP Update docs (https://pr-555.atmos-docs.ue2.dev.plat.cloudpos…
@Andriy Knysh (Cloud Posse) If I interpret this correctly, this means we can replace the dynamic iam-role parts found in most components’ providers.tf files (sourcing from account-map) with this more static approach?
that could be one of the use cases (if you want to use static values)
we are working on other use cases for the feature
seeing some use cases as well (localstack needs some overrides too, nice to set them from here)
cc @kevcube since you may be a fan of this feature
2024-03-05
Are these atmos accelerators supported?
We were given a link to these for GCP use. Most of them are empty placeholders, and the READMEs give no examples of their YAML configurations. For example, the one I shared does not give any examples of how to configure routes, Cloud NAT, or firewall rules, so it is useless for anything other than a basic deployment. Whenever I try to configure something new, atmos complains because the YAML is not formatted properly.
We created those components and try our best to update and document them, but I will say they are a good starting point for you to fork and maintain your own version of.
Thank you, this helps, but why aren’t these in the READMEs?
Ok if the readmes could be updated at some point that would be greatly appreciated
we just have not had time to get all the readmes updated
(also, for the sake of clarity, those templates are by Slalom, not Cloud Posse, so Cloud Posse is not the best at answering questions on those. There are others in the channel though, including @jose.amengual, who are maintaining these templates)
Does Atmos automatically migrate state if you change the backend for a stack?
I think Terraform will ask you - the backend was X and is now Y - whether you want to migrate
but I have only done that going from local to s3
So I would have to do that change locally and not in my pipeline?
you change the yaml, which then will render a different backend config
whether you run that in a pipeline or not has nothing to do with it
After the backend config is generated, it’s all TF work, so whatever the TF workflow is for switching backends is what is going to happen
I do not think you can do that in a pipeline, because changing the backend usually triggers an interactive CLI prompt
It’s also possible to keep the state location idempotent, by setting some parameters in the backend, that way the location doesn’t change even if the structure changes
The right solution will depend on what you need to accomplish
@jose.amengual is correct that atmos doesn’t perform automatic state migrations at this time. It’s nontrivial given there are dozens of backend types
Using an atmos custom command you can make that easier
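As a sketch of that idea (the command name, paths, and flags below are hypothetical, not a Cloud Posse-provided command), an Atmos custom command in atmos.yaml could wrap the interactive migration steps:

```yaml
commands:
  - name: migrate-state
    description: Re-initialize a component after a backend change, migrating state
    arguments:
      - name: component
        description: Atmos component
    flags:
      - name: stack
        shorthand: s
        description: Atmos stack
        required: true
    steps:
      # regenerate the backend config from the (new) stack manifest,
      # then let Terraform move the state interactively
      - atmos terraform generate backend {{ .Arguments.component }} -s {{ .Flags.stack }}
      - cd components/terraform/{{ .Arguments.component }} && terraform init -migrate-state
```

This still relies on Terraform’s own `-migrate-state` prompt, so it’s best suited to a local shell rather than a pipeline.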
So, specifically, we needed to fix our GCS backend because we did not have a prefix set. However, when I update the backend, neither “atmos terraform plan” nor “atmos terraform init” asks to migrate the state. It just says that it sees the backend and wants to add existing resources.
I think you need to run --reconfigure for that
Can I run that with the atmos command?
@jose.amengual I don’t think that is correct. Reconfigure should just tell tf to ignore the existing state and only use the new backend. Terraform init should ask me to migrate state but it does not
Is the issue that the prefix does not constitute as a change?
mmm you might be right
then what you could do is switch to local
by pulling the state; then, once it’s local, move it to the correct backend
Hmm seems like a lot of work with all the prefixes I had to add to each component. I think I will just drag the files to the new dir and run the apply and it should see the state in the correct location
ohh, if you can do that then yes
I have no idea about what backend you use
@pv in the latest Atmos release (https://github.com/cloudposse/atmos/releases/tag/v1.65.0), the prefix for the GCP backend (gcs) will be generated automatically using the Atmos component names (you don’t need to manually define prefixes for all Atmos components)
@Andriy Knysh (Cloud Posse) I think the issue is that when we do not set the dir in the prefix for a component, certain components try to delete and re-add other components, so some of the backends need to be defined more specifically
2024-03-06
I’m attempting to expand our atmos architecture to support multiple accounts and regions. I’ve found the docs for overriding the provider, but I also need to override the component’s S3 backend. Is there a way to do that? I’m not finding it in the docs…
see this doc on how to configure backends https://atmos.tools/quick-start/configure-terraform-backend
In the previous steps, we’ve configured the vpc-flow-logs-bucket and vpc Terraform components to be provisioned into three AWS accounts
backend_type and backend sections are first-class sections similar to vars, settings, providers, and env - meaning they can be specified at any level/scope - Org, tenant, account, region (in the corresponding _defaults.yaml) - and can also be specified per component (if needed)
Awesome, that’s exactly what I’m looking for. Thanks!
so you can have a diff backend per Org, tenant, region (if needed), account, or even per component
and use inheritance https://atmos.tools/core-concepts/components/inheritance to make the entire config DRY
Component Inheritance is one of the principles of Component-Oriented Programming (COP)
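For example, a backend defined once in an account-level _defaults.yaml is inherited by every stack that imports it, and any deeper scope (or a single component) can override it. A minimal sketch (the file path, bucket, and table names below are hypothetical):

```yaml
# e.g. stacks/orgs/acme/plat/_defaults.yaml (hypothetical layout)
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-plat-tfstate
      dynamodb_table: acme-plat-tfstate-lock
      region: us-west-2
      encrypt: true
```

Deep-merging means a child scope only has to declare the keys it wants to change.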
We’re currently defining the backend in each component, so auto_generate_backend_file is currently false. This is a global setting, right, so I’ll need to be careful about shifting existing infrastructure? Or does the component’s backend.tf override this until we make the change for each component?
auto_generate_backend_file is a global setting in atmos.yaml. I would set it to true and then check terraform plan on the components to verify the backend does not change
if it does for any reason, there are ways to override it to make it the same as it was before for each component that is already deployed
awesome. Yeah, that would be a big job, but definitely doable.
you will need to set auto_generate_backend_file in any case if you want to use multiple Atmos components managing the same TF component in the same stack
oh wow, that’s how you do multiple copies of the same component within a stack - I was considering that some time ago, and assumed it was impossible. outstanding!
that’s how you do multiple copies of the same component within a stack
here are some links that will help you understand diff patterns for how to deploy multiple copies of the same component in the same stack:
<https://atmos.tools/design-patterns/multiple-component-instances>
<https://atmos.tools/design-patterns/component-catalog>
<https://atmos.tools/design-patterns/component-catalog-with-mixins>
<https://atmos.tools/design-patterns/component-catalog-template>
Conceptually, an Atmos component is not the same as a Terraform component (although in many cases there is a one-to-one correspondence). An Atmos component manifest is the configuration of a Terraform component inside a Stack manifest; or, we can call it metadata for a Terraform component. This means you can configure multiple Atmos components with diff settings and point them at the same TF component (code). This allows you to have generic TF components (root modules) that can be deployed multiple times in the same or many accounts and regions w/o changing any Terraform code at all (the TF components don’t know and don’t care where they will be deployed). This is a separation of code (TF components) from config (Atmos component manifests and Stack manifests)
I’m struggling a bit with overriding providers. I’ve been able to override the backend config; the migration to dynamically-generated backends was complicated, but not too painful. But the same method doesn’t seem to be working with provider overrides.
Here’s the yaml I created for a particular account, within my catalog: (data redacted)
terraform:
  providers:
    aws:
      region: us-west-2
      assume_role: "arn:aws:iam::XXXX:role/<role_name>"
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: <bucket_name>
      dynamodb_table: <table_name>
      key: "terraform.tfstate"
      region: "us-west-2"
      role_arn: "arn:aws:iam::XXXX:role/<role_name>"
Then I import this file into the stack yaml. The backend override is working fine, but my plan appears to ignore the providers override. It’s not creating a providers_override.tf.json in the component directory, and the resources are set to be provisioned in my default account, instead of the one I specified in the providers block.
Am I missing something?
are you using the latest release https://github.com/cloudposse/atmos/releases/tag/v1.65.0 ?
if you are, add the --logs-level=Trace flag:
atmos terraform plan <component> -s <stack> --logs-level=Trace
it should show if the providers_override.tf.json file is generated and where
That was exactly the problem - I had to update to the latest release. I noticed because the logs-level flag wasn’t recognized
(logs level was implemented a few weeks ago, but the providers section only in the latest release)
We’re managing a multi-tenant architecture where each tenant operates within their own AWS account. I’m looking for efficient ways to monitor and detect changes within each tenant’s stack. Upon detecting changes, I would like to automatically run atmos terraform plan specific to the affected stack and tenant in their respective AWS account.
I’m familiar with atmos affected stacks - I’m interested in whether there’s a recommended pattern to apply the changes to the target AWS accounts
Unless you’re doing something special, the AWS account is part of the reported stack (normally the AWS account is the “tenant”-“stage” combination) - so this should do everything you need.
@Patrick McDonald have you seen our github actions? We have one for affected-stacks. I believe @Dan Miller (Cloud Posse) may be working on a public reference example.
Our actions are already public
monitor and detect changes
do you mean drift detection?
We have workflows for that too that are pretty rad.
I guess I’m looking for how to manage authenticating into the respective tenant accounts to run atmos terraform plan for every tenant change.
we use Github OIDC to assume a role in AWS that can assume other roles across the organization
@Patrick McDonald in what context are you thinking? Locally or through automation?
through automation. I would like the GitHub workflow to detect the change, assume a role in the target account, and plan the Terraform. I’m assuming the affected-stacks action just detects changes?
affected-stacks detects changes that were made against code. However that does not detect any “drift” or changes that were made to the resources themselves outside of code
We have a few workflows, but the basic use case is to find all “affected stacks” or changes to code, and then run terraform against those resources. Will link that in a second
The more complex use case is what we call “drift detection”. That’s where we regularly check for changes in every single terraform resource in all stacks and create a GitHub Issue for any “drifted” resources
A composite workflow that runs the atmos describe affected command
^ this action compares git refs between the main branch and the given branch, and then returns a list of all changed components and stacks
I apologize, I’m not asking the right question - it’s more of a GitHub question than an Atmos one. Let’s say the affected stacks action finds the change and all is well. I have a workflow that will assume a role into our sandbox account to plan/apply changes using the aws-actions/configure-aws-credentials@v3 action.
Since I have 10 accounts, is there an easy way to dynamically figure out the account of the changed tenant/stack and assume role?
No worries! We don’t assume the role of the target account in the GitHub Action workflow directly. Instead, we assume 1 role in 1 central “identity” account and then assume another role in any target account by means of the Terraform provider configuration
provider "aws" {
  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = assume_role.value
    }
  }
}
this is a part of our “reference architecture”, but here’s an idea of what that would look like. You can have a role for “github” workflow and a role for “devops” users. Both can assume the same “terraform” role in any target account. Then use that “terraform” role to plan and apply terraform
ok so the “assuming” happens in the terraform
yes exactly
gotcha.. makes sense.
2024-03-07
2024-03-08
Does anyone know how to get Atmos to work with the https_proxy env var? Normal Terraform picks it up, but it appears the Atmos binary is not passing the env var from the OS it runs on, so the proxy isn’t used.
Atmos should pass all the OS environment variables to Terraform, plus the ENV variables defined in the stack config. Using this Golang code
cmd := exec.Command(command, args...)
cmd.Env = append(os.Environ(), env...)
where os.Environ() is all the ENV variables in the executing process, and env are the ENV variables defined in stack manifests
@pv how do you check that https_proxy is not passed?
FYI, to bypass the Atmos binary, you can execute atmos terraform shell <component> -s <stack> (which will generate the varfile and backend for the component in the stack), and then you can execute any native terraform command and check if the ENV var is working
This command starts a new SHELL configured with the environment for an Atmos component in a stack, to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.
you can also use the env section in stack manifests. The env section is a first-class section like vars; you can define it globally, per Org, account, region, or per component (all of those will be deep-merged into the final values). For example, in a component:
components:
  terraform:
    my-component:
      env:
        https_proxy: <value>
all ENV vars defined in the env section will be passed to Terraform
and when you set the --logs-level parameter to Trace, you will see Atmos messages about what ENV vars are being used. For example:
atmos terraform plan <component> -s <stack> --logs-level=Trace
Atmos will show a message like this:
Using ENV vars:
AWS_PROFILE=xxxxx
TF_DATA_DIR=xxxxx
TF_IN_AUTOMATION=true
https_proxy=<value>
I’m struggling a bit with overriding providers. I’ve been able to override the backend config; the migration to dynamically-generated backends was complicated, but not too painful. But the same method doesn’t seem to be working with provider overrides.
Here’s the yaml I created for a particular account, within my catalog: (data redacted)
terraform:
  providers:
    aws:
      region: us-west-2
      assume_role: "arn:aws:iam::XXXX:role/<role_name>"
  backend_type: s3
  backend:
    s3:
      acl: "bucket-owner-full-control"
      encrypt: true
      bucket: <bucket_name>
      dynamodb_table: <table_name>
      key: "terraform.tfstate"
      region: "us-west-2"
      role_arn: "arn:aws:iam::XXXX:role/<role_name>"
Then I import this file into the stack yaml. The backend override is working fine, but my plan appears to ignore the providers override. It’s not creating a providers_override.tf.json in the component directory, and the resources are set to be provisioned in my default account, instead of the one I specified in the providers block. Am I missing something?
Ahh… found the problem. I was running on older version of atmos (1.44.0). Upgraded to 1.65.0 and the providers override was picked up.
(@Andriy Knysh (Cloud Posse) shouldn’t atmos error if it encounters an unsupported key? e.g. using an old atmos with a providers block)
it should - but only if the Atmos version that @Andy Wortman used supported the Atmos Manifest Schema AND the schema is added to the repo and configured
1.44.0 is a very old Atmos version, which does not support the Atmos Manifest Schema. @Andy Wortman updated to 1.65.0, but the Atmos Manifest Schema still needs to be added to the repo and configured in atmos.yaml
@Andy Wortman FYI https://atmos.tools/reference/schemas
Atmos Schemas
@Andy Wortman we recommend you do it (add and configure the schema) - we did after it was implemented and found a lot of misconfigurations (even though many people had been looking at the stack manifests for months)
then you can run atmos validate stacks and it will validate all stack manifests (try to “misconfigure” any section to test it). Also, when running atmos terraform plan/apply, it will validate all stack manifests too (so you can catch any misconfig)
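Wiring the schema up is a small addition to atmos.yaml. A sketch (the path below is an assumption; point it at wherever you keep the downloaded Atmos Manifest JSON Schema):

```yaml
schemas:
  # JSON Schema used by `atmos validate stacks` (and by plan/apply)
  atmos:
    manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
```

With this in place, an unsupported key like a misspelled providers section fails validation instead of being silently ignored.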
2024-03-12
v1.66.0
Add stacks.name_template section to atmos.yaml. Add Go templating to Atmos stack manifests @aknysh (#560)
what
Add stacks.name_template section to atmos.yaml Add Go templating to Atmos stack…
Hi, I have trouble vendoring on atmos:
# vendor.yaml
# <https://atmos.tools/quick-start/vendor-components>
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    # <https://github.com/cloudposse/terraform-aws-codebuild>
    - component: "codebuild"
      source: "github.com/cloudposse/terraform-aws-codebuild.git"
      targets:
        - "components/terraform/codebuild"
      included_paths:
        - "**/*.tf"
I got this error after atmos vendor pull:
error downloading '<https://github.com/cloudposse/terraform-aws-codebuild.git>': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
I suspect I have wrong source syntax in vendor.yaml? I could clone successfully using git clone <https://github.com/cloudposse/terraform-aws-codebuild.git>; git version 2.34.1
@prwnd9 please see https://atmos.tools/core-concepts/components/vendoring#vendoring-modules-as-components
Use Component Vendoring to make copies of 3rd-party components in your own repo.
try to use a source like these:
github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
Note the usage of the ///, which is to vendor from the root of the remote repository.
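Applying that to the failing manifest above, a corrected sketch might look like this (the version value is hypothetical; use a real tag of the module):

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: vendor-config
  description: Atmos vendoring manifest
spec:
  sources:
    - component: "codebuild"
      # `///` vendors from the root of the remote repo; `?ref` pins the version
      source: "github.com/cloudposse/terraform-aws-codebuild.git///?ref={{.Version}}"
      version: "2.0.0"
      targets:
        - "components/terraform/codebuild"
      included_paths:
        - "**/*.tf"
```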
2024-03-13
Hi everyone, I’m new to Atmos, looking for a repo structure for provisioning an EKS cluster within Organizational Units.
@Selçuk KUBUR you can start with these two docs:
https://atmos.tools/design-patterns/organizational-structure-configuration
Organizational Structure Configuration Atmos Design Pattern
Take 30 minutes to learn the most important Atmos concepts.
the Quick Start describes this repo: https://github.com/cloudposse/atmos/tree/master/examples/quick-start
you can clone the repo, and then use the Organizational Structure Configuration Atmos Design Pattern to extend it to have an Org and OU config. Also add any more components that you need (e.g. EKS and all the releases that you want to deploy to EKS)
this is the EKS cluster component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/cluster
and all Helm releases are here https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks
please also look at how to configure the Atmos stacks naming convention https://atmos.tools/cli/configuration#stacks (stacks.name_pattern in atmos.yaml)
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
Hi again @Andriy Knysh (Cloud Posse), thank you so much. I have configured the repo structure for org/OUs with the guide and downloaded the module for EKS, but on deployment I’m getting the issue below - any idea how I can fix it?
here https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cluster/providers.tf the module "iam_roles" is used to read the IAM role for Terraform to assume. You can use the module (it will require some effort from you to understand and configure), or just update the code here to use your own IAM role for Terraform (or no role, if you want Terraform to use the role that you assume when provisioning the resources - you need to consider what you want)
provider "aws" {
region = var.region
assume_role {
# WARNING:
# The EKS cluster is owned by the role that created it, and that
# role is the only role that can access the cluster without an
# entry in the auth-map ConfigMap, so it is crucial it is created
# with the provisioned Terraform role and not an SSO role that could
# be removed without notice.
#
# This should only be run using the target account's Terraform role.
role_arn = module.iam_roles.terraform_role_arn
}
}
module "iam_roles" {
source = "../../account-map/modules/iam-roles"
profiles_enabled = false
context = module.this.context
}
if you want your own role for Terraform (different from the role that you assume), then do this:
provider "aws" {
region = var.region
assume_role {
# WARNING:
# The EKS cluster is owned by the role that created it, and that
# role is the only role that can access the cluster without an
# entry in the auth-map ConfigMap, so it is crucial it is created
# with the provisioned Terraform role and not an SSO role that could
# be removed without notice.
#
# This should only be run using the target account's Terraform role.
role_arn = <Terraform IAM role with permissions to provision all the resources>
}
}
if you want to use the role that you assume, do this:
provider "aws" {
region = var.region
}
pay attention to the warning
# WARNING:
# The EKS cluster is owned by the role that created it, and that
# role is the only role that can access the cluster without an
# entry in the auth-map ConfigMap, so it is crucial it is created
# with the provisioned Terraform role and not an SSO role that could
# be removed without notice.
(it means that you have to pay attention to the role you use to provision the EKS cluster. If the role is lost/deleted for any reason, you will lose admin access to the cluster)
It worked now :slightly_smiling_face: I had to add the force-protocol prefix git::
source: "git::<https://CentralCIRepoToken>:<my_token_goes here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"
how do we hide sensitive data from being exposed in a stack file? Is there any way I can read from an environment variable into a stack?
Terraform natively supports environment variables prefixed with TF_VAR_ for setting input variables, but I would not recommend using them. Because you want consistent outcomes across different execution environments, you should use a secrets manager.
For sensitive data, the recommended practice is to store it in a secrets manager (e.g., AWS Secrets Manager or SSM Parameter Store). You can then use Terraform’s data sources to retrieve these secrets during plan/apply, ensuring resources are configured consistently, regardless of the execution context.
However, using environment variables to configure Terraform providers is a common practice.
You can then use Terraform’s data sources to retrieve these secrets during plan/apply, ensuring resources are configured consistently, regardless of the execution context.
and the SSM/ASM paths to the secrets you can define in Atmos manifests
agree with @Brian, using ASM/SSM is one of the best and secure solutions (and it can be used from localhost and from CI/CD)
git://CentralCIRepoToken>:{{env "CI_REPO_TOKEN"}} - this works for me in vendor.yaml, but the same syntax is not supported in stack YAMLs
stack manifests (yaml files) are not for vendoring, they are to configure components (e.g. terraform variables) and stacks (where the components are provisioned). What are you trying to achieve?
for an example on how to use SSM to store secrets and then read them in Terraform, see:
you can then configure ssm_github_api_key in the Atmos stack manifest for your component:
components:
  terraform:
    my-component:
      vars:
        ssm_github_api_key: "<SSM path to the secret>"
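On the Terraform side, here is a sketch of how such a variable can be consumed (the variable and data-source names are illustrative, not taken from a specific component):

```hcl
variable "ssm_github_api_key" {
  type        = string
  description = "SSM Parameter Store path to the GitHub API key"
}

# The secret is fetched at plan/apply time; only the SSM path lives in YAML
data "aws_ssm_parameter" "github_api_key" {
  name            = var.ssm_github_api_key
  with_decryption = true
}

# reference data.aws_ssm_parameter.github_api_key.value wherever the secret is needed
```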
@Andriy Knysh (Cloud Posse) shouldn’t this work with the latest atmos 1.66?
components:
  terraform:
    my-component:
      vars:
        super_sensitive: '{{env "CI_REPO_TOKEN"}}'
This would rely on the Sprig env function.
(but @Andriy Knysh (Cloud Posse) is correct, that at Cloud Posse, we would generally read the secrets from SSM, rather than the ENV)
Atmos supports Go templates in stack manifests.
yes it will work, all Sprig functions are supported
Please share your stories: https://www.reddit.com/r/Terraform/comments/1bbm6e7/anybody_use_atmos/
2024-03-14
Is there a canonical way of setting booleans using stack manifest templating?
@cricketsc you are right - Go templates quote boolean values. I’ve just tested a few variants using a few functions, and the final string produced from a template still has quoted bools
the string "true" will still work as a boolean value for a bool variable in Terraform
Terraform automatically converts number and bool values to strings when needed. It also converts strings to numbers or bools, as long as the string contains a valid representation of a number or bool value.
true converts to "true", and vice-versa
false converts to "false", and vice-versa
@Andriy Knysh (Cloud Posse) isn’t that because we are requiring the go template to be encased in quotes?
foo: '{{ ... }}'
That means anything returned by the go template, will be a string.
The way around this is if we start supporting YAML modifiers.
We don’t have that today.
e.g.
foo: !boolean '{{ ... }}'
@Andriy Knysh (Cloud Posse) is correct, that terraform still has an awkward relationship with strings and booleans.
@cricketsc if you are encountering a problem, please share more details and maybe there’s an alternative.
and by “modifiers” I mean explicit types. https://github.com/cloudposse/atmos/issues/267
what
Support YAML “explicit types” used in numerous other projects, like Home Assistant.
why
• Greater flexibility to organize configurations • Greater extensibility; should be able to add more of these explicit types.
Examples
• !env_var FOO will insert the value of the FOO environment variable (inspired by Home Assistant YAML)
• !unset will delete the value from the stack configuration (#227)
• !include FILE will insert the YAML contents of the file at that position with proper indentation
• !include_dir DIR will insert all the YAML files in lexicographic order with the proper indentation
• !secret aws/ssm FOO will read the value from AWS SSM and insert the value into the in-memory stack configuration
Set the CLOUDFLARE_API_KEY for the cloudflare provider.
env:
  CLOUDFLARE_API_KEY: !secret aws/ssm FOO
Related
See Also
• https://www.home-assistant.io/docs/configuration/splitting_configuration/ • https://www.home-assistant.io/docs/configuration/splitting_configuration/#advanced-usage • https://www.home-assistant.io/docs/configuration/yaml/ • https://stackoverflow.com/questions/63567945/how-to-extend-go-yaml-to-support-custom-tags
Oh!
TIL:
!!bool is a built-in tag in YAML used to explicitly specify that the data type of a value is boolean.
So try this, @cricketsc
foo: !!bool '{{ ... }}'
@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) thanks for taking a look at this!
2024-03-15
Having some trouble with cloudposse/github-action-atmos-affected-stacks. Is there a dependency between the atmos version and the version of the atmos-affected-stacks action? I recently upgraded to atmos 1.65. The actions all worked for that commit, but recently we’ve started seeing the below error on every PR. I’ve tried upgrading the versions of cloudposse/github-action-setup-atmos and cloudposse/github-action-atmos-affected-stacks we’re using while staying on v1 of both; same error as below. Upgrading to v2 of both took care of this error, but broke the matrix code that triggers the plan/apply steps.
Run atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch"
atmos describe affected --file affected-stacks.json --verbose=true --repo-path "$GITHUB_WORKSPACE/main-branch"
affected=$(jq -c '.' < affected-stacks.json)
printf "%s" "affected=$affected" >> $GITHUB_OUTPUT
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
ATMOS_CLI_PATH: /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos
TERRAFORM_CLI_PATH: /home/runner/work/_temp/2b951b22-3979-4563-9e42-c061f9ebb96f
ATMOS_CLI_CONFIG_PATH: atmos.yaml
Current working repo HEAD: ad0b5b3d6d9f6ce34f82c1222a47b77488982893 HEAD
Remote repo HEAD: 738d0df4ab18f9d845a20049457fca92ef47639e refs/heads/main
template: describe-stacks-all-sections:35: function "SessionName" not defined
Error: Process completed with exit code 1.
@Andy Wortman Atmos 1.66.0 introduces Go
templates in Atmos stack manifests https://github.com/cloudposse/atmos/releases/tag/v1.66.0
and the error means that you use other Go
templates in your YAML files (not intended for Atmos processing, but rather for the resources being provisioned)
this is the fix for that:
in all your Go
templates, instead of using this
{{ ... }}
need to use this
{{`{{ .... }}`}}
or this
{{ printf "{{ ..... }}" }}
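Side by side in a stack manifest, the difference might look like this (the component and variable names here are made up for illustration):

```yaml
components:
  terraform:
    alerts:
      vars:
        # processed by Atmos templating when the stack is evaluated
        stage: "{{ .vars.stage }}"
        # escaped: Atmos emits the inner template verbatim, so the provisioned
        # resource (Prometheus, Helm, etc.) receives {{ $labels.instance }} untouched
        summary: "{{`{{ $labels.instance }}`}} memory overload"
```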
I understand this is an “inconvenience”, but that’s an issue in any tools that use Go
templates by themselves and also allow configuring Go
templates for the resources that they provision. Helm
and helmfile
use the same
{{`{{ .... }}`}}
in Helm,
people have been discussing it for years, see https://github.com/helm/helm/issues/2798
There is a use case: deploy Prometheus as StatefulSet and config alerting-rules as ConfigMap.
alerting-rules can take more detail on here: https://prometheus.io/docs/alerting/rules/#alerting-rules
it looks like:
IF node_memory_Active >= 1
FOR 1m
LABELS {
service = "k8s_metrics",
alertname = "InstanceMemoryOverload"
}
ANNOTATIONS {
summary = "Instance {{ $labels.instance }} memory overload",
description = "{{ $labels.instance }} memory overload for more than 1 minutes, now is {{ $value }}."
}
Can Helm support to ignore {{expr}} which is just for configuration but not render?
in any case, we have now a situation where Go
templates are used in diff contexts, and using this solves the issue
{{`{{ .... }}`}}
Go
templating does not process the templates in it, but rather just outputs them verbatim (which then goes to the provisioned resources)
let us know if it solves the issue
You said that was added in atmos 1.66 - I’m using 1.65 and getting this error.
hmm, prob the GH action just downloads the latest version
you can either pin Atmos in the GHA, or update your templates and Atmos to 1.66.0
Setup atmos version spec 1.65.0
Attempting to download 1.65.0...
Found in cache @ /opt/hostedtoolcache/atmos/1.65.0/x64
Successfully set up Atmos version 1.65.0 in /opt/hostedtoolcache/atmos/1.65.0/x64
this template: describe-stacks-all-sections:35: function "SessionName" not defined
is a message from the Go templates in Atmos 1.66.0 (before that, that message did not exist)
somehow/somewhere the GHA installs the latest Atmos
btw, here’s how to pin the GHA to Atmos version
steps:
- uses: hashicorp/setup-terraform@v2
- name: Setup atmos
uses: cloudposse/github-action-setup-atmos@v1
with:
version: 1.65.0
the error you are seeing is from github-action-atmos-affected-stacks
check if Atmos version is pinned
steps:
- uses: actions/checkout@v3
- id: affected
uses: cloudposse/github-action-atmos-affected-stacks@v3
with:
atmos-config-path: ./rootfs/usr/local/etc/atmos/
atmos-version: .....
nested-matrices-count: 1
yeah, I think I found it! get-affected-stacks calls its own copy of setup-atmos
The setup-atmos call we defined was locked to 1.65.0, but the get-affected-stacks one wasn’t
Thanks!
sorry for this inconvenience, but adding Go templates to Atmos manifests requires using
{{`{{ .... }}`}}
for the “raw” embedded templates, there is no way around it
let me show you how you can modify the GHA to get the Atmos version from the Docker file from the repo, so those are always in sync (same version is used on localhost and in GHA):
This is only a problem in atmos manifests - like stack yaml? If that’s the case, no big deal. We’re using Go templates in like 6 places. Easy to adjust.
jobs:
context:
runs-on: [self-hosted, Ubuntu]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Extract ATMOS_VERSION from Dockerfile
id: extract_atmos_version
run: |
version=$(grep 'ARG ATMOS_VERSION=' Dockerfile | cut -d'=' -f2)
echo "atmos_version=$version" >> "$GITHUB_ENV"
- name: Atmos Affected Stacks
uses: cloudposse/github-action-atmos-affected-stacks@v3
with:
atmos-version: ${{ env.atmos_version }}
This is only a problem in atmos manifests - like stack yaml? If that’s the case, no big deal. We’re using Go templates in like 6 places. Easy to adjust.
just use
{{`{{ .... }}`}}
instead of
{{ .... }}
Thanks so much. Someday we’ll meet at a conference or something, and your beers are on me
Curious what the right approach is or how to do what i’m trying to do. I’m in Azure land and defining policies via json that just gets jsondecoded and i create a resource…not unlike aws https://github.com/cloudposse/terraform-aws-service-control-policies/tree/main/catalog…
I am trying to figure out where the right place to put those files is and initially i’m thinking stacks/catalog/policy-definnitions
But i’m not sure how to get the right path to flow through to the terraform component that lives in components/terraform/policy-definitions
Am i just stuck needing to define those in the component itself?
You are thinking along the right lines. One thing we like to do is not mix different types of config inside the catalog. So for our refarch, it’s strictly for “stack” manifests. We create a separate folder we call “policies” for our OPA policies. Now, this is slightly different from what you’re trying to do, because those policies relate to Atmos, and Atmos knows where to find them.
In your case, you’re defining policies that relate to a specific component and in a custom format, and those configurations only make sense in the context of that component. So when we do this, we typically have a folder inside the component with the configuration options related to that component. Don’t treat this as canon law, however; rules are meant to be broken. So to directly answer your specific question, I believe there’s a setting that is exposed which contains the base path for Atmos, and using the new features of Atmos 1.66 you should be able to refer to anything in the entire context using double mustaches. Unfortunately, I am on my phone and do not know the setting off the top of my head. As soon as @Andriy Knysh (Cloud Posse) is around, he can probably share what that is.
@Andriy Knysh (Cloud Posse) how to get the absolute base path of the atmos working directory? I suppose when @Andrew Ochsner is trying to access this config in the context of their module, they are in the temp module directory that terraform creates.
there are a few possible ways of doing this:
- Put the policies inside the component (e.g. components/terraform/<my-component>/modules/policy-definitions) and use them from the terraform code directly
- Put the policies directly in YAML in Atmos stack manifests as JSON strings in multi-line YAML (prob not a good idea)
- Put the policies in some other folder (e.g. stacks/catalog/policy-definnitions). In this case, the terraform code needs to know how to find that folder. This can’t be done directly in TF w/o hardcoding the path from the component to the folder. The only way to do it is to add another variable to the TF code and specify the path to the policies folder (that var can be configured in Atmos manifests).
- When you run atmos describe component <component> -s <stack>, you see:
  atmos_cli_config:
    base_path: <absolute_path_to_repo>
    stacks:
      base_path: stacks
you can use that in Atmos manifest for the component using Atmos 1.66.0 and Go
templates (https://atmos.tools/core-concepts/stacks/templating/) by joining:
{{ .atmos_cli_config.base_path }}/{{ .atmos_cli_config.stacks.base_path }}/catalog/policy-definnitions
Atmos supports Go templates in stack manifests.
in this case, you still need a separate Terraform variable to provide the path to the policies to the Terraform code, but that variable can be set in Atmos manifests using Go
templates:
e.g.
components:
terraform:
my-component:
vars:
policies_path: "{{ .atmos_cli_config.base_path }}/{{ .atmos_cli_config.stacks.base_path }}/catalog/policy-definnitions"
@Andrew Ochsner @Erik Osterman (Cloud Posse)
since Atmos supports the native Go
templates as well as the Sprig functions in Atmos manifests, you can also use any functions from (if needed) https://masterminds.github.io/sprig/paths.html
Useful template functions for Go templates.
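For instance (a sketch; the variable name is an assumption), Sprig’s list/join helpers could build the same path as the concatenation shown earlier:

```yaml
components:
  terraform:
    my-component:
      vars:
        # joins the Atmos base paths with "/" instead of inline concatenation
        policies_path: '{{ list .atmos_cli_config.base_path .atmos_cli_config.stacks.base_path "catalog/policy-definnitions" | join "/" }}'
```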
cool thanks for all of these options! had lost some momentum on this and will hopefully have time tomorrow or Friday to pick it back up so will noodle on it when i get there
2024-03-16
Hey all - new to the updated Atmos - I’m trying to create a local repo to bounce a few ideas around, but so far I’ve been unsuccessful getting Atmos to resolve a basic name pattern. It works in my work environment, but I didn’t do the initial config, so maybe I’m missing something.
Atmos.yaml -
components:
terraform:
base_path: "components/terraform"
stacks:
base_path: "stacks"
name_pattern: "{stage}"
schemas:
jsonschema:
base_path: "stacks/schemas/jsonschema"
opa:
base_path: "stacks/schemas/opa"
atmos:
manifest: "stacks/schemas/atmos/atmos-manifest/1.0/atmos-manifest.json"
stacks\example.yaml -
import: []
vars:
stage: example
terraform:
vars: {}
helmfile:
vars: {}
components:
terraform:
fetch-location:
vars: {}
fetch-weather:
vars: {}
output-results:
vars:
print_users_weather_enabled: true
helmfile: {}
command -> atmos terraform plan fetch-location -s example
Thanks everyone, have a good weekend otherwise.
Have a look at this one https://github.com/cloudposse/atmos/tree/master/examples/demo-stacks
This is what we use to generate the live examples on our atmos.tools landing page
Thank you, will give it a review.
I think it looks pretty similar to your example, so it should be a good starting point
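One thing worth comparing against the demo: atmos.yaml usually also tells Atmos which files under stacks/ are top-level stack manifests. A sketch (the glob patterns are assumptions; adjust to your layout):

```yaml
stacks:
  base_path: "stacks"
  # without included_paths, Atmos may not discover any top-level stacks
  included_paths:
    - "**/*"
  # files that are only meant to be imported, not treated as stacks themselves
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"
```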
Sweet yea now I’m good thank you.
2024-03-19
Hello everyone, I’m trying to provision the eks component but getting the below issue when running atmos plan on this component: “module.eks.data.utils_component_config.config” says it failed to find a match for the import, etc. Any idea how I can fix that issue?
Could you share the stack file under orgs/*
that has an import
for provisioning the eks component? And to be clear, when you say eks
, I assume you mean eks/cluster
, as linked here
This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups and Fargate profiles.
There are three things I would double check:
• that you have a file under orgs, such as the following:
import:
- stacks/catalog/eks/cluster.yaml
• that the catalog file specifies a component name, or the component in its metadata:
components:
terraform:
eks/cluster: {}
correct, eks/cluster
here is under orgs.
and lastly, I would assume you ran an atmos command such as: atmos terraform plan eks/cluster -s primetech-use2-dev
basically, if atmos isn’t finding the component, it’ll spit out the config for you to verify, but this likely means that the name you used can’t be found, or the stack file in orgs doesn’t match the expected tenant-environment-stage pattern
I ran the command atmos terraform plan eks --stack primetech-main-primetech-euc1-prod
I have updated the atmos file to find my stage.
so could you run a couple commands…
• atmos describe stacks --components eks --sections none
• atmos describe stacks --components eks/cluster --sections none
that should list out where the components are… if there are in fact any orgs importing them
for eks, I can see the list of eks components
gotcha… so, do you have any metadata blocks in the stacks/catalog/eks/cluster.yaml
file?
so, I would expect that you’d have something like:
components:
terraform:
eks:
metadata:
component: eks/cluster
I didn’t have that file; just now created it under stacks/catalog/eks/
I admit, there are a lot of ways to use atmos… so forgive me if my conventions seem strange. I’m following the practices from the atmos.tools site: https://atmos.tools/design-patterns/component-catalog
Component Catalog Atmos Design Pattern
look in remote-state.tf
it probably wants the vpc component to read the remote state from. You need to define it in Atmos stacks, or update the EKS component to provide the VPC parameters for the cluster
2024-03-20
Hi Team,
Could you explain how changing a component’s location or stacks location in our atmos/Cloud Posse setup affects its state? Are there any recommended practices or considerations we should be aware of when moving components to different locations or environments to ensure state consistency? If there is documentation that goes into detail, that will do as well.
Thanks for your help.
particularly the debate around “one big state of shared resources” versus “tiny states with atmos workflows/CP dependencies”. How do we best balance these approaches, especially considering the overlap between data layer services and storage resources deployed with apps? My understanding is that, apart from core infrastructure components like security, VPC, EKS, and SSM keys, most other elements are app-specific dependencies. These can either be exclusive to an app or shared across several, impacting how we manage their lifecycle. Could we discuss how to efficiently organize and differentiate these resources within our state management strategy?
Hey Shiv, someone will likely get back to you tomorrow. I am afk. TL;DR: by default we compute the backend bucket prefix using the component’s relative path. If components change physical disk location, those relative paths change. We support overriding this key prefix via configuration, if you want to ensure it’s static. Alternatively, files can be renamed via S3 (assuming that backend). Atmos does not automate relocation of state, due to the number of backends out there and the number of different types of potential state move operations.
In terms of backend architecture, there are two approaches we recommend. The one we chose by default is the simplest: a single backend. It simplifies a lot but has limitations. It’s complicated to give fine-grained IAM permissions across accounts and regions. However, practicing gitops, this limitation is really moot, since it’s up to the CI/CD layer to enforce. From a multi-region DR perspective, replication of the S3 backend is possible, but dynamodb replication, while possible, does not work. We have investigated this thoroughly.
The better pattern is a hierarchical backend architecture. One root bucket keeps the state of all other buckets. Provision one bucket per account per region. This simplifies the DR and permissions story. It complicates the remote state data source story. We have a solution coming out for this in April.
When i say “bucket” i am referring to proper tfstate backends, like the ones provisioned by the cloud posse modules
@Shiv if you are asking about what happens if you change the location of the Terraform component in the components/terraform folder or the Atmos component in the stacks folder, here’s what happens:
- If you change the component location in components/terraform (e.g. from vpc to vpc2), then nothing will change. You just need to point the Atmos component to the new location of the Terraform component, e.g.
  components:
    terraform:
      vpc:
        metadata:
          # Terraform component. Must exist in `components/terraform` folder.
          component: "vpc2"
- You can change Atmos stack names and locations anytime w/o affecting anything (just update the imports if those imported files change)
- You can rename an Atmos component. For the backends, the Atmos component name is the workspace_key_prefix (for s3) and the prefix (for GCP). If you want to change the Atmos component name but keep the same key_prefix (so as not to destroy the already deployed component), you can use backend.s3.workspace_key_prefix:
  components:
    terraform:
      vpc-new:
        backend:
          s3:
            workspace_key_prefix: vpc
in the example above, the old name of the Atmos component was vpc
, and now we changed it to vpc-new
but want to keep the same (old) workspace_key_prefix
as vpc
in general, changing stacks
(names, locations, etc.) can be done anytime. Those file names and folder structure are for people to organize the stack configurations and make it DRY
2024-03-21
2024-03-24
Hi team, I’m looking to build a single ECS cluster on a multi-tenant setup which has multiple different domains. However, the component appears to be tied to a single DNS zone according to the children (dns-primary and dns-delegated)
As I have these domains on an external registrar, I’ve tried to disable dns-delegated but it throws the error below:
Error: Attempt to get attribute from null value
│
│ on main.tf line 9, in locals:
│ 9: acm_certificate_domain = try(length(var.acm_certificate_domain) > 0, false) ? var.acm_certificate_domain : try(length(var.acm_certificate_domain_suffix) > 0, false) ? format("%s.%s.%s", var.acm_certificate_domain_suffix, var.environment, module.dns_delegated.outputs.default_domain_name) : format("%s.%s", var.environment, module.dns_delegated.outputs.default_domain_name)
I’m assuming by checking the code that it’s a 1-1 relationship, but I’m wondering if I’m doing something wrong or there’s an alternative to use a single ECS cluster with multiple (isolated) clients.
References:
• https://github.com/cloudposse/terraform-aws-components/tree/main/modules/ecs
• https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dns-primary
• https://github.com/cloudposse/terraform-aws-components/tree/main/modules/dns-delegated
We distinguish between what we call service discovery domains and vanity domains
Our components predominantly deal with service discovery domains. Then associate vanity domains with the SD domain.
@Erik Osterman (Cloud Posse) thanks for the answer. If I understood correctly, I’ll have a single service discovery domain to communicate between the ECS containers and then I can work through multiple vanity domains, right? I’ll move forward with this setup, thank you!
Yes, so each cluster’s ALB gets a DNS name via the service discovery domain. Then how you route it could depend on if you’re using Global Accelerators, CloudFront, API Gateways, or other ALBs, etc.
Because we support all of these various configurations, we’ve needed to be very flexible
2024-03-25
Hi everyone, atmos newbie here.
Could anyone explain why I would want to exclude providers.tf in atmos’s vendor.yaml file?
Example file excludes it, and I’ve been blindly copying it ever since, but want to understand the why… thanks!
any examples where some files are excluded are just examples to show how to do it. You would want to exclude a file from vendoring (downloading) if you have the same file with your custom logic/code and you don’t want to override it with the vendored code. You also might want to exclude some other artifacts, for example docs or images, if you don’t want to download them. In short, you would exclude files if:
- You already have the same files with custom code and don’t want to override them
- You would download the same file from another source (e.g. vendor the entire component from one remote source, but exclude a file, then vendor the file from another remote or local path)
- You don’t want to download some files from remote sources (b/c they are not directly related to the code)
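As a concrete sketch (the source, version, and component name are made up), an exclusion in vendor.yaml might look like:

```yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: example-vendor-config
spec:
  sources:
    - component: "vpc"
      source: "github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}"
      version: "1.400.0"
      targets:
        - "components/terraform/vpc"
      excluded_paths:
        # keep our own providers.tf with custom provider logic
        - "**/providers.tf"
        # skip artifacts not needed for provisioning
        - "**/README.md"
```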
2024-03-26
Hey guys! I’m currently exploring Atmos for our company. I have one question regarding stacks’ vars sections – is there any equivalent of jsonencode (or just the possibility of passing pure YAML)? As we also use TF to configure our SaaS solutions, it is sometimes impossible to fit into the TF type system; for those cases we stringify the input, and this functionality is quite a hard requirement. I couldn’t find any clue on the web. Thanks for any info!
@Roy this can be done in a few ways:
- Using YAML multi-line strings (https://yaml-multiline.info/)
components:
  terraform:
    my-component:
      vars:
        var1: |
          <YAML or JSON here>
Find the right syntax for your YAML multiline strings.
you can use raw YAML or raw JSON in a YAML multi-line variable, and it will be sent to the Terraform var as a string (which you can then use verbatim or decode)
- Since Atmos supports Sprig functions in Atmos templates, you can use any functions from https://masterminds.github.io/sprig/, including the JSON functions https://masterminds.github.io/sprig/defaults.html
Useful template functions for Go templates.
Useful template functions for Go templates.
see https://atmos.tools/core-concepts/stacks/templating for more info
Atmos supports Go templates in stack manifests.
- You can combine the two methods (using YAML multi-line and Go templates and Sprig functions). For example, something like this:
  components:
    terraform:
      my-component:
        settings:
          my-config:
            c1: "a"
            c2: "b"
            c3:
              d1: 1
              d2: 2
        vars:
          var2:
            a: 1
            b: 2
          var1: "{{ toJson .vars.var2 }}"
          var3: "{{ toJson .settings.my-config }}"
          var4: |
            <YAML or JSON here>
let us know if any help is needed, thanks
hey @Andriy Knysh (Cloud Posse), thanks a lot! I can see some possible disadvantages:
- I probably won’t be able to use validation capabilities (at least with jsonschema), will I?
- Usage of settings can be tricky for other users of the code.
- It cannot be like var2 to var1, as it will still result in a type error for var2.
I will explore it further on real-world examples and let you know, probably not sooner than next week, bests!
JSON schema validation is based on the final values, so it should work fine
2024-03-27
2024-03-28
is there an atmos docker image?
I’m not sure but there is a Dockerfile in the quick-start repo here: https://github.com/cloudposse/atmos/tree/master/examples/quick-start
2024-03-29
2024-03-31
Where do I configure which accounts correspond to dev/stage/prod?
those are variables that are passed to your provider
depends on what role_arn you use etc
So right now I’m using a conditional in the provider block to choose a different account profile name based on the input of var.stage.
Does this look right? https://github.com/kingparra/atmos-test
looks right, you can use profiles or role_arn whatever is better for you
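If you stay with profiles, one pattern is to make the profile itself a stack variable so the provider block reads var.aws_profile instead of branching on var.stage (the profile names here are hypothetical):

```yaml
# stacks/dev.yaml (hypothetical)
vars:
  stage: dev
  aws_profile: my-org-dev   # consumed by the provider block
---
# stacks/prod.yaml (hypothetical)
vars:
  stage: prod
  aws_profile: my-org-prod
```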
Thanks for your help. I’m sure I’ll be pestering this channel with a lot of questions as I figure out how to use this thing.
no problem