#atmos (2023-07)
2023-07-05
v1.39.0
what
- Make Atmos understand Terraform configurations and dependencies
- Update atmos describe component command
- Update atmos describe affected command
- Add Atmos custom commands to atmos.yaml
- Update docs for atmos describe component and atmos describe affected commands
why
- Atmos now understands Terraform configurations and dependencies! (using a Terraform parser from HashiCorp) This is useful to detect component dependencies when a Terraform component in the components/terraform folder uses…
@Andriy Knysh (Cloud Posse), on #office-hours, @Hans D shared he is dabbling with Cuelang to generate atmos configs
i like Cue (even more than OPA :)
would like to see that @Hans D
the only issue with Cue is that almost nobody knows it, so it’s not easy for people to start using it
it can be used for configuration, generation and validation (the nice part is that the configuration schema is used for validation automatically)
some small experiments re the component vendoring:
package root

import (
  "github.com/hans-d/cue-atmos/lib:atmos"
)

_CloudPosse_AWS: atmos.#NamedVendorConfig & {
  name: string
  source: {
    uri:     "github.com/cloudposse/terraform-aws-components.git//modules/\(name)?ref={{.Version}}"
    version: string | *"1.239.0"
  }
}
VendoredComponents: [Tool=_]: [Name=_]: atmos.#NamedVendorConfig & { name: string | *Name }
VendoredComponents: terraform: {
  "account-map":   _CloudPosse_AWS & {}
  "vpc":           _CloudPosse_AWS & {}
  "vpc-flow-logs": _CloudPosse_AWS & {}
}
RenderedVendorConfig: [
  for tool_component, tool_config in VendoredComponents
  for cmp_name, cmp_config in tool_config {
    _tmp: atmos.#RenderVendorConfig & {params: {
      tool:   tool_component
      config: cmp_config
    }}
    _tmp.result
  }
]
to spit out the various components.yaml files
still rough, but gets the job done atm
Still experimenting to get some decent stack config (without needing the various imports in the resulting yaml)
that looks super, thanks for sharing
i wanted to use Cue a long time ago, but the fact that people don’t know it (and it’s not easy to read for sure) has stopped us
// ValidateWithCue validates the data structure using the provided CUE document
maybe after looking at what you are doing we’ll finish the Cue integration
Cue for validation should be quite easy and can be done without most people knowing cue is involved.
Using cue for the actual config is a different ball game (but would be great if supported natively)
i always wanted it
but was hesitating because of completely new language
it’s definitely better than yaml
but more complicated
I can assist.
would be nice, thanks
For the stack, currently at:
stack: acme: core: test: {
  "us-east-2": {
    components: vpc: {
      vpc_1: baseline.vpc.default & { enabled: true }
    }
  }
  "eu-west-1": {
    components: vpc: {
      vpc_1: baseline.vpc.default & { enabled: true }
    }
  }
}
Includes various inheritance, region-specific mixins, e.g.
stack: [NS=_]: [Tenant=_]: [Stage=_]: [Environment=_]: components?: [Component=_]: [Name=_]: {
  name:   Name
  region: Environment
  if baseline[Component].region[Environment] != _|_ { baseline[Component].region[Environment] }
}
with the vpc component:
package baseline

import (
  cp "github.com/hans-d/cue-atmos/lib:cloudposse"
)

#vpc: cp.#aws_vpc

vpc: region: {
  "eu-west-1": {
    availability_zones: ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  }
  "us-east-2": {
    availability_zones: ["us-east-2a", "us-east-2b", "us-east-2c"]
  }
}

vpc: default: #vpc & {
  max_subnet_count:                   int | *3
  ipv4_primary_cidr_block:            "10.8.0.0/18"
  nat_gateway_enabled:                false
  nat_instance_enabled:               false
  public_subnets_enabled:             false
  subnet_type_tag_key:                "acme/subnet/type"
  vpc_flow_logs_enabled:              false
  vpc_flow_logs_log_destination_type: "s3"
  vpc_flow_logs_traffic_type:         "ALL"
}
you are a Cue hero :) with your help we’ll eventually support Cue in Atmos
I have to admit, that looks pretty darn clean.
Cue support within atmos doesn’t have to be fancy. The code above doesn’t even have to be part of the Go code. Those would be “samples”.
Only thing needed is a way to execute cue and feed the output to atmos; it could even be as simple as cue export my-config.cue --out yaml | atmos .... -f -
where the -f - allows for supplying the whole config as one giant YAML file. Or just redirect it to one or more YAML files (which can be generated by cue) and have atmos process those as normal YAML files.
Bonus would be having the module inputs available as a schema, e.g. by some command to generate that from the variables.tf (in the case of cloudposse modules)
all those cue commands can be Atmos custom commands, e.g. atmos cue …..
# Custom CLI commands
Atmos can be easily extended to support any number of custom commands, what we call “subcommands”.
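For illustration, a minimal sketch of what such a custom command could look like in atmos.yaml (the command name, argument, and steps here are hypothetical; it assumes the cue binary is on the PATH):

# atmos.yaml: hypothetical 'atmos cue export' custom command
commands:
  - name: cue
    description: "Work with CUE configurations"
    commands:
      - name: export
        description: "Render a CUE file to plain YAML"
        arguments:
          - name: file
            description: "CUE file to export"
        steps:
          - cue export {{ .Arguments.file }} --out yaml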
Dagger.io started with cue but moved towards an SDK approach for pipeline as code. I think they experienced the same challenge of learning curve and adoption. It’s very confusing initially.
That is the difference between requiring cue, like dagger did, or using it as an option. For atmos i would say cue should (must?) be optional for the user
i guess we could support both YAML (as it is now, and many people know it), and Cue (for advanced usage by advanced users)
2023-07-06
v1.40.0
what
- Documentation pages for GitHub Actions for Terraform Plan and Apply
why
- Documented the two new GitHub actions for Terraform Plan and Apply
Hey folks – How did the Atmos + Atlantis support end up turning out? Are there any users of that workflow? @jose.amengual maybe you?
We have a project coming up where the client is set on using Terragrunt. We were going to just accept that and use Atlantis with them so we didn’t need to write TF pipes, but now we’re finding out TG + Atlantis isn’t really that great of a combo? So I’m wondering if we can push them in another direction on the TF framework side, but wanted some more info on how Atlantis is working with Atmos.
we’ve added a lot of atlantis functionality to atmos and created all the docs
but yes, @jose.amengual was the main driver of all of that
Atmos natively supports Atlantis for Terraform Pull Request Automation.
it is awesome
@jose.amengual I know you asked for many improvements, and we did it, but maybe not everything
let us know if you have/had any issues that need to be fixed
yes, there are a few things I asked for maybe, but I believe a lot is covered for advanced/basic users
let us know if anything else, we’ll improve it as you tell us, you are the main Atlantis+Atmos person here
Matt it works really well. I used it without touching the Atlantis image because the customer was afraid of locking themselves into atmos and they wanted a way out, so my GitHub Action pushed varfiles and backend files for the PRs (that is a bit annoying) to trigger Atlantis runs
so Atlantis had no idea atmos existed
but I think you get far more by adding atmos to the atlantis image
any question you might have, please do not hesitate to ask
Awesome stuff – Great to know as I like having this as an option in our back pocket. Thanks for the quick feedback @jose.amengual + @Andriy Knysh (Cloud Posse). If we do end up going this route, I’ll be sure to reach out!
As an aside, we just launched native GitHub Actions support for Atmos. No atlantis needed.
https://atmos.tools/integrations/github-actions/atmos-terraform-plan
The Cloud Posse GitHub Action for “Atmos Terraform Plan” simplifies provisioning Terraform from within GitHub using workflows. Understand precisely what to expect from running a terraform plan from directly within the GitHub UI for any Pull Request.
The Cloud Posse GitHub Action for “Atmos Terraform Apply” simplifies provisioning Terraform entirely within GitHub Action workflows. It makes it very easy to understand exactly what happened directly within the GitHub UI.
well Erik you broke my
if you are using GitHub this is awesome, if not, well… Atlantis is definitely an option
Yep, just an alternative.
Atlantis will work better with BitBucket, GitLab, etc. Also, we’ve only tested the GHAs so far with AWS.
Yeah – I think we’d be an early user of the GHAs, but this client is on BitBucket, so we’ll likely be going that route.
Thanks for sharing though Erik! Glad to see it released – I’m sure it’ll be a hit!
2023-07-07
2023-07-08
2023-07-10
Hey all, I’m trying to create an admin stack for spacelift and getting this error message:
│ Error: the stack name pattern '{tenant}-{environment}-{stage}' specifies 'tenant`, but the stack 'catalog/account' does not have a tenant defined in the stack file 'catalog/account'
│
│ with module.child_stacks_config.module.spacelift_config.data.utils_spacelift_stack_config.spacelift_stacks,
│ on .terraform/modules/child_stacks_config.spacelift_config/modules/spacelift/main.tf line 1, in data "utils_spacelift_stack_config" "spacelift_stacks":
│ 1: data "utils_spacelift_stack_config" "spacelift_stacks" {
I’m running my command in a geodesic container locally:
atmos terraform apply spacelift/admin-stack -s core-gbl-auto
Also, my file structure is like this:
stacks
  catalog
  core
  infra
  mixins
Any help will be highly appreciated!
run
atmos describe component spacelift/admin-stack -s core-gbl-auto
you should have tenant in vars
if you don’t, then tenant is not defined in your stack files
if you need tenant, then you have to define it in YAML config
if you don’t use tenant, then in atmos.yaml you need to update the stack name pattern to {environment}-{stage}
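e.g. (a minimal atmos.yaml sketch for the no-tenant case):

stacks:
  # Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
  # drop '{tenant}' when your stacks don't define a tenant
  name_pattern: "{environment}-{stage}"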
I do see tenant and we do use it
stage: auto
tenant: core
terraform_version: 1.3.9
what do you have in atmos.yaml for this
stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  # Since we are distinguishing stacks based on namespace, and namespace is not part
  # of the stack name, we have to set `included_paths` via the ENV var in the Dockerfile
  included_paths:
    - "orgs/**/*"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
    - "**/_defaults.yaml"
included_paths must include all top-level stacks
excluded_paths must exclude all component-related config
in your case
stacks
  catalog
  core
  infra
  mixins
included_paths:
  - "core/**/*"
# Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
excluded_paths:
  - "**/_defaults.yaml"
what is in the infra folder?
infra is our infrastructure OU which has all workload stages such as prod, staging, etc.
then it should be
included_paths:
  - "core/**/*"
  - "infra/**/*"
taking into account that both core and infra contain top-level stacks (and not other YAML files like component configs, mixins, etc.)
same issue, this is what my atmos.yaml looks like
base_path: ""
components:
terraform:
# Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_BASE_PATH' ENV var, or '--terraform-dir' command-line argument
# Supports both absolute and relative paths
base_path: "components/terraform"
# Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_APPLY_AUTO_APPROVE' ENV var
apply_auto_approve: false
# Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_DEPLOY_RUN_INIT' ENV var, or '--deploy-run-init' command-line argument
deploy_run_init: true
# Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_INIT_RUN_RECONFIGURE' ENV var, or '--init-run-reconfigure' command-line argument
init_run_reconfigure: true
# Can also be set using 'ATMOS_COMPONENTS_TERRAFORM_AUTO_GENERATE_BACKEND_FILE' ENV var, or '--auto-generate-backend-file' command-line argument
auto_generate_backend_file: false
stacks:
# Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments
# Supports both absolute and relative paths
base_path: "stacks"
# Can also be set using 'ATMOS_STACKS_INCLUDED_PATHS' ENV var (comma-separated values string)
included_paths:
- "core//*"
- "infra//"
# Can also be set using 'ATMOS_STACKS_EXCLUDED_PATHS' ENV var (comma-separated values string)
excluded_paths:
- "/_defaults.yaml"
- "catalog/**/*"
# Can also be set using 'ATMOS_STACKS_NAME_PATTERN' ENV var
name_pattern: "{tenant}-{environment}-{stage}"
workflows:
# Can also be set using 'ATMOS_WORKFLOWS_BASE_PATH' ENV var, or '--workflows-dir' command-line arguments
# Supports both absolute and relative paths
base_path: "stacks/workflows"
logs:
verbose: false
colors: true
and my file structure
it’s weird because everything works for other components; I can deploy all my AWS components and they work. The only one not working is the admin-stack for Spacelift
try
included_paths:
  - "core/**/*"
  - "infra/**/*"
not
included_paths:
  - "core//*"
  - "infra//"
and
excluded_paths:
  - "/_defaults.yaml"
  - "catalog/**/*"
  - "mixins/**/*"
if still not working, you can DM me your repo and I’ll take a look
2023-07-11
Hi Team, I am looking for help with the atmos tool. We are seeing some tag name changes through Spacelift drift detection. My question is how to know where the values are passed to it, i.e. where the value syca-plat-use2-dev in the example below comes from. Any pointers are helpful
  # module.store_write.aws_ssm_parameter.default["/platform/syca-plat-use2-dev-eks-cluster/_metadata/kube_version"] will be updated in-place
  ~ resource "aws_ssm_parameter" "default" {
        id   = "/platform/syca-plat-use2-dev-eks-cluster/_metadata/kube_version"
        name = "/platform/syca-plat-use2-dev-eks-cluster/_metadata/kube_version"
      ~ tags = {
            "Environment" = "xx"
          ~ "Name"        = "syca-plat-use2-dev" -> "syca-plat-use2-dev-platform"
            "Namespace"   = "xx"
            "Stage"       = "xx"
            "Tenant"      = "xx"
        }
      ~ tags_all = {
          ~ "Name" = "syca-plat-use2-dev" -> "syca-plat-use2-dev-platform"
            # (4 unchanged elements hidden)
        }
        # (9 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
this is not related to Atmos. This looks like the Spacelift component was updated and the new version tries to use different Spacelift stack names
please ask this question in CloudPosse Slack
cc @Jeremy White (Cloud Posse)
ok..
I moved it to our general channel in Cloud Posse
2023-07-12
2023-07-13
is there a way to template a workflow? to have it rendered before it is run
and this is not that straightforward b/c to execute a Go template you need to provide a context (all the variables defined in the template). So to implement this, all those variables will have to be provided somehow, possibly on the command line. This looks too complicated (and we need to think about it)
How are you guys managing the interface for custom policies being passed as inputs in Atmos? I just do not like the fact that users need to pass this huge blob of text in the stack; I’m trying to think how to organize it
write the policy to a file in a policies folder in the component’s folder, update the component to accept a file name, and read from the file in TF
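For example, a hypothetical stack snippet under this approach (the component and file names are made up; the component itself would read the document with Terraform's file() function):

components:
  terraform:
    my-component:
      vars:
        # path relative to the component folder, e.g. components/terraform/my-component/policies
        custom_policy_file: "policies/deny-public-access.json"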
I c ok, I have done that for other components
I was trying to think of maybe using the iam-policy module and passing actions, principals and such to create the policy instead
2023-07-14
v1.41.0
what
- Add examples and tests for validation of components with multiple hierarchical inheritance
why
- Show that the settings.validation section is inherited all the way down the inheritance chain, even when using multiple abstract base components, and the atmos validate component command executes the validation policies placed at any level of the inheritance hierarchy. base-component-1 is an abstract component, and it has the settings.validation section defined using an OPA policy. base-component-3 is…
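For reference, a minimal stack-manifest sketch of the settings.validation pattern described above (the component and policy file names are hypothetical):

components:
  terraform:
    base-component-1:
      metadata:
        type: abstract
      settings:
        validation:
          check-base-component-1:
            schema_type: opa
            schema_path: "base-component-1/validate.rego"
            description: "OPA policy; inherited by all components derived from base-component-1"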
2023-07-19
Is it possible to rename files when vendoring? I know it is possible via mixins. How can it be done via source? I want all pulled files to have a modified name.
# example
main.tf -> main.vendored.tf
… from this config…
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: tfstate-backend
spec:
  source:
    uri: github.com/cloudposse/terraform-aws-components.git//modules/tfstate-backend?ref={{.Version}}
    version: 1.256.0
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    excluded_paths:
      - "**/context.tf"
  mixins:
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf
not possible using source
the whole purpose of vendoring is to bring your repo up to date with the upstream repo
Ah… Okay. I understand.
if we start changing and renaming files, it will be not possible to compare anything
Yeah… I guess it could also lead to collisions, e.g. if the same file was vendored with different names.
you can always use https://developer.hashicorp.com/terraform/language/files/override to add your own files and override the upstream configuration (leaving the upstream files untouched)
Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.
there is a way of doing it using source though: exclude a file that you don’t want to vendor
Thanks. Not being able to rename when vendoring is no big deal.
I have used atmos for the past 2+ years. But today was the first time I decided to use the feature (and will continue to do so).
You have explained the problem vendoring is solving, so I will adjust.
@Erik Osterman (Cloud Posse) I was watching the office hours and I hear the question about monorepo modules
we are creating a big monorepo project with like 300+ modules
we have been thinking of a few different ways (using atmos)
one is to do releases for every single component change
another was to create tags per component, e.g. 1.1.1-vpc (for the vpc module release)
the tag per component is nice since you have a lot of granular control
that is why I asked @Andriy Knysh (Cloud Posse) about adding to atmos the ability to vendor specific component versions based on some metadata like:
components:
  terraform:
    vpc:
      metadata:
        component: vpc
        version: 1.1.1-vpc
      vars:
        name: vpc
and by defining that and adding some setting in atmos.yaml, atmos could automatically do an atmos vendor pull inside the component folder; if no component.yaml (vendoring file) exists then it will pull all the files, otherwise it just reads the component.yaml and does the pull
basically it is like adding an init cycle to atmos, like what terraform init does
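A hypothetical per-component vendor config under this scheme (the repo URI and tag are made up) could look like:

# components/terraform/vpc/component.yaml (hypothetical)
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc
spec:
  source:
    # per-component tag in the monorepo, e.g. 1.1.1-vpc
    uri: github.com/acme/terraform-monorepo.git//modules/vpc?ref={{.Version}}
    version: 1.1.1-vpc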
Yea, we’ve considered that too. So far, we haven’t yet decided on the path forward.
another thing to add is that this pattern is known to everyone that uses terraform, so it is easy to understand
The hard part is knowing what the next version of a component should be
I think we would need to couple it with a file based versioning approach
So each component defines its version. When it’s merged, that version gets tagged. But that way there’s not a global version moving forward.
We need a sane way to release multiple components at the same time that are part of a release too.
global as in for all components?
I don’t know how to say… just that the vpc might be at 1.0.0 and eks might be at 5.0.0
the next “patch” release of the VPC is not 1.0.0 -> 5.0.1 for example. It’s 1.0.1
That means we have to know what the previous release was
It could be calculated from the tags, or it could be a version file.
Spacelift uses version files.
And then releases based on the version in the file.
wait, you can have 1.0.2-vpc, 5.0.2-eks component tags and keep going that way, but the whole components repo can have the releases, which will each time include the latest version of each component
meaning that you do not create a release for the individual components, you just create tags
but those are treated as independent component releases
But what about when vpc reaches 5.0.2, 6 months later.
There was a 5.0.2 release already.
why does that matter? the terraform-aws-components repo could be at version 3.0.1, and that will include the vpc component that happens to be at tag 5.0.6-vpc
so you will end up with versions not aligned for sure
and millions of tags
the release of terraform-aws-components at version 3.0.1 does not even have to look at the latest tags of the vpc component; you just always merge the individual component tags to main and do not delete the tags afterwards
which in a way is the same as file versioning? but instead of using a commit hash you use a tag
The problem is if you cut a release for 5.0.2, and that release created 5.0.2-eks; then months later, it’s time to release the vpc 5.0.2, and if you use a “github release” it will now be in conflict.
keep in mind, i’m not saying a github release of 5.0.2-vpc
why? 5.0.2 can only be used for the whole repo, 5.0.2-eks can only be used for the eks release tag
anyways, my point is you can’t use releases.
you can use tags.
or releases that equal the tags
I think we are saying the same thing
ok
I will jump on the next office hours since it is a problem we are trying to solve too
Cool, please do
(also, our incentives might be a little bit different; we’re solving this for components and need a way to keep them stable at various major releases while also not inhibiting new development)
same here
2023-07-28
v1.42.0
what
- Add Sprig functions to Atmos Go templates in imports
- Various fixes and improvements for atmos describe component and component dependencies calculation
- Update docs https://atmos.tools/cli/commands/describe/component/
why
- Sprig functions in Atmos Go templates in imports provide…
Useful template functions for Go templates.
Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
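For context, a minimal sketch of an imported stack template using Sprig functions (the file path, context variable, and values are hypothetical):

# stacks/mixins/region.yaml.tmpl (hypothetical), imported with a context, e.g.
#   import:
#     - path: "mixins/region.yaml.tmpl"
#       context:
#         region: us-east-2
vars:
  region: '{{ .region }}'
  # Sprig's 'replace' strips the dashes: us-east-2 -> useast2
  environment: '{{ .region | replace "-" "" }}'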