#atmos (2023-06)
2023-06-01
Hello, I'm running into this error when trying to apply a stack that uses cloudposse/terraform-aws-ecs-cluster:
│ Error: Unsupported attribute
│
│ on .terraform/modules/autoscale_group/outputs.tf line 23, in output "autoscaling_group_tags":
│ 23: value = module.this.enabled ? aws_autoscaling_group.default[0].tags : []
│
│ This object has no argument, nested block, or exported attribute named "tags". Did you mean "tag"?
stacks/catalog/ecs-cluster/defaults.yaml:
components:
terraform:
ecs-cluster/defaults:
metadata:
component: ecs-cluster
type: abstract
settings:
spacelift:
workspace_enabled: false
vars:
enabled: true
name: ecs-cluster
capacity_providers_fargate: true
capacity_providers_fargate_spot: true
container_insights_enabled: true
tag:
Name: testing
stacks/orgs/metrop/pmc/dev/us-west-2.yaml:
import:
- mixins/region/us-west-2
- orgs/metrop/pmc/dev/_defaults
- catalog/ecs-cluster/defaults
components:
terraform:
ecs-cluster:
vars:
name: test
atmos describe stacks
pmc-uw2-dev:
components:
terraform:
ecs-cluster:
backend:
acl: bucket-owner-full-control
bucket: pmc-test-terraform-state
dynamodb_table: pmc-test-terraform-state-lock
encrypt: true
key: terraform.tfstate
region: us-west-2
role_arn: null
workspace_key_prefix: ecs-cluster
backend_type: s3
command: terraform
component: ecs-cluster
deps:
- mixins/region/us-west-2
- mixins/stage/dev
- orgs/metrop/_defaults
- orgs/metrop/pmc/_defaults
- orgs/metrop/pmc/dev/us-west-2
env: {}
inheritance: []
metadata: {}
remote_state_backend:
acl: bucket-owner-full-control
bucket: pmc-test-terraform-state
dynamodb_table: pmc-test-terraform-state-lock
encrypt: true
key: terraform.tfstate
region: us-west-2
role_arn: null
workspace_key_prefix: ecs-cluster
remote_state_backend_type: s3
settings: {}
vars:
environment: uw2
name: test
namespace: pmc
region: us-west-2
stage: dev
tenant: pmc
workspace: pmc-uw2-dev
ecs-cluster/defaults:
backend:
acl: bucket-owner-full-control
bucket: pmc-test-terraform-state
dynamodb_table: pmc-test-terraform-state-lock
encrypt: true
key: terraform.tfstate
region: us-west-2
role_arn: null
workspace_key_prefix: ecs-cluster
backend_type: s3
command: terraform
component: ecs-cluster
deps:
- catalog/ecs-cluster/defaults
- mixins/region/us-west-2
- mixins/stage/dev
- orgs/metrop/_defaults
- orgs/metrop/pmc/_defaults
- orgs/metrop/pmc/dev/us-west-2
env: {}
inheritance: []
metadata:
component: ecs-cluster
type: abstract
remote_state_backend:
acl: bucket-owner-full-control
bucket: pmc-test-terraform-state
dynamodb_table: pmc-test-terraform-state-lock
encrypt: true
key: terraform.tfstate
region: us-west-2
role_arn: null
workspace_key_prefix: ecs-cluster
remote_state_backend_type: s3
settings:
spacelift: {}
vars:
capacity_providers_fargate: true
capacity_providers_fargate_spot: true
container_insights_enabled: true
enabled: true
environment: uw2
name: ecs-cluster
namespace: pmc
region: us-west-2
stage: dev
tenant: pmc
workspace: pmc-uw2-dev
atmos terraform plan ecs-cluster -s pmc-uw2-dev
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.0.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Releasing state lock. This may take a few moments...
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "region" but a value was found in file "pmc-uw2-dev-ecs-cluster.terraform.tfvars.json". If you meant to use
│ this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your organization. To reduce the
│ verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Error: Unsupported attribute
│
│ on .terraform/modules/autoscale_group/outputs.tf line 23, in output "autoscaling_group_tags":
│ 23: value = module.this.enabled ? aws_autoscaling_group.default[0].tags : []
│
│ This object has no argument, nested block, or exported attribute named "tags". Did you mean "tag"?
╵
exit status 1
`which terraform` version
Terraform v1.4.6
on darwin_arm64
this is not related to Atmos. This is the new Terraform AWS provider version 5 causing a lot of issues
while we are updating the modules to fix the issues, you can pin your provider to v4 for now
gotcha!
@matt has joined the channel
2023-06-05
Hello there, I’m trying out the tutorial here: https://atmos.tools/tutorials/first-aws-environment but atmos terraform apply tfstate-backend --stack ue2-root outputs:
tfstate_backend_dynamodb_table_name = "acme-ue2-root-tfstate-delicate-elf-lock"
tfstate_backend_s3_bucket_arn = "arn:aws:s3:::acme-ue2-root-tfstate-delicate-elf"
while atmos terraform generate backend tfstate-backend --stack ue2-root generates:
"bucket": "acme-ue2-tfstate-delicate-elf",
"dynamodb_table": "acme-ue2-tfstate-lock-delicate-elf",
After I edit the backend.tf.json file to match the correct resource names, it succeeds migrating state to S3.
But continuing to deploy the static site, I get
Workspace "uw2-dev" doesn't exist.
I’m probably doing something very wrong, but I cannot figure out what
Get your first AWS environment deployed using Atmos, Stacks, and Vendoring
@Dan Miller (Cloud Posse)
Do you get a prompt after that message?
Workspace "uw2-dev" doesn't exist.
This is typically a Terraform message to warn you that the workspace doesn't exist yet (since it hasn't been created). You should then be able to continue and prompt Terraform to create that workspace. The default workspace option should be correct
If the deployment step fails after that message, then this is a different issue
This is the full message
16:44 $ atmos terraform workspace static-site --stack uw2-dev
Initializing modules...
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Using previously-installed hashicorp/random v3.5.1
- Using previously-installed hashicorp/time v0.9.1
- Using previously-installed hashicorp/local v2.4.0
- Using previously-installed hashicorp/aws v5.1.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Workspace "uw2-dev" doesn't exist.
You can create this workspace with the "new" subcommand.
failed to lock s3 state: 2 errors occurred:
* ResourceNotFoundException: Requested resource not found
* ResourceNotFoundException: Requested resource not found
exit status 1
16:44 $
I’m running atmos natively on my Mac - do you think this could somehow be fixed with Geodesic? Also notice both the S3 bucket name and DynamoDB table generated by atmos are missing root, and that lock is misplaced
I’m running atmos natively on my Mac - do you think this could somehow be fixed with Geodesic?
Geodesic is a great way to ensure that your local working environment doesn't have errors, but in this case I don't believe it would resolve the issue
Why are you selecting the workspace with atmos terraform workspace static-site --stack uw2-dev
? Atmos will select the workspace for you, or create it, as part of any terraform command
Ah, I was just trying things and copy/pasted the wrong part - this is the correct one:
10:10 $ atmos terraform deploy static-site --stack uw2-dev
Initializing modules...
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Using previously-installed hashicorp/local v2.4.0
- Using previously-installed hashicorp/aws v5.1.0
- Using previously-installed hashicorp/random v3.5.1
- Using previously-installed hashicorp/time v0.9.1
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Workspace "uw2-dev" doesn't exist.
You can create this workspace with the "new" subcommand.
failed to lock s3 state: 2 errors occurred:
* ResourceNotFoundException: Requested resource not found
* ResourceNotFoundException: Requested resource not found
exit status 1
It was the dynamodb table that was missing -root- in stacks/catalog/globals.yaml also
Yes nice find! I just opened a PR to fix that this morning but hadn’t had a chance to reply here
2023-06-06
Hi, I am a bit confused by something in the example stack, and some of the docs for the modules. Say I have the account
module deployed in a stack (gbl-root
):
components:
terraform:
"infra/account":
metadata:
component: infra/account
(thread to follow…)
I use this since I am organizing the directory layout as in the examples, with the components/terraform/infra
dir. In this configuration, the inferred base component is empty, and so the workspace is set to gbl-root
. The state file is written at s3://<ns>-ue2-root-tfstate/account/gbl-root/terraform.tfstate
. So, when generating the state file path, it strips off the infra/
portion of the name to come up with the component.
However, when I attempt to deploy the account-map
module (infra/account-map
), it complains that the config needed for the remote state cannot find account
:
Searched all stack YAML files, but could not find config for the component 'account' in the stack 'gbl-root'
This makes sense, since that component reference in the account-map
module’s [remote-state.tf](http://remote-state.tf)
file is hard-coded to account
. If I change the account
stack to:
components:
terraform:
"account":
metadata:
component: infra/account
The infra/account-map
deploy is happy, since it can find the component config. However, with this config, the state file is now at s3://<ns>-ue2-root-tfstate/account/gbl-root-account/terraform.tfstate
.
I can of course make this change to the location, but I am trying to reconcile why the inferred component for the stack in the prior case (infra/account
) seems to differ from the component inferred for the S3 path, which is just account
. My suspicion is that the first part of the S3 path is based on the terraform component’s path in the filesystem, since that is the “base” component, and you could have multiple instantiations of that, e.g., vpc/ue2-prod/vpc-1
, vpc/ue2-prod/vpc2
. The follow up question is that if that is right, why is the component
that is passed to [remote-state.tf](http://remote-state.tf)
in various modules in terraform-aws-components
parameterized as a variable then?
Am I missing something here in my configuration?
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.
For remote-state, the Atmos component name is used, i.e. the component name that you use in YAML, e.g.
components:
terraform:
"infra/account":
Hi @Andriy Knysh (Cloud Posse), well I feel dumb. I somehow missed this in the docs.
I guess the answer is to override the remote-state locally to parameterize per that example. Thanks!
let me know if you need help with atmos configs
2023-06-07
is there a way to reference values inside of atmos yet?
we want to pass the version in component.yaml to a tag on a component, so we know the version that was used to deploy it
maybe by having a yaml file with all the versions for a specific infra, we could pass that to the vendor command? (as a new feature)
imagine this :
components:
vpc: 0.0.2
rds: 1.1.2
and then any atmos pull command can find and read that file that declares the versions for all the components, and if not found, default to the version in component.yaml
interesting use case, to have the component version on a tag automatically. we need to think about this
we do that with regular root modules and we pass the tag on a pipeline where we extract the version and such
An alternative idea is that the component should be aware.
In this case, it’s easier for the component to read the component.yaml
and parse the version.
Then pass those values to the context.
yes , that would work too
2023-06-08
2023-06-12
I’m trying to use : https://atmos.tools/core-concepts/stacks/imports#go-templates-in-imports and I get an error
Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
invalid import 'map[context:map[owner:mike owner_ip:1.1.1.1/32 owner_tag:pepe] path:catalog/ec2/defaults]' in the file 'lab/us-west-2.yaml'
no matter what I do , I just get that error
even in debug mode
please show your code, it’s not possible to say anything w/o looking at the code
1.35.0
your code, not atmos version
---
# stacks/catalog/ec2/defaults.yaml
import: []
components:
terraform:
"ec2-{{ .owner }}":
metadata:
component: ec2
# type: abstract
vars:
name: "{{ .owner }}"
tags:
DELETEME: "true"
Owner: "{{ .owner_tag }}"
associate_public_ip_address: true
instance_type: g5.2xlarge
availability_zones: ["us-west-2a", "us-west-2b", "us-west-2c"]
ami: "ami-111111111"
volume_size: 250
delete_on_termination: true
device_encrypted: true
security_group_rules:
- type: "egress"
from_port: 0
to_port: 65535
protocol: "-1"
cidr_blocks:
- "0.0.0.0/0"
- type: "ingress"
from_port: 22
to_port: 22
protocol: "tcp"
cidr_blocks:
- "{{ .owner_ip }}"
import:
- lab/globals
- path: "catalog/ec2/defaults"
context:
owner: "pepe"
owner_tag: "pepe"
owner_ip: "1.1.1.1/32"
as it currently stands, all imports must be the same type
if you use path on one of the imports, you have to use the same format on all of them (within one file)
import:
- path: lab/globals
- path: "catalog/ec2/defaults"
context: ...
(don’t use context
if you don’t have it for a specific import)
I see ok, that worked
thanks as always
it would be cool to be able to iterate over a list with these imports
you mean in atmos code?
yes
like :
- path: "catalog/ec2/defaults"
context:
owner: [ "pepe", "jack", "arman"]
owner_tag: [ "pepe", "jack", "arman"]
owner_ip: ["1.1.1.1/32", "1.2.2.1/32", "1.3.3.1/32"]
@Andriy Knysh (Cloud Posse) can we have a more meaningful error message?
i wanted to support both formats in the same file, will prob fix it when I get time
@jose.amengual indent wrong
yes, it's the stupid copy/paste thing
also, just to be clear, when andriy said they all had to be of the same type, it was the import block portion.
e.g. Without context
imports:
- stack1
- stack2
- stack3
or with context:
imports:
- path: stack1
context:
abc: [ 123 ]
- path: stack2
- path: stack3
context:
foo: bar
and not
imports:
- stack1
- path: stack2
context: { ... }
- stack3
note, context can be anything
correct
but for right now, cannot mix a list of string imports and map imports
yes, correct (I’ll fix it in Atmos, it just needs to process each import separately; currently it converts the whole YAML into a list of structs and fails if a struct does not conform to the schema)
2023-06-14
In which file, or for that matter where, does one set the {tenant}-{environment}-{stage}? Where do I set the tenant, the environment, or the stage?
in atmos.yaml
stacks:
# Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
# Supports both absolute and relative paths
base_path: "stacks"
# Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
included_paths:
- "**/**"
# Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
excluded_paths:
- "**/_defaults.yaml"
# Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
name_pattern: "{namespace}-{environment}-{stage}"
name_pattern
Cheers, I get that, but what are the values of namespace, environment and stage? Where do I set those values?
is there a way to list the default values?
you will put them in your stack file
vars:
environment: lab
namespace: pepe
stage: uw2
region: us-west-2
components:
terraform:
vpc:
vars: []
the top-level vars are global (scoped to the stack file) variables
Cheers, and then the stack must be named in accordance with the naming convention {namespace}-{environment}-{stage}
correct
the file name and location does not matter
but the vars values are what dictate which stack these components belong to
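To make the mapping concrete, here is a minimal sketch (hypothetical values, reusing the example above and assuming the name_pattern from the earlier atmos.yaml snippet) of how the global vars in a stack manifest resolve to the stack name you pass with -s:

```yaml
# atmos.yaml (assumed)
stacks:
  name_pattern: "{namespace}-{environment}-{stage}"

# any stack manifest, e.g. stacks/pepe-lab-uw2.yaml (the file name and location do not matter)
vars:
  namespace: pepe
  environment: lab
  stage: uw2
  region: us-west-2

components:
  terraform:
    vpc:
      vars: {}

# the components in this file are addressed as the stack "pepe-lab-uw2",
# e.g. `atmos terraform plan vpc -s pepe-lab-uw2`
```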
Ok let me have a try and see
hmm no luck there must be something wrong with the setup
.
├── components
│ └── terraform
│ └── infra-init
│ ├── component.yaml
│ ├── main.tf
│ ├── providers.tf
│ ├── terraform.tf
│ ├── tf_variables.tf
│ └── variables.tfvars
└── stacks
├── catalog
│ └── dvsa-dev-poc.yaml
├── mixins
├── org
│ └── dvsa
│ └── core
│ ├── dev
│ ├── prd
│ ├── pre
│ └── uat
├── terraform
└── workflows
└── infra-init.yaml
error?
atmos workflow init/tfstate -f infra-init.yaml -s dvsa-dev-poc --dry-run
I get no output
and no error
echo $? returns 0
you need to show your stack file
dvsa-dev-poc.yaml
vars:
environment: dev
namespace: dvsa
stage: poc
region: eu-west-1
components:
terraform:
infra-init/defaults:
metadata:
workspace_enabled: false
and the atmos.yaml name pattern?
and the workflow file too
name_pattern: "{tenant}-{environment}-{stage}"
wait I have not defined a tenant
let me add that
you can always use the describe stacks command to see what atmos finds and renders
dvsa-dev-poc:
components:
terraform:
infra-init/defaults:
backend: {}
backend_type: ""
command: terraform
component: infra-init/defaults
deps:
- catalog/dvsa-dev-poc
env: {}
inheritance: []
metadata:
workspace_enabled: false
remote_state_backend: {}
remote_state_backend_type: ""
settings: {}
vars:
environment: dev
namespace: mot
region: eu-west-1
stage: poc
tenant: dvsa
workspace: dvsa-dev-poc
The workflow file is
more infra-init.yaml
workflows:
init/tfstate:
description: Provision Terraform State Backend for initial deployment.
steps:
- command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --auto-generate-backend-file=false
- command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
type: shell
- command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --init-run-reconfigure=false
what if you do a plan against infra-init/defaults?
that needs to work before you can put it on a workflow
especially with the deploy command
I want to plan against infra-init
it's just going to bootstrap the environment
create the S3 and Dynamodb backend
but you need all the components defined in stacks to be able to use them
I am using the cloudposse environment bootstrap
infra-init/defaults and tfstate-backend need to be defined, and plan should work
there is no backend; the backend is local. This workflow will create the backend
understood, so you deploy that first; you do not need a workflow for that
but you could use a workflow
I want it all captured, as this is a POC to use as a Ref Arch for other teams to accelerate their provisioning of multiple environments
if this can be captured as a workflow it's a bit easier
Maybe the workflow needs to be updated
workflows:
init/tfstate:
description: Provision Terraform State Backend for initial deployment.
steps:
- command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --auto-generate-backend-file=false
- command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
type: shell
- command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --init-run-reconfigure=false
no problem, but you need to test first that a plan works before the workflow
if you run plan and it fails, the workflow will never work
Even if it fails, I am OK with the Terraform and how to fix that. I am just trying to get all the pieces of atmos figured out so I can use it
But it should bubble up the errors
I am just running terraform
I guess it should
export ATMOS_LOGS_LEVEL=Trace and see
❯ export ATMOS_LOGS_LEVEL=Trace
user: imranhussain on C02G32WUML85 atmos/stacks/catalog on main [?] on ☁️ dvsarecallsmgmt (eu-west-1)
❯ atmos workflow init/tfstate -f infra-init.yaml -s dvsa-dev-poc --dry-run
Executing the workflow 'init/tfstate' from '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/workflows/infra-init.yaml'
description: Provision Terraform State Backend for initial deployment.
steps:
- name: step1
command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack
dvsa-dev-poc --auto-generate-backend-file=false
- name: step2
command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
type: shell
- name: step3
command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack
dvsa-dev-poc --init-run-reconfigure=false
Executing workflow step: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --auto-generate-backend-file=false
Stack: dvsa-dev-poc
Executing command:
/Users/imranhussain/brew/bin/atmos terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --auto-generate-backend-file=false -s dvsa-dev-poc
Executing workflow step: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
Executing command:
until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
Executing workflow step: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --init-run-reconfigure=false
Stack: dvsa-dev-poc
Executing command:
/Users/imranhussain/brew/bin/atmos terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --init-run-reconfigure=false -s dvsa-dev-poc
So it looks as if it is trying to run the workflow
but then it does not show any errors
that sleep is pretty hacky, you do not need that
that could be messing things up
I just copied and pasted it from what Eric sent me
I do not see a component listed
like I said, try running plan first, make sure it works, then run your workflow
OK so go to the component directory and just run the plan
no no
atmos terraform plan componentname -s stackname
atmos terraform plan infra-init -s dvsa-dev-poc
Found stack config files:
- catalog/dvsa-dev-poc.yaml
- workflows/infra-init.yaml
Found config for the component 'infra-init' for the stack 'dvsa-dev-poc' in the stack config file 'catalog/dvsa-dev-poc'
Variables for the component 'infra-init' in the stack 'dvsa-dev-poc':
environment: dev
namespace: mot
region: eu-west-1
stage: poc
tenant: dvsa
Writing the variables to file:
/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/components/terraform/infra-init/dvsa-dev-poc-infra-init.terraform.tfvars.json
Executing command:
/Users/imranhussain/.asdf/shims/terraform init -reconfigure
Initializing the backend...
Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading registry.terraform.io/cloudposse/tfstate-backend/aws 1.1.1 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.bucket_label...
- terraform_state_backend.bucket_label in .terraform/modules/terraform_state_backend.bucket_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.replication_label...
- terraform_state_backend.replication_label in .terraform/modules/terraform_state_backend.replication_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.this...
- terraform_state_backend.this in .terraform/modules/terraform_state_backend.this
╷
│ Error: Unsupported Terraform Core version
│
│ on terraform.tf line 9, in terraform:
│ 9: required_version = "~> 1.4.6"
│
│ This configuration does not support Terraform version 1.5.0. To proceed, either choose another supported Terraform version or update this version
│ constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
exit status 1
So at least it tries to do something
maybe if I removed the --dry-run I would have seen it actually do the actions
yes
so maybe the workflow would have worked if I removed the --dry-run and spit out the same errors
Alright. I have some TF errors and I can sort those out. I will then go back to the workflow
Error: configuring Terraform AWS Provider: validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 00a12726-972f-463a-bb70-1435e422efab, api error ExpiredToken: The security token included in the request is expired
but that's just the auth into AWS. I am assuming that atmos works with environment variables for AWS_SECRET_KEY and so forth
The plan worked
I think I do not understand the vars correctly. Are they automatically passed into the terraform run?
But at least I have a plan
Is there a way to see what they are currently set to ?
2023-06-15
If I want to pass in a backend option, specifically the key, on each stack that is built, is there a simple way to do that? Specifically, I have some default backend options and I want to add the key element to that backend configuration at each stack, or better yet have it included as part of the run. Would I use the include option for the stack or would I use mixins? Also, when the YAML is merged, can I give a strategy to say I want to merge or overwrite? Does this make sense?
define it in some file that is included
backend:
s3:
bucket: "somebucket"
region: "someregion"
encrypt: true
dynamodb_table: "somelocktable"
kms_key_id: "somerole"
Then in the stack have something like
backend:
s3:
key: "somekey"
you define that as a global file that you import to each stack or you add it to each stack
each stack = logical stack defined by a name
if you are talking about templatizing the backend config, I do not know if that is possible
@Andriy Knysh (Cloud Posse)
you define the backend config in the Org _defaults.yaml file
backend:
s3:
encrypt: true
bucket: "cp-ue2-root-tfstate"
key: "terraform.tfstate"
dynamodb_table: "cp-ue2-root-tfstate-lock"
acl: "bucket-owner-full-control"
region: "us-east-2"
see https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/_defaults.yaml#L8
then, regarding workspace_key_prefix
for each component: if you don’t specify it, then it will be generated automatically using the Atmos component name, for example
components:
terraform:
vpc:
metadata:
component: vpc
in this case, workspace_key_prefix
will be vpc
you can override workspace_key_prefix
per component (if needed for some reason) like so:
components:
terraform:
vpc:
metadata:
component: vpc
backend:
s3:
workspace_key_prefix: infra-vpc
then you import this Org global file https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/_defaults.yaml#L8 into the stacks (together with the component config)
and then Atmos deep merges the backend section (including the s3 subsection) when executing commands like atmos terraform plan/apply vpc -s <stack>
from the example above, the final deep-merged config for the vpc
component would look like this:
backend:
s3:
encrypt: true
bucket: "cp-ue2-root-tfstate"
key: "terraform.tfstate"
dynamodb_table: "cp-ue2-root-tfstate-lock"
acl: "bucket-owner-full-control"
region: "us-east-2"
workspace_key_prefix: infra-vpc
and finally, using this final backend config, Atmos auto-generates the backend file in the component folder, which terraform then uses
to summarize:
- Define just the common fields for the backend in some Org-level global config file
- Import that config file into all stacks
- For each component, don’t specify workspace_key_prefix; it will be calculated automatically (unless you want to override it for any reason)
- The final backend file will be auto-generated by Atmos before calling the terraform commands. When executing atmos terraform plan <component> -s <stack>, Atmos will generate the backend file with all the required fields in the component folder, then call terraform plan/apply, and terraform will use the generated backend file
Good Morning. Thanks for the great support from all. I have created the defaults file to use
So based on my testing the yaml file does not support anchors and aliases
but I get the general idea
and the
terraform:
vars: {}
does that point to a file like the terraform -var-file option, or is it just the terraform -var option that this yaml map covers?
And once again thank you for all the support and guidance
Can you share what you did for YAML anchors? We definitely support it (it’s built into the go yaml library). Just be advised that there’s no concept of YAML anchors across files, because files are first read as YAML, thus any references are processed at evaluation time of the YAML.
Sorry, I did not read the error message too well. I was trying to alias a value but it wanted a map, i.e. I was doing this:
vars:
environment: dev
namespace: mot
tenant: dvsa
stage: poc
region: &region eu-west-1
terraform:
backend_type: s3 # s3, remote, vault, static, azurerm, etc.
backend:
s3:
encrypt: true
bucket: "dvsa-poc-terraform-state"
dynamodb_table: "dvsa-poc-terraform-state-lock"
<<: *region
role_arn: null
when I should have been doing this.
vars:
environment: dev
namespace: mot
tenant: dvsa
stage: poc
region: &region eu-west-1
terraform:
backend_type: s3 # s3, remote, vault, static, azurerm, etc.
backend:
s3:
encrypt: true
bucket: "dvsa-poc-terraform-state"
dynamodb_table: "dvsa-poc-terraform-state-lock"
region: *region
role_arn: null
The error that was thrown which I failed to read correctly was
invalid stack config file 'org/dvsa/_defaults.yaml'
yaml: map merge requires map or sequence of maps as the value
Which clearly on a second read shows it's a type error where it is expecting a map and I am passing in a scalar value
then the merged dict would include both
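For reference, a minimal single-file sketch (illustrative values) of the distinction that error points at: a scalar anchor is referenced as a plain alias value, while the <<: merge key only accepts a map (or a sequence of maps). As noted above, anchors never cross file boundaries, so the anchor and its aliases must live in the same YAML file.

```yaml
vars:
  region: &region eu-west-1     # scalar anchor
terraform:
  backend_type: s3
  backend:
    s3: &backend_s3             # map anchor
      encrypt: true
      region: *region           # OK: alias used as a scalar value
  remote_state_backend:
    s3:
      <<: *backend_s3           # OK: merge key given a map
      # <<: *region             # would fail: "map merge requires map or sequence of maps as the value"
```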
2023-06-16
A few more questions around the import statements that are at the top: is there a way to template them?
I have
import:
- org/dvsa/_defaults
- org/dvsa/dev/_defaults
What I want to do is make this generic so it can be picked up by default based on the vars I have set
something along the lines of
import:
- org/_defaults
- org/{tenant}/_defaults
- org/{tenant}/{namespace}/{app_region}/_defaults
- org/{tenant}/{namespace}/{app_region}/{environment}/_defaults
So the later defaults can override the ones that came before them, if they so wish, without being explicit in defining them.
So I have multiple orgs, so we abstract what is common to all orgs into the _defaults.yaml at that level. Then for each org we define the _defaults.yaml at that level. Then I have a logical app_region, one of “dev, uat, prd”; at that level there is a _defaults.yaml that is common across all dev environments. Then I have multiple environments which all have their own _defaults.yaml, which then can be merged into the components, which can override them at the environment level or add to them depending on what is needed. Or is there a different way to implement what I need to do?
That will work. Have you seen https://atmos.tools/core-concepts/stacks/imports#imports-schema
fwiw, have you considered organizing the infra the way AWS organizes it?
e.g.
org → ou (tenant) → account → region → resources
in our case, we dedicate the account to the “Environment” (aka stage)
and we dedicate the org, to the namespace.
by following this model, when you look in AWS web console it closely models what you also see in IaC
(replace my → with a /
for folders)
and “resources” can be further broken out by something like “app” or “layer”
e.g. we do “network”, “compliance”, “data”, “platform” layers
We have multiple development teams that each have their own development environment in the same account, so org/department/project/env/env1..env20
IMO, the department ~ OU
the OU has multiple stages, and the projects probably work together to some degree within the stage
This is how the AWS account structure has been laid out. It's legacy. There is no budget or appetite to change this multi-tenant situation. Larger discussions are in place to maybe move to the one account == environment, but that's been pushed out by 6 months
Got it, yes, if working within existing conventions, then best to stick with them!
Is there a way to use templating for the yaml like {environment} or { tenant } like
atmos terraform generate varfiles --file-template {component-path}/{environment}-{stage}.tfvars.json
Is {component-path} built in? And can it be used in the _defaults.yaml?
Digging around in the git repo I can see templates and I did see # - path: “mixins/region/{{ .region }}” in https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/eks_cluster_tmpl_hierarchical.yaml
import:
# Use `region_tmpl` `Go` template and provide `context` for it.
# This can also be done by using `Go` templates in the import path itself.
# - path: "mixins/region/{{ .region }}"
- path: mixins/region/region_tmpl
# `Go` templates in `context`
context:
region: "{{ .region }}"
environment: "{{ .environment }}"
# `Go` templates in the import path
- path: "orgs/cp/{{ .tenant }}/{{ .stage }}/_defaults"
components:
terraform:
# Parameterize Atmos component name
"eks-{{ .flavor }}/cluster":
metadata:
component: "test/test-component"
vars:
# Parameterize variables
enabled: "{{ .enabled }}"
name: "eks-{{ .flavor }}"
service_1_name: "{{ .service_1_name }}"
service_2_name: "{{ .service_2_name }}"
tags:
flavor: "{{ .flavor }}"
you can use Go templates in 1) the names of the imported files; 2) inside the imported files in any section
@Imran Hussain did you see this page?
That will work. Have you seen https://atmos.tools/core-concepts/stacks/imports#imports-schema
It talks about how to do it.
You pass the context
to the import. Your context can be anything you want.
Let’s say you have a file called catalog/mixins/app_project.yaml
that looked like
import:
- org/_defaults
- org/{tenant}/_defaults
- org/{tenant}/{{ .namespace }}/{{ .app_region }}/_defaults
- org/{tenant}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults
then I had a file in org/{tenant}/{namespace}/{app_region}/{environment}/api.yaml
In that file, I would pass:
import:
- path: mixins/app_project
context:
tenant: foo
namespace: bar
app_region: use1
environment: dev
Good Morning. :smile: Maybe I have missed something ?
I have :
import:
- org/_defaults
- org/{{ .tenant }}/_defaults
- org/{{ .tenant }}/{{ .environment }}/_defaults
components:
terraform:
infra-init:
metadata:
workspace_enabled: false
backend:
s3:
workspace_key_prefix: infra-vpc-dev
`vars:`
`account_id: "123456782"`
in org/_defaults.yaml I define the variables
vars:
environment: dev
namespace: mot
tenant: dvsa
stage: poc
and I get the following error
no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')
If I change the approach using the context then I get the same error:
import:
- org/_defaults
- org/{{ .tenant }}/_defaults
- org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/_defaults
- org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults
using the following stack
import:
- path: mixins/project
context:
tenant: dvsa
app_region: dev
environment: dev01
namespace: mot
components:
terraform:
infra-init:
metadata:
workspace_enabled: false
backend:
s3:
workspace_key_prefix: infra-vpc-dev
`vars:`
`account_id: "123456782"`
I get the following error
no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')
The file tree looks like the below
├── catalog
│ ├── dvsa-dev-poc.yaml -> dvsa-dev-poc.yaml.bc
│ ├── dvsa-dev-poc.yaml.bc
│ └── dvsa-dev-poc.yaml.tmpl
├── mixins
│ └── project.yaml
├── org
│ ├── _defaults.yaml
│ └── dvsa
│ ├── _defaults.yaml
│ └── mot
│ ├── _defaults.yaml
│ ├── dev
│ │ ├── _defaults.yaml
│ │ └── dev01
│ │ └── _defaults.yaml
│ ├── prd
│ ├── pre
│ └── uat
├── terraform
└── workflows
└── infra-init.yaml
I seem to be missing something with respect to the templating aspect, maybe due to when the variables are exposed or in what order they are exposed
(nitpick, please use ``` instead of single ` for multi-line code blocks)
@Andriy Knysh (Cloud Posse) I can’t see what’s wrong
I am curious, why the symlink? dvsa-dev-poc.yaml -> dvsa-dev-poc.yaml.bc
I was testing a few things with different ways of doing the templating and was just a quick way to iterate over the different ways I wanted to use
Just being lazy I suppose
@Imran Hussain let’s review a few things in your config:
import format like this
import:
- org/_defaults
- org/{{ .tenant }}/_defaults
does not support Go templating
you need to use import with path and context (context is optional)
please review https://atmos.tools/core-concepts/stacks/imports
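In other words, a hedged sketch of how that mixin could be rewritten so the Go templates are actually rendered (the paths follow Imran's layout and are assumptions; the context values come from whichever top-level stack imports this file):

```yaml
# stacks/mixins/project.yaml (sketch)
import:
  - path: org/_defaults
  - path: "org/{{ .tenant }}/_defaults"
  - path: "org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/_defaults"
  - path: "org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults"

# a top-level stack then imports the mixin and supplies the context, e.g.:
# import:
#   - path: mixins/project
#     context:
#       tenant: dvsa
#       namespace: mot
#       app_region: dev
#       environment: dev01
```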
if you want us to review your config, please DM the source code and we’ll review
this is how you use Go templates in diff parts of the code - import paths, context, and component names https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/eks_cluster_tmpl_hierarchical.yaml
import:
# Use `region_tmpl` `Go` template and provide `context` for it.
# This can also be done by using `Go` templates in the import path itself.
# - path: "mixins/region/{{ .region }}"
- path: mixins/region/region_tmpl
# `Go` templates in `context`
context:
region: "{{ .region }}"
environment: "{{ .environment }}"
# `Go` templates in the import path
- path: "orgs/cp/{{ .tenant }}/{{ .stage }}/_defaults"
components:
terraform:
# Parameterize Atmos component name
"eks-{{ .flavor }}/cluster":
metadata:
component: "test/test-component"
vars:
# Parameterize variables
enabled: "{{ .enabled }}"
name: "eks-{{ .flavor }}"
service_1_name: "{{ .service_1_name }}"
service_2_name: "{{ .service_2_name }}"
tags:
flavor: "{{ .flavor }}"
then you import that catalog file like this https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/test1/us-west-1.yaml
import:
# This import with the provided hierarchical context will dynamically generate
# a new Atmos component `eks-blue/cluster` in the `tenant1-uw1-test1` stack
- path: catalog/terraform/eks_cluster_tmpl_hierarchical
context:
# Context variables for the EKS component
flavor: "blue"
enabled: true
service_1_name: "blue-service-1"
service_2_name: "blue-service-2"
# Context variables for the hierarchical imports
# `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
tenant: "tenant1"
region: "us-west-1"
environment: "uw1"
stage: "test1"
# This import with the provided hierarchical context will dynamically generate
# a new Atmos component `eks-green/cluster` in the `tenant1-uw1-test1` stack
- path: catalog/terraform/eks_cluster_tmpl_hierarchical
context:
# Context variables for the EKS component
flavor: "green"
enabled: false
service_1_name: "green-service-1"
service_2_name: "green-service-2"
# Context variables for the hierarchical imports
# `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
tenant: "tenant1"
region: "us-west-1"
environment: "uw1"
stage: "test1"
so you have hierarchical imports (two level) and you provide the context for all of them in the top-level stack
- path: catalog/terraform/eks_cluster_tmpl_hierarchical
context:
# Context variables for the EKS component
flavor: "blue"
enabled: true
service_1_name: "blue-service-1"
service_2_name: "blue-service-2"
# Context variables for the hierarchical imports
# `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
tenant: "tenant1"
region: "us-west-1"
environment: "uw1"
stage: "test1"
btw, these examples https://github.com/cloudposse/atmos/tree/master/examples/complete are working examples (including components and stacks) that are used for Atmos testing. The examples are not for a real infra (they have a lot of things just for testing, including some errors and validation errors which Atmos tests find and report), but all Atmos features are covered by the examples
atmos describe stacks --components=infra-init -s dvsa-dev-poc
the command I run
no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')
The error I get
I use direnv to set up my environment variables to find the atmos.yaml
Hi @Imran Hussain
Has your question been answered?
Yes it was.
we updated included_paths in atmos.yaml to the correct values
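For anyone hitting the same thing, a hedged sketch of the relevant atmos.yaml section (the glob values here are illustrative, not Imran's actual settings): included_paths controls which files Atmos treats as top-level stack manifests, and excluded_paths keeps import-only files such as _defaults.yaml out of that set:

```yaml
stacks:
  base_path: "stacks"
  included_paths:
    - "org/**/*"            # only files under org/ are top-level stack manifests (illustrative)
  excluded_paths:
    - "**/_defaults.yaml"   # _defaults files are only imported, never treated as stacks themselves
  name_pattern: "{tenant}-{environment}-{stage}"
```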
@Gabriela Campana (Cloud Posse) let’s create a task to improve Atmos docs to better describe all settings in atmos.yaml for the sections included_paths and excluded_paths (so we don’t forget). A few people already asked the same questions
I fear this is a dumb question -
For the Atmos quickstart, are you supposed to work completely out of the sample repo? I’m at the Create Components section of the quick start, and when I run atmos vendor pull --component infra/vpc
I get this error
failed to find a match for the import '/Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/**/*.yaml'
I basically made a new repo, created the stacks and components folders and added the yaml files for vpc and vpc-flow-control-logs as instructed, and was going to pull the component directories down. I feel I could probably fix this by grabbing the entire example repo and working out of that, but I thought the quick start would guide the build out more from scratch. Any help would be great
I believe this is an error we should eliminate @Andriy Knysh (Cloud Posse)
I think this happens if you don’t even have a single file in that directory structure
It just errors out. If you did this, it will probably work:
mkdir -p /Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/test
touch /Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/test/test.yaml
it will probably work
I basically made a new repo, create the stacks and components folders and added the yaml files for vpc and vpc-flow-control-logs as instructed and was going to pull the component directories down.
That sounds right so far.
Do you also have an atmos.yaml?
if you don’t have any stack config files, then any import will and should fail (similar to any languages like Java, Python, etc.)
@Jawn we can review your setup, you can DM us anytime with your code
Thanks Erik and Andriy.
Erik, the mkdir + touch did the trick. VPC threw an error relative paths require a module with a pwd
but it seemed to work and pull down the files.
I did create the atmos.yaml per the instructions at ~/.atmos/atmos.yaml
did you review https://atmos.tools/category/quick-start ?
Yes, that’s exactly what I was walking through when I ran into an issue. Starting from the “Quick Start” and getting to the page “Create Components” https://atmos.tools/quick-start/create-components
On this page, you won’t be able to pull the repo with the atmos vendor pull
unless you create the directory and touch the file.
I’m sure it would have worked had I manually copied the files in
@Andriy Knysh (Cloud Posse) do we need to take action here? Or fix the error that @Erik Osterman (Cloud Posse) mentioned?
I believe this is an error we should eliminate @Andriy Knysh (Cloud Posse)
i’m not sure what can be done here. We can make the error more detailed (but it already says that no files were found). If you don’t have any stack config files and you run Atmos commands, it will not find any files and error out
also, atmos vendor pull needs a component folder already created (for the simple reason that the folder must already contain the file component.yaml, which describes how to vendor it)
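For context, a minimal sketch of the component.yaml vendoring manifest that atmos vendor pull expects to find in the component folder (the uri and version are illustrative, patterned on the quick-start's vpc component):

```yaml
# components/terraform/infra/vpc/component.yaml (sketch)
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-vendor-config
  description: Source and version for the vpc component
spec:
  source:
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
    version: 1.143.0   # illustrative; pin to the release you actually want
```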
How about adding this to the instructions
run this
mkdir -p ~/$REPO/stacks/orgs/test
touch ~/$REPO/stacks/orgs/test/test.yaml
as a temp workaround
it def does not work with empty folders (components and stacks)
we can def improve our docs (but yes, the docs can always be improved )
Are the quick start docs produced from a repo? I’ll submit a PR
contributions are welcome, thank you
Tried pushing a branch to create the PR
Permission to cloudposse/atmos.git denied to thedarkwriter.
I submitted the PR. Sorry to be that guy, but as I move forward to the Provision section, it seems like there is more missing scaffolding.
I pulled the files in Create Components and created the files in Create Stacks.
Now when I run the apply command on the Provision page
atmos terraform apply vpc-flow-logs-bucket-1 -s core-ue2-dev
I get an error that looks like file(s) in an account-map directory (which isn’t created) is missing.
- iam_roles in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
exit status 1
I do see that terraform init did run in the module directory
(⎈ |docker-desktop:default)johnfahl:terraform-atmos-learn/ $ ll components/terraform/infra/vpc-flow-logs-bucket [11:12:04]
total 28
drwxr-xr-x 10 johnfahl staff 320 Jun 19 14:48 .
drwxr-xr-x 4 johnfahl staff 128 Jun 15 15:57 ..
drwxr-xr-x 3 johnfahl staff 96 Jun 19 14:48 .terraform
-rw-r--r-- 1 johnfahl staff 3593 Jun 15 15:56 component.yaml
-rw-r--r-- 1 johnfahl staff 246 Jun 21 11:06 core-ue2-dev-infra-vpc-flow-logs-bucket-1.terraform.tfvars.json
-rw-r--r-- 1 johnfahl staff 887 Jun 18 14:56 main.tf
-rw-r--r-- 1 johnfahl staff 268 Jun 18 14:56 outputs.tf
-rw-r--r-- 1 johnfahl staff 492 Jun 18 14:56 providers.tf
-rw-r--r-- 1 johnfahl staff 1937 Jun 18 14:56 variables.tf
-rw-r--r-- 1 johnfahl staff 315 Jun 18 14:56 versions.tf
@Andriy Knysh (Cloud Posse)
@Jawn w/o looking at the code, it’s not possible to say anything about the issues
in general, the flow log bucket component has https://github.com/cloudposse/terraform-aws-components/blob/main/modules/vpc-flow-logs-bucket/providers.tf
provider "aws" {
region = var.region
# Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
profile = module.iam_roles.terraform_profile_name
dynamic "assume_role" {
# module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
for_each = compact([module.iam_roles.terraform_role_arn])
content {
role_arn = module.iam_roles.terraform_role_arn
}
}
}
module "iam_roles" {
source = "../account-map/modules/iam-roles"
context = module.this.context
}
which means it’s looking for the account-map component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/account-map
2023-06-17
v1.36.0
what: Add timeout parameter to atmos validate component command. Add timeout parameter to settings.validation section in the stack config. Update docs.
why: If validation is configured for a component, Atmos executes the configured OPA Rego policies. If a policy is misconfigured (e.g. invalid Rego syntax or import), the validation can take a long time and eventually fail. Use the --timeout parameter to specify the required timeout.
The timeout (in seconds) can be specified on the command line:…
2023-06-18
2023-06-19
has anyone migrated a stack from an older implementation of atmos (implemented a little over a year ago) to the latest?
we do it all the time, let us know if you need help
We have a migration doc somewhere
Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.
Were you able to find what you were looking for?
what’s supposed to be in the second bullet point plat-gbl-sandbox plat-ue2-dev
https://atmos.tools/tutorials/atmos-component-migrations-in-yaml/#migrating-state-manually
@RB see Looks like some markdown formatting issues and missing copy.
what: Update atmos-component-migrations-in-yaml.md; why: Small fix
2023-06-20
Hi guys,
Is there anyone who can help me do atmos vendor pull
in scp-style?
Hi, when I do vendor pull with the atmos CLI using SSH to clone the code, it fails. It works when I configure the uri with the HTTP protocol but fails with URL-style or scp-style.
Here is the component description I have:
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
name: bucket
description: A bucket to build
spec:
source:
uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}} # working
# uri: git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}} # case 1 - not working
# uri: git::ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}} # case 2 - not working
version: 0.47.1
This is the error message I got.
# case 1
root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1' and writing to 'components/terraform/infra/bucket'
relative paths require a module with a pwd
# case 2
root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git::ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1' and writing to 'components/terraform/infra/bucket'
error downloading 'ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git
Please help me out to pull Terraform modules with atmos. Thank you in advance.
Atmos uses https://github.com/hashicorp/go-getter to load the files
Package for downloading things from a string URL using a variety of protocols.
whatever it supports will work (if a protocol is not supported, it will not work)
having said that, we use this style
github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
the other two were not completely tested, I’m not sure if they work correctly, or where the issue might be
why did you not use /// in git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}?
maybe it will work if you use it
Use Component Vendoring to make a copy of 3rd-party components in your own repo.
Let me briefly test with your suggestion!
I personally did not try the ssh:// scheme, not sure if it’s working or not
Hmm. Seems not working…
root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git@github.com/cloudposse/terraform-aws-ec2-instance.git///?ref=0.47.1' and writing to 'components/terraform/infra/bucket'
relative paths require a module with a pwd
Actually I googled a lot for the error message “relative paths require a module with a pwd” but couldn’t find the exact reason for that.
https://github.com/hashicorp/nomad/issues/8969 - looks like there are a lot of issues regarding this; even HashiCorp is having them using their own go-getter
Nomad version
Nomad v0.12.5
Operating system and Environment details
• macOS 10.15.6 • Ubuntu 20.04
Issue
I’m new to Nomad and I believe this is more of a documentation issue and wanted to put this somewhere other people can find easily. I spent way too much time trying to resolve this and it was a little bit of a burden to getting started with Nomad.
When using the artifact stanza to clone a Git repository, I was getting the error relative paths require a module with a pwd
. It took quite a long time and digging to find out how the artifact stanza really worked as well as how to resolve the error.
Reproduction steps
Create a Job file with an artifact and using the docker exec driver. See the missing
Job file (that causes the error)
job "example" {
datacenters = [
"dc1"
]
type = "batch"
group "web" {
task "setup" {
artifact {
source = "[email protected]:username/repo.git"
destination = "local/repository"
options {
sshkey = "<key-in-base64>"
}
}
driver = "docker"
config {
image = "alpine"
args = ["ls"]
}
}
}
}
Since it was unclear how the artifact and paths works for the task, the jobs were failing with relative paths require a module with a pwd
which really did not say anything that would help me resolve the issue. I even scoured the documentation on the go-getter repository.
The real issue is that I was not mounting the local directory into the container (I was going step by step to examine the process and filesystem). To resolve the issue I needed to mount the directory using the docker config.volume
stanza and the job completed successfully.
Job file (that resolves the error)
job "example" {
  datacenters = [
    "dc1"
  ]
  type = "batch"
  group "web" {
    task "setup" {
      artifact {
        source      = "git@github.com:username/repo.git"
        destination = "local/repository"
        options {
          sshkey = "<key-in-base64>"
        }
      }
      driver = "docker"
      config {
        image = "alpine"
        args  = ["ls"]
        volumes = [
          "local/repository/:/path/on/container",
        ]
      }
    }
  }
}
I know Nomad is new and this is not an attempt to bash the product, just an illumination on the confusion on how jobs/documentation might need an extra set of eyes from the perspective of someone new to Nomad. More specifically, I think there should be example job files for lots of popular application stacks to make it easier to get started.
This works!
apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
    ...
    uri: git::git@github.com:cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
Note that I put git:: in front of the URI in this last test.
Thank you for helping me with this! @Andriy Knysh (Cloud Posse)
np
from the doc
The git getter accepts both URL-style SSH addresses like git::ssh://git@example.com/foo/bar, and "scp-style" addresses like git::git@example.com/foo/bar. In the latter case, omitting the git:: force prefix is allowed if the username prefix is exactly git@.
but it looks like it’s not allowed even if the username prefix is git@
Yes I think so
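To recap the URI forms discussed in this thread (only the second one was confirmed working above; treat the first as the usual style, not something verified here):
# plain form Cloud Posse normally uses (go-getter detects GitHub over HTTPS):
uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
# scp-style SSH form that worked above (the git:: force prefix was still required):
uri: git::git@github.com:cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}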
Also, please check this thread
quick question about atmos vendor. my component.yaml looks like this:
uri: github.com/cloudposse/terraform-aws-ec2-instance.git/?ref={{.Version}}
version: 0.47.1
but when I pull, I get this error:
subdir "%253Fref=0.47.1" not found
how should the url be formatted?
Sounds vaguely familiar.
Oh, I missed your message https://sweetops.slack.com/archives/C031919U8A0/p1687299522666339?thread_ts=1687296704.077439&cid=C031919U8A0
Hi, can we vendor pull from a privately hosted git? I’m trying to pull, but it just creates the components directory and the command simply exits after a while. Can anyone throw some light on this?
output:
Pulling sources for the component 'account-configuration' from 'git@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref=2.9.7' into 'components/terraform/account-configuration'
but I can’t find any files under the specific component folder, it’s empty… anyone has any idea? please help
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  # `imports` or `sources` (or both) must be defined in a vendoring manifest
  imports: []
  sources:
    # `source` supports the following protocols: local paths (absolute and relative), OCI (https://opencontainers.org),
    # Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter.
    # In `source`, Golang templates are supported (https://pkg.go.dev/text/template).
    # If `version` is provided, `{{.Version}}` will be replaced with the `version` value before pulling the files from `source`.
    - component: "account-configuration"
      source: "https://CentralCIRepoToken:<my_token_goes_here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"
      # source: "github.com/cloudposse/terraform-aws-components.git///?ref={{.Version}}"
      version: "1.2.7"
      targets:
        - "components/terraform/account-configuration"
@Andriy Knysh (Cloud Posse)
i believe @Kubhera has fixed the issue https://sweetops.slack.com/archives/C031919U8A0/p1710332023541759
It worked now :slightly_smiling_face: I had to add the force protocol prefix git::.
source: "git::https://CentralCIRepoToken:<my_token_goes_here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"
Though https://CentralCIRepoToken:<my_token_goes_here>@gitlab.env.io is not the recommended implementation. No hardcoded tokens should be committed in URLs.
It should “just work” if you have your SSH agent configured. If you’re using it in automation, then you’ll want to use the netrc approach.
Here’s how to use .netrc
https://gist.github.com/sahilsk/ce21c39a6c2dbc2cd984
On GitHub, there’s an action to make this easier: https://github.com/marketplace/actions/setup-netrc
It appears you’re using GitLab, so there’s probably an equivalent way of doing it there.
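As a rough sketch of the netrc approach for this case (the hostname and login come from the thread above; GITLAB_TOKEN is a placeholder you would inject from a secret, not something to commit):
# write ~/.netrc so git/go-getter can authenticate over HTTPS without a token in the vendor URI
cat > ~/.netrc <<EOF
machine gitlab.env.io
login CentralCIRepoToken
password ${GITLAB_TOKEN}
EOF
chmod 600 ~/.netrc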
ah yes, i missed the hardcoded token. netrc should be used here
v1.37.0
what
Add spacelift_stack and atlantis_project outputs to atmos describe component command
Add --include-spacelift-admin-stacks flag to atmos describe affected command
Update Atmos docs
why
Having the spacelift_stack and atlantis_project outputs from the atmos describe component command is useful when using the command in GitHub actions related to Spacelift and Atlantis
The --include-spacelift-admin-stacks flag for the atmos describe affected command allows including the Spacelift admin…
2023-06-21
2023-06-22
I can see that this file https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/dev/us-east-2.yaml (which I think is a stack) makes use of this construct:
name: "{tenant}-{environment}-{stage}-{component}"
These are not Go template variables. What are they, where do they come from, and can they be used anywhere?
import:
  - mixins/region/us-east-2
  - orgs/cp/tenant1/dev/_defaults
  - catalog/terraform/top-level-component1
  - catalog/terraform/test-component
  - catalog/terraform/test-component-override
  - catalog/terraform/test-component-override-2
  - catalog/terraform/test-component-override-3
  - catalog/terraform/vpc
  - catalog/terraform/tenant1-ue2-dev
  - catalog/helmfile/echo-server
  - catalog/helmfile/infra-server
  - catalog/helmfile/infra-server-override
vars:
  enabled: true
terraform:
  vars:
    enabled: false
components:
  terraform:
    "infra/vpc":
      vars:
        name: "co!!,mmon"
        ipv4_primary_cidr_block: 10.10.0.0/18
        availability_zones:
          - us-east-2a
          - us-east-2b
          - us-east-2c
settings:
  atlantis:
    # For this `tenant1-ue2-dev` stack, override the org-wide config template specified in `examples/complete/stacks/orgs/cp/_defaults.yaml`
    # in the `settings.atlantis.config_template_name` section
    config_template:
      version: 3
      automerge: false
      delete_source_branch_on_merge: false
      parallel_plan: true
      parallel_apply: false
      allowed_regexp_prefixes:
        - dev/
    # For this `tenant1-ue2-dev` stack, override the org-wide project template specified in `examples/complete/stacks/orgs/cp/_defaults.yaml`
    # in the `settings.atlantis.project_template_name` section
    project_template:
      # generate a project entry for each component in every stack
      name: "{tenant}-{environment}-{stage}-{component}"
      workspace: "{workspace}"
      workflow: "workflow-1"
      dir: "{component-path}"
      terraform_version: v1.3
      delete_source_branch_on_merge: false
      autoplan:
        enabled: true
        when_modified:
          - "**/*.tf"
          - "varfiles/$PROJECT_NAME.tfvars.json"
      apply_requirements:
        - "approved"
the template is used for Atlantis integration https://atmos.tools/integrations/atlantis
Atmos natively supports Atlantis for Terraform Pull Request Automation.
Atmos processes the templates to create the real Atlantis project name. Those are not Go templates
in the same file I also see { workflow }
any known usage of Atmos on TFC?
can you run a custom script on TFC (to run atmos commands to generate varfiles and backend)?
I do not know
you could potentially run an action beforehand
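For example, a hypothetical pre-plan step on the worker could run a small wrapper script along these lines (the component and stack names are placeholders; this is a sketch, not a verified TFC setup):
#!/usr/bin/env bash
# generate the varfile and backend config for a component before Terraform runs
atmos terraform generate varfile <component> -s <stack>
atmos terraform generate backend <component> -s <stack>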
Hello, I’m trying to use remote-state.tf and I’m getting this error, any clues how to solve this?
Error: failed to find a match for the import '/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*.yaml' ('/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs' + '**/*.yaml')
│
│
│ CLI config:
│
│ base_path: ""
│ components:
│   terraform:
│     base_path: components/terraform
│     apply_auto_approve: false
│     deploy_run_init: true
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component…
(since remote-state is processed by https://github.com/cloudposse/terraform-provider-utils)
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management)
Thanks for the reply! This is the contents of remote-state.tf. I am specifying the path to the atmos.yaml file.
module "remote_state_vpc" {
  source                = "cloudposse/stack-config/yaml//modules/remote-state"
  version               = "1.4.3"
  component             = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos/atmos.yaml"
  context               = module.this.context
}
Here’s the full output:
╷
│ Error: failed to find a match for the import '/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*.yaml' ('/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs' + '**/*.yaml')
│
│
│ CLI config:
│
│ base_path: ""
│ components:
│   terraform:
│     base_path: components/terraform
│     apply_auto_approve: false
│     deploy_run_init: true
│     init_run_reconfigure: true
│     auto_generate_backend_file: true
│   helmfile:
│     base_path: ""
│     use_eks: true
│     kubeconfig_path: ""
│     helm_aws_profile_pattern: ""
│     cluster_name_pattern: ""
│ stacks:
│   base_path: stacks
│   included_paths:
│     - orgs/**/*
│   excluded_paths:
│     - '**/_defaults.yaml'
│   name_pattern: '{tenant}-{environment}-{stage}'
│ workflows:
│   base_path: stacks/workflows
│ logs:
│   file: /dev/stdout
│   level: Info
│ commands: []
│ integrations:
│   atlantis:
│     path: ""
│     config_templates: {}
│     project_templates: {}
│     workflow_templates: {}
│ schemas:
│   jsonschema:
│     base_path: ""
│   cue:
│     base_path: ""
│   opa:
│     base_path: ""
│ initialized: false
│ stacksBaseAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks
│ includeStackAbsolutePaths:
│   - /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*
│ excludeStackAbsolutePaths:
│   - /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/**/_defaults.yaml
│ terraformDirAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/components/terraform
│ helmfileDirAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp
│ stackConfigFilesRelativePaths: []
│ stackConfigFilesAbsolutePaths: []
│ stackType: ""
│
│
│ with module.remote_state_vpc.data.utils_component_config.config[0],
│ on .terraform/modules/remote_state_vpc/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│ 1: data "utils_component_config" "config" {
│
╵
Releasing state lock. This may take a few moments...
oh, if you are using that in the remote state, you need to set these two variables:
atmos_base_path = "<path to the root of the repo>"
atmos_cli_config_path = "/usr/local/etc/atmos"
and it’s /usr/local/etc/atmos, not /usr/local/etc/atmos/atmos.yaml
(don’t include atmos.yaml, just the path to the directory containing it)
does atmos_base_path require an absolute or relative path?
absolute
b/c the remote state code gets executed from the components folders
a relative path can’t be used
in a shared env where people’s repo paths on disk would be different, how would an absolute path work?
try without atmos_base_path first, just fix the issue with the atmos.yaml path
in a shared env where people’s repo paths on disk would be different, how would an absolute path work?
that’s why we put it into /usr/local/etc/atmos on the local computer and in a Docker container
so any Atmos code (atmos binary and the utils provider) can find it
try this
module "remote_state_vpc" {
  source                = "cloudposse/stack-config/yaml//modules/remote-state"
  version               = "1.4.3"
  component             = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos"
  context               = module.this.context
}
regarding the base path
# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: ""
you can use relative paths, but it will depend on which directory you execute the commands from
this worked:
module "remote_state_vpc" {
  source                = "cloudposse/stack-config/yaml//modules/remote-state"
  version               = "1.4.3"
  component             = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos"
  atmos_base_path       = "/Users/pmcdonald/workspace/atmos-metrop"
  context               = module.this.context
}
it didn’t work without atmos_base_path
so relative paths are not for all use cases, though they will work in some
in a Docker container (geodesic), we automatically set ATMOS_BASE_PATH to the absolute path of the root of the repo
I see.. ok that makes sense
I understand that all of that is not simple and intuitive. But since the commands get executed from different folders (the atmos CLI from one, Terraform calls all the providers from the components folders), it’s not easy to come up with a generic way where both absolute and relative paths would work in all possible cases
something needs to give
@Andriy Knysh (Cloud Posse) maybe we should implement search paths, and include the git root as one of them.
in geodesic, we solved all of that by 1) automatically setting ATMOS_BASE_PATH to the absolute path of the root of the repo; and 2) placing atmos.yaml in /usr/local/etc/atmos
so it works all the time for all binaries (atmos, terraform, TF providers), and you can execute Atmos commands from any folder (not only from the root of the repo)
but yes, we can prob improve all of that
Andriy - thank you for your help, it’s working now. Each dev on our team can just set their own ATMOS_BASE_PATH env var
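For example, something like this in each developer’s shell profile would work (a sketch; it assumes commands are run from inside a clone of the repo):
# point atmos and the utils provider at the root of whichever clone you're in
export ATMOS_BASE_PATH="$(git rev-parse --show-toplevel)"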
if you use a Docker container, you can do it automatically for all devs
(we recommend https://github.com/cloudposse/geodesic )
Geodesic is a DevOps Linux Toolbox in Docker. We use it as an interactive cloud automation shell. It’s the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/
I’ll go down that path once I fully wrap my head around atmos.. baby steps
v1.38.0
what
Refactor Atmos components validation with OPA
Allow creating a catalog of reusable Rego modules, constants and helper functions to be used in OPA policies
Update docs
why
Atmos supports OPA policies for component validation in a single Rego file and in multiple Rego files. As shown in the example below, you can define some Rego constants, modules and helper functions in a separate file stacks/schemas/opa/catalog/constants/constants.rego, and then import them into the main policy file…
2023-06-26
2023-06-27
2023-06-29
Just revisited atmos and looking forward to exploring the latest. Didn’t realize it was written in Go
please review https://atmos.tools/category/quick-start and let us know if you need any help
Take 20 minutes to learn the most important atmos concepts.
so does atmos replace the build-harness stuff you used to use with the big makefile?
I ask because I’d been building out a much less sophisticated Go CLI (from things I originally built with mage) that helps with work tasks: setting up AKS access, azure-cli configuration, pull requests (not on GitHub, sadly), etc. Curious if you ended up codifying all your CI tasks in this or just the main workflows.
Planning on looking at it soon, no rush. Have an idea for a PR to show something with cobra I ran into that might be neat for y’all.
Oh neat! I see some cool concepts to try now with automatic tfvars generation and more. I see that you probably still mix in the shell effort from build-harness as a separate scope and codified your workflow orchestration on this project, not so much each individual task. Hoping to play around with it next week as I’m going to be consulting with a team on a big refactor of the terraform stacks at my $work. Might post a few questions on that.
please ask
we can help with the initial setup
to be clear, Atmos does not rely on build-harness (geodesic does); it’s a completely self-contained binary written in Go. Our customers manage different infras with Atmos, from very simple ones to stacks with many Orgs, each one having many tenants, with many accounts and regions (which corresponds to more than a thousand infrastructure stacks, for example, in Spacelift)
Yeah I got that. I meant calling linters and other tools seems to still be there while atmos solves a higher level workflow orchestration. That’s different to how I use go & mage right now. Looks interesting to try. I might take ya up on that help! Cheers
Yea, right now, atmos isn’t designed to replace tools like Mage or Make, although it can do almost the same things. I suppose my point is it’s optimized more for managing very large configurations, and provides workflow mechanics to work with other tools. No other build tools are optimized to manage thousands of environments (e.g. thousands of microservices).
Here you can see some of our inspiration https://atmos.tools/reference/alternatives
To better understand where Atmos fits in, it may be helpful to understand some of the alternative tooling it seeks to replace. There are lots of great tools out there and we’re going through a bit of a “DevOps Renaissance” when it comes to creativity on how to automate systems.
Atmos is somewhat of a mashup of go-task, variant, helmfile, and appbuilder; we took the best of all of them and built a tool around it. Helmfile proved very effective for managing large environments. Now imagine if you use that with any tool. That’s sort of what variant was for, but it wasn’t optimized for specific tools like terraform. So we built native workflow support for terraform. App builder is awesome because it lets you wrap all the CLIs you depend on into a single, documented tool (“help operations teams wrap their myriad shell scripts, multi line kubectl invocations, jq commands and more all in one friendly CLI tool that’s easy to use and share.”). And go-task is a great simple way of defining workflows. So now take all of that, stick it into one command, and you get atmos
Very cool. I need to go check out app builder; hadn’t heard of that.
one thing I really did like when using Pulumi was creating strongly typed deployment definitions and generating the YAML from that. I wish that text templating wasn’t the standard go-to, but it’s hard to resist the inertia something like helm has.
I wonder, if terraform had been more prevalent when helm was conceived, whether terraform definitions would have become the de facto way to define deployments instead of YAML files.
Re: terraform, I’ve wondered the same.
Re: why Go templating of YAML, apparently helm tried to move away from this or add other mechanisms, but it didn’t go anywhere. I think the templating, as ugly as it is, “levels the playing field” for adoption.
See the other thread on Cuelang.