#atmos (2023-06)

2023-06-01

Patrick McDonald avatar
Patrick McDonald

Hello, I'm running into this error when trying to apply a stack that uses cloudposse/terraform-aws-ecs-cluster:

│ Error: Unsupported attribute
│ 
│   on .terraform/modules/autoscale_group/outputs.tf line 23, in output "autoscaling_group_tags":
│   23:   value       = module.this.enabled ? aws_autoscaling_group.default[0].tags : []
│ 
│ This object has no argument, nested block, or exported attribute named "tags". Did you mean "tag"?
Patrick McDonald avatar
Patrick McDonald

stacks/catalog/ecs-cluster/defaults.yaml:

components:
  terraform:
    ecs-cluster/defaults:
      metadata:
        component: ecs-cluster
        type: abstract
      settings:
        spacelift:
          workspace_enabled: false
      vars:
        enabled: true
        name: ecs-cluster
        capacity_providers_fargate: true
        capacity_providers_fargate_spot: true
        container_insights_enabled: true
      tag:
        Name: testing

stacks/orgs/metrop/pmc/dev/us-west-2.yaml:

import:
  - mixins/region/us-west-2
  - orgs/metrop/pmc/dev/_defaults
  - catalog/ecs-cluster/defaults

components:
  terraform:
    ecs-cluster:
      vars:
        name: test
atmos describe stacks                                       
pmc-uw2-dev:
  components:
    terraform:
      ecs-cluster:
        backend:
          acl: bucket-owner-full-control
          bucket: pmc-test-terraform-state
          dynamodb_table: pmc-test-terraform-state-lock
          encrypt: true
          key: terraform.tfstate
          region: us-west-2
          role_arn: null
          workspace_key_prefix: ecs-cluster
        backend_type: s3
        command: terraform
        component: ecs-cluster
        deps:
        - mixins/region/us-west-2
        - mixins/stage/dev
        - orgs/metrop/_defaults
        - orgs/metrop/pmc/_defaults
        - orgs/metrop/pmc/dev/us-west-2
        env: {}
        inheritance: []
        metadata: {}
        remote_state_backend:
          acl: bucket-owner-full-control
          bucket: pmc-test-terraform-state
          dynamodb_table: pmc-test-terraform-state-lock
          encrypt: true
          key: terraform.tfstate
          region: us-west-2
          role_arn: null
          workspace_key_prefix: ecs-cluster
        remote_state_backend_type: s3
        settings: {}
        vars:
          environment: uw2
          name: test
          namespace: pmc
          region: us-west-2
          stage: dev
          tenant: pmc
        workspace: pmc-uw2-dev
      ecs-cluster/defaults:
        backend:
          acl: bucket-owner-full-control
          bucket: pmc-test-terraform-state
          dynamodb_table: pmc-test-terraform-state-lock
          encrypt: true
          key: terraform.tfstate
          region: us-west-2
          role_arn: null
          workspace_key_prefix: ecs-cluster
        backend_type: s3
        command: terraform
        component: ecs-cluster
        deps:
        - catalog/ecs-cluster/defaults
        - mixins/region/us-west-2
        - mixins/stage/dev
        - orgs/metrop/_defaults
        - orgs/metrop/pmc/_defaults
        - orgs/metrop/pmc/dev/us-west-2
        env: {}
        inheritance: []
        metadata:
          component: ecs-cluster
          type: abstract
        remote_state_backend:
          acl: bucket-owner-full-control
          bucket: pmc-test-terraform-state
          dynamodb_table: pmc-test-terraform-state-lock
          encrypt: true
          key: terraform.tfstate
          region: us-west-2
          role_arn: null
          workspace_key_prefix: ecs-cluster
        remote_state_backend_type: s3
        settings:
          spacelift: {}
        vars:
          capacity_providers_fargate: true
          capacity_providers_fargate_spot: true
          container_insights_enabled: true
          enabled: true
          environment: uw2
          name: ecs-cluster
          namespace: pmc
          region: us-west-2
          stage: dev
          tenant: pmc
        workspace: pmc-uw2-dev
atmos terraform plan ecs-cluster -s pmc-uw2-dev

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.0.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Releasing state lock. This may take a few moments...
╷
│ Warning: Value for undeclared variable
│ 
│ The root module does not declare a variable named "region" but a value was found in file "pmc-uw2-dev-ecs-cluster.terraform.tfvars.json". If you meant to use
│ this value, add a "variable" block to the configuration.
│ 
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your organization. To reduce the
│ verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Error: Unsupported attribute
│ 
│   on .terraform/modules/autoscale_group/outputs.tf line 23, in output "autoscaling_group_tags":
│   23:   value       = module.this.enabled ? aws_autoscaling_group.default[0].tags : []
│ 
│ This object has no argument, nested block, or exported attribute named "tags". Did you mean "tag"?
╵
exit status 1
Patrick McDonald avatar
Patrick McDonald
`which terraform` version                      
Terraform v1.4.6
on darwin_arm64
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is not related to Atmos. This is the new Terraform AWS provider version 5 causing a lot of issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

while we are updating the modules to fix the issues, you can pin your provider to v4 for now

Patrick McDonald avatar
Patrick McDonald

gotcha!

2023-06-05

Kristoffer avatar
Kristoffer

Hello there, I’m trying out the tutorial here: https://atmos.tools/tutorials/first-aws-environment but atmos terraform apply tfstate-backend --stack ue2-root outputs:

tfstate_backend_dynamodb_table_name = "acme-ue2-root-tfstate-delicate-elf-lock"
tfstate_backend_s3_bucket_arn = "arn:aws:s3:::acme-ue2-root-tfstate-delicate-elf"

while atmos terraform generate backend tfstate-backend --stack ue2-root generates:

   "bucket": "acme-ue2-tfstate-delicate-elf",
   "dynamodb_table": "acme-ue2-tfstate-lock-delicate-elf",

After I edit the backend.tf.json file to match the correct resource names, it succeeds migrating state to S3.

But continuing to deploy the static site, I get

Workspace "uw2-dev" doesn't exist.

I’m probably doing something very wrong, but I cannot figure out what

Your first environment on AWS | atmos

Get your first AWS environment deployed using Atmos, Stacks, and Vendoring

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Do you get a prompt after that message?

Workspace "uw2-dev" doesn't exist.

This is typically a Terraform message to warn you that the workspace doesn't exist yet (since it hasn't been created). You should then be able to continue and prompt Terraform to create that workspace. The default workspace option should be correct

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

If the deployment step fails after that message, then this is a different issue

Kristoffer avatar
Kristoffer

This is the full message

16:44 $ atmos terraform workspace static-site --stack uw2-dev
Initializing modules...

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Using previously-installed hashicorp/random v3.5.1
- Using previously-installed hashicorp/time v0.9.1
- Using previously-installed hashicorp/local v2.4.0
- Using previously-installed hashicorp/aws v5.1.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Workspace "uw2-dev" doesn't exist.

You can create this workspace with the "new" subcommand.
failed to lock s3 state: 2 errors occurred:
	* ResourceNotFoundException: Requested resource not found
	* ResourceNotFoundException: Requested resource not found


exit status 1
16:44 $

I’m running atmos natively on my Mac - you think this somehow could be fixed with geodesic? Also notice both the s3 bucket name and dynamodb table generated by atmos are missing "root" and that "lock" is misplaced

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)


I’m running atmos natively on my Mac - you think this somehow could be fixed with geodesic?
Geodesic is a great way to ensure that your local working environment doesn't have errors, but in this case I don't believe it would resolve the issue

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Why are you selecting the workspace with atmos terraform workspace static-site --stack uw2-dev? Atmos will select the workspace for you, or create it, as part of any terraform command

Kristoffer avatar
Kristoffer

Ah, I was just trying things and copy/pasted the wrong part - this is the correct one:

10:10 $ atmos terraform deploy static-site --stack uw2-dev
Initializing modules...

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Using previously-installed hashicorp/local v2.4.0
- Using previously-installed hashicorp/aws v5.1.0
- Using previously-installed hashicorp/random v3.5.1
- Using previously-installed hashicorp/time v0.9.1

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Workspace "uw2-dev" doesn't exist.

You can create this workspace with the "new" subcommand.
failed to lock s3 state: 2 errors occurred:
	* ResourceNotFoundException: Requested resource not found
	* ResourceNotFoundException: Requested resource not found


exit status 1
Kristoffer avatar
Kristoffer

It was the dynamodb table that was missing -root- in stacks/catalog/globals.yaml also
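For example, to match the names from the apply output above, the backend globals would need something like this (the "delicate-elf" suffix is the random pet name from my particular run):

terraform:
  backend_type: s3
  backend:
    s3:
      # must match what the tfstate-backend component actually created,
      # including "root" and the "-lock" placement
      bucket: "acme-ue2-root-tfstate-delicate-elf"
      dynamodb_table: "acme-ue2-root-tfstate-delicate-elf-lock"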

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Yes nice find! I just opened a PR to fix that this morning but hadn’t had a chance to reply here

2023-06-06

Joe Hosteny avatar
Joe Hosteny

Hi, I am a bit confused by something in the example stack, and some of the docs for the modules. Say I have the account module deployed in a stack (gbl-root):

components:
  terraform:
    "infra/account":
      metadata:
        component: infra/account

(thread to follow…)

Joe Hosteny avatar
Joe Hosteny

I use this since I am organizing the directory layout as in the examples, with the components/terraform/infra dir. In this configuration, the inferred base component is empty, and so the workspace is set to gbl-root. The state file is written at s3://<ns>-ue2-root-tfstate/account/gbl-root/terraform.tfstate. So, when generating the state file path, it strips off the infra/ portion of the name to come up with the component.

Joe Hosteny avatar
Joe Hosteny

However, when I attempt to deploy the account-map module (infra/account-map), it complains that the config needed for the remote state cannot find account:

Searched all stack YAML files, but could not find config for the component 'account' in the stack 'gbl-root'
Joe Hosteny avatar
Joe Hosteny

This makes sense, since that component reference in the account-map module's remote-state.tf file is hard-coded to account. If I change the account stack to:

components:
  terraform:
    "account":
      metadata:
        component: infra/account
Joe Hosteny avatar
Joe Hosteny

The infra/account-map deploy is happy, since it can find the component config. However, with this config, the state file is now at s3://<ns>-ue2-root-tfstate/account/gbl-root-account/terraform.tfstate.

Joe Hosteny avatar
Joe Hosteny

I can of course make this change to the location, but I am trying to reconcile why the inferred component for the stack in the prior case (infra/account) seems to differ from the component inferred for the S3 path, which is just account. My suspicion is that the first part of the S3 path is based on the terraform component's path in the filesystem, since that is the "base" component, and you could have multiple instantiations of it, e.g., vpc/ue2-prod/vpc-1, vpc/ue2-prod/vpc-2. The follow-up question: if that is right, why is the component that is passed to remote-state.tf in various modules in terraform-aws-components parameterized as a variable?

Joe Hosteny avatar
Joe Hosteny

Am I missing something here in my configuration?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for remote-state, the Atmos component name is used - the component name that you use in YAML, e.g.

components:
  terraform:
    "infra/account":
Joe Hosteny avatar
Joe Hosteny

Hi @Andriy Knysh (Cloud Posse), well I feel dumb. I somehow missed this in the docs.

Joe Hosteny avatar
Joe Hosteny

I guess the answer is to override the remote-state locally to parameterize per that example. Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know if you need help with atmos configs

2023-06-07

jose.amengual avatar
jose.amengual

is there a way to reference values inside of atmos yet?

jose.amengual avatar
jose.amengual

we want to pass the version in component.yaml to a tag on the component, so we know the version that was used to deploy it

jose.amengual avatar
jose.amengual

maybe by having a yaml file with all the versions for a specific infra we could pass that to the vendor command? (as a new feature)

jose.amengual avatar
jose.amengual

imagine this:

components:
     vpc: 0.0.2
     rds: 1.1.2
jose.amengual avatar
jose.amengual

and then any atmos pull command can find and read that file that declares the versions for all the components, and if not found, default to the version in component.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

interesting use case, to have the component version on a tag automatically. we need to think about this

jose.amengual avatar
jose.amengual

we do that with regular root modules and we pass the tag on a pipeline where we extract the version and such

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

An alternative idea is that the component itself should be version-aware.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In this case, it’s easier for the component to read the component.yaml and parse the version.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then pass those values to the context.

jose.amengual avatar
jose.amengual

yes, that would work too

2023-06-12

jose.amengual avatar
jose.amengual
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

jose.amengual avatar
jose.amengual
invalid import 'map[context:map[owner:mike owner_ip:1.1.1.1/32 owner_tag:pepe] path:catalog/ec2/defaults]' in the file 'lab/us-west-2.yaml'

jose.amengual avatar
jose.amengual

no matter what I do, I just get that error

jose.amengual avatar
jose.amengual

even in debug mode

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please show your code, it’s not possible to say anything w/o looking at the code

jose.amengual avatar
jose.amengual

1.35.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

your code, not atmos version

jose.amengual avatar
jose.amengual
---
# stacks/catalog/ec2/defaults.yaml
import: []

components:
  terraform:
    "ec2-{{ .owner }}":
      metadata:
        component: ec2
        # type: abstract
      vars:
        name: "{{ .owner }}"
        tags:
          DELETEME: "true"
          Owner: "{{ .owner_tag }}"
        associate_public_ip_address: true
        instance_type: g5.2xlarge
        availability_zones: ["us-west-2a", "us-west-2b", "us-west-2c"]
        ami: "ami-111111111"
        volume_size: 250
        delete_on_termination: true
        device_encrypted: true
        security_group_rules:
          - type: "egress"
            from_port: 0
            to_port: 65535
            protocol: "-1"
            cidr_blocks: 
              - "0.0.0.0/0"
          - type: "ingress"
            from_port: 22
            to_port: 22
            protocol: "tcp"
            cidr_blocks: 
              - "{{ .owner_ip }}"
jose.amengual avatar
jose.amengual
import:
  - lab/globals

  - path: "catalog/ec2/defaults"
    context:
      owner: "pepe"
      owner_tag: "pepe"
      owner_ip: "1.1.1.1/32"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as it currently stands, all imports must be the same type

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use path on one of the imports, you have to use the same format on all of them (within one file)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:
  - path: lab/globals

  - path: "catalog/ec2/defaults"
    context: ...
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(don’t use context if you don’t have it for a specific import)

jose.amengual avatar
jose.amengual

I see ok, that worked

jose.amengual avatar
jose.amengual

thanks as always

jose.amengual avatar
jose.amengual

it would be cool to be able to iterate over a list with these imports

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you mean in atmos code?

jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

like:

- path: "catalog/ec2/defaults"
    context:
      owner: [ "pepe", "jack", "arman"]
      owner_tag: [ "pepe", "jack", "arman"]
      owner_ip: ["1.1.1.1/32", "1.2.2.1/32", "1.3.3.1/32"]
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) can we have a more meaningful error message?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i wanted to support both formats in the same file, will prob fix it when I get time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual indent wrong

jose.amengual avatar
jose.amengual

yes, it's the stupid copy/paste thing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, just to be clear, when Andriy said they all had to be of the same type, it was the import block portion.

e.g. Without context

import:
- stack1
- stack2
- stack3

or with context:

import:
- path: stack1
  context:
    abc: [ 123 ]
- path: stack2
- path: stack3
  context:
    foo: bar

and not

import:
- stack1
- path: stack2
  context: { ... }
- stack3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note, context can be anything

jose.amengual avatar
jose.amengual

correct

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but for right now, you cannot mix a list of string imports and map imports

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes correct (I’ll fix it in Atmos, it just needs to process each import separately; currently it converts the whole YAML into a list of structs and fails if a struct does not conform to the schema)

2023-06-14

Imran Hussain avatar
Imran Hussain

In which file, or for that matter where, does one set the {tenant}-{environment}-{stage}? Where do I set the tenant, the environment, or the stage?

jose.amengual avatar
jose.amengual

in atmos.yaml

jose.amengual avatar
jose.amengual
stacks:
  # Can also be set using `ATMOS_STACKS_BASE_PATH` ENV var, or `--config-dir` and `--stacks-dir` command-line arguments
  # Supports both absolute and relative paths
  base_path: "stacks"
  # Can also be set using `ATMOS_STACKS_INCLUDED_PATHS` ENV var (comma-separated values string)
  included_paths:
     - "**/**"
  # Can also be set using `ATMOS_STACKS_EXCLUDED_PATHS` ENV var (comma-separated values string)
  excluded_paths:
     - "**/_defaults.yaml"
  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{namespace}-{environment}-{stage}"
jose.amengual avatar
jose.amengual

name_pattern

Imran Hussain avatar
Imran Hussain

Cheers, I get that, but what are the values of namespace, environment, and stage? Where do I set those values?

Imran Hussain avatar
Imran Hussain

is there a way to list the default values

jose.amengual avatar
jose.amengual

you will put them on your stack file

vars:
  environment: lab
  namespace: pepe
  stage: uw2
  region: us-west-2

components:
  terraform:
    vpc:
jose.amengual avatar
jose.amengual

top-level vars are global (scoped to the stack file) variables

Imran Hussain avatar
Imran Hussain

Cheers, and then the stack must be named in accordance with the naming convention {namespace}-{environment}-{stage}

jose.amengual avatar
jose.amengual

correct

jose.amengual avatar
jose.amengual

the file name and location do not matter

jose.amengual avatar
jose.amengual

but the vars values are what dictate which stack these components belong to
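for example, with the name_pattern above, something like this (illustrative values) makes the components addressable as the stack pepe-lab-uw2:

# atmos.yaml
stacks:
  name_pattern: "{namespace}-{environment}-{stage}"

# any stack file - these vars resolve its components to the stack "pepe-lab-uw2"
vars:
  namespace: pepe
  environment: lab
  stage: uw2

components:
  terraform:
    vpc: {}

# then: atmos terraform plan vpc -s pepe-lab-uw2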

Imran Hussain avatar
Imran Hussain

Ok let me have a try and see

Imran Hussain avatar
Imran Hussain

hmm, no luck. There must be something wrong with the setup

Imran Hussain avatar
Imran Hussain
.
├── components
│  └── terraform
│     └── infra-init
│        ├── component.yaml
│        ├── main.tf
│        ├── providers.tf
│        ├── terraform.tf
│        ├── tf_variables.tf
│        └── variables.tfvars
└── stacks
   ├── catalog
   │  └── dvsa-dev-poc.yaml
   ├── mixins
   ├── org
   │  └── dvsa
   │     └── core
   │        ├── dev
   │        ├── prd
   │        ├── pre
   │        └── uat
   ├── terraform
   └── workflows
      └── infra-init.yaml
jose.amengual avatar
jose.amengual

error?

Imran Hussain avatar
Imran Hussain

atmos workflow init/tfstate -f infra-init.yaml -s dvsa-dev-poc --dry-run

Imran Hussain avatar
Imran Hussain

I get no output

Imran Hussain avatar
Imran Hussain

and no error

Imran Hussain avatar
Imran Hussain

echo $? returns 0

jose.amengual avatar
jose.amengual

you need to show your stack file

Imran Hussain avatar
Imran Hussain

dvsa-dev-poc.yaml:

vars:
  environment: dev
  namespace: dvsa
  stage: poc
  region: eu-west-1

components:
  terraform:
    infra-init/defaults:
      metadata:
        workspace_enabled: false

jose.amengual avatar
jose.amengual

and the atmos.yaml name pattern?

jose.amengual avatar
jose.amengual

and the workflow file too

Imran Hussain avatar
Imran Hussain
name_pattern: "{tenant}-{environment}-{stage}"
Imran Hussain avatar
Imran Hussain

wait I have not defined a tenant

Imran Hussain avatar
Imran Hussain

let me add that

jose.amengual avatar
jose.amengual

you can always use the describe stacks command to see what atmos finds and renders

Imran Hussain avatar
Imran Hussain
dvsa-dev-poc:
  components:
    terraform:
      infra-init/defaults:
        backend: {}
        backend_type: ""
        command: terraform
        component: infra-init/defaults
        deps:
        - catalog/dvsa-dev-poc
        env: {}
        inheritance: []
        metadata:
          workspace_enabled: false
        remote_state_backend: {}
        remote_state_backend_type: ""
        settings: {}
        vars:
          environment: dev
          namespace: mot
          region: eu-west-1
          stage: poc
          tenant: dvsa
        workspace: dvsa-dev-poc
Imran Hussain avatar
Imran Hussain

The workflow file is

Imran Hussain avatar
Imran Hussain

more infra-init.yaml

workflows:
  init/tfstate:
    description: Provision Terraform State Backend for initial deployment.
    steps:
      - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --auto-generate-backend-file=false
      - command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
        type: shell
      - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --init-run-reconfigure=false

jose.amengual avatar
jose.amengual

what if you do a plan against infra-init/defaults?

jose.amengual avatar
jose.amengual

that needs to work before you can put it on a workflow

jose.amengual avatar
jose.amengual

especially with the deploy command

Imran Hussain avatar
Imran Hussain

I want to plan against infra-init

Imran Hussain avatar
Imran Hussain

it's just going to bootstrap the environment

Imran Hussain avatar
Imran Hussain

create the S3 and DynamoDB backend

jose.amengual avatar
jose.amengual

but you need all the components defined in stacks to be able to use them

Imran Hussain avatar
Imran Hussain

I am using the cloudposse environment bootstrap

jose.amengual avatar
jose.amengual

infra-init/defaults (tfstate-backend) needs to be defined, and plan should work

Imran Hussain avatar
Imran Hussain

there is no backend yet; the backend is local. This workflow will create the backend

jose.amengual avatar
jose.amengual

understood, so you deploy that first; you do not need a workflow for that

jose.amengual avatar
jose.amengual

but you could use a workflow

Imran Hussain avatar
Imran Hussain

I want it all captured, as this is a POC to use as a Ref Arch for other teams to accelerate their provisioning of multiple environments

Imran Hussain avatar
Imran Hussain

if this can be captured as a workflow it's a bit easier

Imran Hussain avatar
Imran Hussain

Maybe the workflow needs to be updated

Imran Hussain avatar
Imran Hussain

workflows:
  init/tfstate:
    description: Provision Terraform State Backend for initial deployment.
    steps:
      - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --auto-generate-backend-file=false
      - command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
        type: shell
      - command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack infra-init --init-run-reconfigure=false

jose.amengual avatar
jose.amengual

no problem, but you need to test first that a plan works before the workflow

jose.amengual avatar
jose.amengual

if you run plan and it fails, the workflow will never work

Imran Hussain avatar
Imran Hussain

Even if it fails I am Ok with the terraform and how to fix that. I am just trying to get all the pieces of atmos figured out so I can use it

Imran Hussain avatar
Imran Hussain

But it should bubble up the errors

Imran Hussain avatar
Imran Hussain

I am just running terraform

jose.amengual avatar
jose.amengual

I guess it should

jose.amengual avatar
jose.amengual

export ATMOS_LOGS_LEVEL=Trace and see

Imran Hussain avatar
Imran Hussain
❯ export ATMOS_LOGS_LEVEL=Trace

user: imranhussain on C02G32WUML85 atmos/stacks/catalog on  main [?] on ☁️  dvsarecallsmgmt (eu-west-1)
❯ atmos workflow init/tfstate -f infra-init.yaml -s dvsa-dev-poc --dry-run

Executing the workflow 'init/tfstate' from '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/workflows/infra-init.yaml'

description: Provision Terraform State Backend for initial deployment.
steps:
- name: step1
  command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack
    dvsa-dev-poc --auto-generate-backend-file=false
- name: step2
  command: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
  type: shell
- name: step3
  command: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack
    dvsa-dev-poc --init-run-reconfigure=false

Executing workflow step: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --auto-generate-backend-file=false
Stack: dvsa-dev-poc

Executing command:
/Users/imranhussain/brew/bin/atmos terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --auto-generate-backend-file=false -s dvsa-dev-poc
Executing workflow step: until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done

Executing command:
until aws s3 ls dvsa-core-eu-west-1-root-tfstate; do sleep 5; done
Executing workflow step: terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --init-run-reconfigure=false
Stack: dvsa-dev-poc

Executing command:
/Users/imranhussain/brew/bin/atmos terraform deploy tfstate-backend -var=access_roles_enabled=false --stack dvsa-dev-poc --init-run-reconfigure=false -s dvsa-dev-poc
Imran Hussain avatar
Imran Hussain

So it looks as if it is trying to run the workflow

Imran Hussain avatar
Imran Hussain

but then it does not show any errors

jose.amengual avatar
jose.amengual

that sleep is pretty hacky, you do not need that

jose.amengual avatar
jose.amengual

that could be messing things up

Imran Hussain avatar
Imran Hussain

I just copied and pasted it from what Eric sent me

Imran Hussain avatar
Imran Hussain

I do not see a component listed

jose.amengual avatar
jose.amengual

like I said, try running plan first, make sure it works, then run your workflow

Imran Hussain avatar
Imran Hussain

OK so go to the component directory and just run the plan

jose.amengual avatar
jose.amengual

no no

jose.amengual avatar
jose.amengual

atmos terraform plan componentname -s stackname

Imran Hussain avatar
Imran Hussain
atmos terraform plan infra-init -s dvsa-dev-poc

Found stack config files:
- catalog/dvsa-dev-poc.yaml
- workflows/infra-init.yaml

Found config for the component 'infra-init' for the stack 'dvsa-dev-poc' in the stack config file 'catalog/dvsa-dev-poc'

Variables for the component 'infra-init' in the stack 'dvsa-dev-poc':
environment: dev
namespace: mot
region: eu-west-1
stage: poc
tenant: dvsa

Writing the variables to file:
/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/components/terraform/infra-init/dvsa-dev-poc-infra-init.terraform.tfvars.json

Executing command:
/Users/imranhussain/.asdf/shims/terraform init -reconfigure

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading registry.terraform.io/cloudposse/tfstate-backend/aws 1.1.1 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.bucket_label...
- terraform_state_backend.bucket_label in .terraform/modules/terraform_state_backend.bucket_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.replication_label...
- terraform_state_backend.replication_label in .terraform/modules/terraform_state_backend.replication_label
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for terraform_state_backend.this...
- terraform_state_backend.this in .terraform/modules/terraform_state_backend.this
╷
│ Error: Unsupported Terraform Core version
│
│   on terraform.tf line 9, in terraform:
│    9:   required_version = "~> 1.4.6"
│
│ This configuration does not support Terraform version 1.5.0. To proceed, either choose another supported Terraform version or update this version
│ constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

exit status 1
Imran Hussain avatar
Imran Hussain

So at least it tries to do something

Imran Hussain avatar
Imran Hussain

maybe if I removed the --dry-run I would have seen it actually do the actions

jose.amengual avatar
jose.amengual

yes

Imran Hussain avatar
Imran Hussain

so maybe the workflow would have worked if I removed the --dry-run and spit out the same errors

Imran Hussain avatar
Imran Hussain

Alright. I have some TF errors and I can sort those out. I will then go back to the workflow

Imran Hussain avatar
Imran Hussain

Error: configuring Terraform AWS Provider: validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 00a12726-972f-463a-bb70-1435e422efab, api error ExpiredToken: The security token included in the request is expired

Imran Hussain avatar
Imran Hussain

but that's just the auth into AWS. I am assuming that atmos works with environment variables for AWS_SECRET_KEY and so forth

Imran Hussain avatar
Imran Hussain

The plan worked

Imran Hussain avatar
Imran Hussain

I think I do not understand the vars correctly: are they automatically passed into the terraform run?

Imran Hussain avatar
Imran Hussain

But at least I have a plan

Imran Hussain avatar
Imran Hussain

Thanks for all the help really appreciate it.

Imran Hussain avatar
Imran Hussain

Is there a way to see what they are currently set to?

2023-06-15

Imran Hussain avatar
Imran Hussain

If I want to pass in the backend option, specifically the key, on each stack that is built, is there a simple way to do that? Specifically, I have some default backend options, and I want to add the key element to that backend configuration at each stack, or better yet have it included as part of the run. Would I use the include option for the stack, or would I use mixins? Also, when the yaml is merged, can I give a strategy to say whether I want to merge or overwrite? Does this make sense?

Imran Hussain avatar
Imran Hussain

Define in some file that is included:

backend:
  s3:
    bucket: "somebucket"
    region: "someregion"
    encrypt: true
    dynamodb_table: "somelocktable"
    kms_key_id: "somerole"

Then in the stack have something like:

backend:
  s3:
    key: "somekey"

jose.amengual avatar
jose.amengual

you define that as a global file that you import to each stack or you add it to each stack

jose.amengual avatar
jose.amengual

each stack = logical stack defined by a name

jose.amengual avatar
jose.amengual

if you are talking about templatizing the backend config, I do not know if that is possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you define the backend config in the Org _defaults.yaml file

  backend:
    s3:
      encrypt: true
      bucket: "cp-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "cp-ue2-root-tfstate-lock"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then, regarding workspace_key_prefix for each component: if you don’t specify it, then it will be generated automatically using the Atmos component name, for example

components:
  terraform:
    vpc:
      metadata:
        component: vpc
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in this case, workspace_key_prefix will be vpc

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can override workspace_key_prefix per component (if needed for some reason) like so:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    vpc:
      metadata:
        component: vpc
      backend:
        s3:
          workspace_key_prefix: infra-vpc
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you import this Org global file https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/_defaults.yaml#L8 into the stacks (together with the component config)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and then Atmos deep merges the backend section (including the s3 subsection) when executing commands like atmos terraform plan/apply vpc -s <stack>

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

from the example above, the final deep-merged config for the vpc component would look like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  backend:
    s3:
      encrypt: true
      bucket: "cp-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "cp-ue2-root-tfstate-lock"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
      workspace_key_prefix: infra-vpc
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and finally, using this final backend config, Atmos auto-generates the backend file in the component folder, which terraform then uses

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to summarize:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Define just the common fields for the backend in some Org-level global config file
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Import that config file into all stacks
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. For each component, don’t specify workspace_key_prefix; it will be calculated automatically (unless you want to override it for any reason)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. The final backend file will be auto-generated by Atmos before calling the terraform commands. When executing atmos terraform plan <component> -s <stack>, Atmos will generate the backend file with all the required fields in the component folder, then call terraform plan/apply, and terraform will use the generated backend file
Imran Hussain avatar
Imran Hussain

Good Morning. Thanks for the great support from all. I have created the defaults file to use

Imran Hussain avatar
Imran Hussain

So based on my testing the yaml file does not support anchors and aliases

Imran Hussain avatar
Imran Hussain

but I get the general idea

Imran Hussain avatar
Imran Hussain

and the terraform: vars: {}

Imran Hussain avatar
Imran Hussain

does that point to a file like the terraform --var-file option, or is it just the terraform --var option that this yaml map covers?

Imran Hussain avatar
Imran Hussain

And once again thank you for all the support and guidance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can you share what you did for YAML anchors? We definitely support it (it’s built into the go yaml library). Just be advised that there’s no concept of YAML anchors across files, because files are first read as YAML, thus any references are processed at evaluation time of the YAML.
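e.g. anchors within a single stack file work fine - a minimal, illustrative sketch:

vars: &base_vars
  namespace: acme
  stage: dev

components:
  terraform:
    vpc:
      vars:
        # merging an anchored map with <<: requires a map value; aliasing
        # a scalar here fails with "map merge requires map or sequence of
        # maps as the value"
        <<: *base_vars
        name: vpc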

Imran Hussain avatar
Imran Hussain

Sorry. I did not read the error message too well. I was trying to alias a value but it wanted a map, i.e. I was doing this:

vars:
  environment: dev
  namespace: mot
  tenant: dvsa
  stage: poc
  region: &region eu-west-1

terraform:
  backend_type: s3 # s3, remote, vault, static, azurerm, etc.
  backend:
    s3:
      encrypt: true
      bucket: "dvsa-poc-terraform-state"
      dynamodb_table: "dvsa-poc-terraform-state-lock"
      <<: *region
      role_arn: null

when I should have been doing this:

vars:
  environment: dev
  namespace: mot
  tenant: dvsa
  stage: poc
  region: &region eu-west-1

terraform:
  backend_type: s3 # s3, remote, vault, static, azurerm, etc.
  backend:
    s3:
      encrypt: true
      bucket: "dvsa-poc-terraform-state"
      dynamodb_table: "dvsa-poc-terraform-state-lock"
      region: *region
      role_arn: null

The error that was thrown which I failed to read correctly was

invalid stack config file 'org/dvsa/_defaults.yaml' yaml: map merge requires map or sequence of maps as the value

Which clearly, on a second read, shows it's a type error where it is expecting a map and I am passing in a scalar value

Imran Hussain avatar
Imran Hussain

then the merged dict would include both

2023-06-16

Imran Hussain avatar
Imran Hussain

A few more questions around the import statements that are at the top: is there a way to template them? I have:

import:
  - org/dvsa/_defaults
  - org/dvsa/dev/_defaults

What I want to do is make this generic so it can be picked up by default based on the vars I have set, something along the lines of:

import:
  - org/_defaults
  - org/{tenant}/_defaults
  - org/{tenant}/{namespace}/{app_region}/_defaults
  - org/{tenant}/{namespace}/{app_region}/{environment}/_defaults

So the later defaults can override the ones that came before them if they so wish, without being explicit in defining them.

So I have multiple orgs, and we abstract what is common to all orgs into the _defaults.yaml at that level. Then for each org we define the _defaults.yaml at that level. Then I have a logical app_region, one of "dev, uat, prd"; at that level there is a _defaults.yaml that is common across all dev environments. Then I have multiple environments which all have their own _defaults.yaml, which can then be merged into the components, which can override them at the environment level or add to them depending on what is needed. Or is there a different way to implement what I need to do?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, have you considered organizing the infra the way AWS organizes it?

e.g.

org → ou (tenant) → account → region → resources

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in our case, we dedicate the account to the “Environment” (aka stage)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and we dedicate the org to the namespace.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

by following this model, when you look in AWS web console it closely models what you also see in IaC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(replace my → with a / for folders)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and “resources” can be further broken out by something like “app” or “layer”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. we do “network”, “compliance”, “data”, “platform” layers

Imran Hussain avatar
Imran Hussain

We have multiple development teams that each have their own development environment in the same account, so org/department/project/env/env1..env20

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

IMO, the department ~ OU

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the OU has multiple stages, and the projects probably work together to some degree within the stage

Imran Hussain avatar
Imran Hussain

This is how the AWS account structure has been laid out. It's legacy. There is no budget or appetite to change this multi-tenant situation. Larger discussions are in place to maybe move to one account == environment, but that's been pushed out by 6 months

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Got it, yes, if working within existing conventions, then best to stick with them!

Imran Hussain avatar
Imran Hussain

Is there a way to use templating for the yaml, like {environment} or {tenant}, like atmos terraform generate varfiles --file-template {component-path}/{environment}-{stage}.tfvars.json?

Imran Hussain avatar
Imran Hussain

Is {component-path} built in? And can it be used in the _defaults.yaml?

Imran Hussain avatar
Imran Hussain

Digging around in the git repo I can see templates and I did see # - path: “mixins/region/{{ .region }}” in https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/eks_cluster_tmpl_hierarchical.yaml

import:
  # Use `region_tmpl` `Go` template and provide `context` for it.
  # This can also be done by using `Go` templates in the import path itself.
  # - path: "mixins/region/{{ .region }}"
  - path: mixins/region/region_tmpl
    # `Go` templates in `context`
    context:
      region: "{{ .region }}"
      environment: "{{ .environment }}"

  # `Go` templates in the import path
  - path: "orgs/cp/{{ .tenant }}/{{ .stage }}/_defaults"

components:
  terraform:
    # Parameterize Atmos component name
    "eks-{{ .flavor }}/cluster":
      metadata:
        component: "test/test-component"
      vars:
        # Parameterize variables
        enabled: "{{ .enabled }}"
        name: "eks-{{ .flavor }}"
        service_1_name: "{{ .service_1_name }}"
        service_2_name: "{{ .service_2_name }}"
        tags:
          flavor: "{{ .flavor }}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use Go templates in 1) the names of the imported files; 2) inside the imported files in any section

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It talks about how to do it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You pass the context to the import. Your context can be anything you want.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s say you have a file called catalog/mixins/app_project.yaml

that looked like

import:
  - org/_defaults
  - org/{{ .tenant }}/_defaults
  - org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/_defaults
  - org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults

then I had a file in org/{tenant}/{namespace}/{app_region}/{environment}/api.yaml

In that file, I would pass:

import:
- path: mixins/app_project
  context:
    tenant: foo
    namespace: bar
    app_region: use1
    environment: dev
1
Imran Hussain avatar
Imran Hussain

Good Morning. :smile: Maybe I have missed something?

I have:

import:
  - org/_defaults
  - org/{{ .tenant }}/_defaults
  - org/{{ .tenant }}/{{ .environment }}/_defaults

components:
  terraform:
    infra-init:
      metadata:
        workspace_enabled: false
      backend:
        s3:
          workspace_key_prefix: infra-vpc-dev
      vars:
        account_id: "123456782"

and at org/_defaults.yaml I define the variables

Imran Hussain avatar
Imran Hussain

vars:
  environment: dev
  namespace: mot
  tenant: dvsa
  stage: poc

and I get the following error

Imran Hussain avatar
Imran Hussain

no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')

Imran Hussain avatar
Imran Hussain

If I change the approach using the context then I get the same error:

import:
  - org/_defaults
  - org/{{ .tenant }}/_defaults
  - org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/_defaults
  - org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults

using the following stack:

import:
  - path: mixins/project
    context:
      tenant: dvsa
      app_region: dev
      environment: dev01
      namespace: mot

components:
  terraform:
    infra-init:
      metadata:
        workspace_enabled: false
      backend:
        s3:
          workspace_key_prefix: infra-vpc-dev
      vars:
        account_id: "123456782"

I get the following error:

no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')

The file tree looks like the below:

├── catalog
│   ├── dvsa-dev-poc.yaml -> dvsa-dev-poc.yaml.bc
│   ├── dvsa-dev-poc.yaml.bc
│   └── dvsa-dev-poc.yaml.tmpl
├── mixins
│   └── project.yaml
├── org
│   ├── _defaults.yaml
│   └── dvsa
│       ├── _defaults.yaml
│       └── mot
│           ├── _defaults.yaml
│           ├── dev
│           │   ├── _defaults.yaml
│           │   └── dev01
│           │       └── _defaults.yaml
│           ├── prd
│           ├── pre
│           └── uat
├── terraform
└── workflows
    └── infra-init.yaml

I seem to be missing something with respect to the templating aspect, maybe due to when the variables are exposed or in what order they are exposed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(nitpick, please use ``` instead of single ` for multi-line code blocks)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) I can’t see what’s wrong

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am curious, why the symlink? dvsa-dev-poc.yaml -> dvsa-dev-poc.yaml.bc

Imran Hussain avatar
Imran Hussain

I was testing a few things with different ways of doing the templating, and it was just a quick way to iterate over the different approaches I wanted to try

Imran Hussain avatar
Imran Hussain

Just being lazy I suppose

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Imran Hussain let’s review a few things in your config:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

import format like this

import:
  - org/_defaults
  - org/{{ .tenant }}/_defaults
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

does not support Go templating

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to use import with path and context (context is optional)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once
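applied to your mixin, the supported form would be something like this (a sketch using the same paths, with the context supplied by the importing stack):

# mixins/project.yaml - every import uses the `path` form
import:
  - path: org/_defaults
  - path: "org/{{ .tenant }}/_defaults"
  - path: "org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/_defaults"
  - path: "org/{{ .tenant }}/{{ .namespace }}/{{ .app_region }}/{{ .environment }}/_defaults"

# top-level stack - supplies the context for the mixin's templates
import:
  - path: mixins/project
    context:
      tenant: dvsa
      namespace: mot
      app_region: dev
      environment: dev01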

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want us to review your config, please DM the source code and we’ll review

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how you use Go templates in diff parts of the code - import paths, context, and component names https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/catalog/terraform/eks_cluster_tmpl_hierarchical.yaml

import:
  # Use `region_tmpl` `Go` template and provide `context` for it.
  # This can also be done by using `Go` templates in the import path itself.
  # - path: "mixins/region/{{ .region }}"
  - path: mixins/region/region_tmpl
    # `Go` templates in `context`
    context:
      region: "{{ .region }}"
      environment: "{{ .environment }}"

  # `Go` templates in the import path
  - path: "orgs/cp/{{ .tenant }}/{{ .stage }}/_defaults"

components:
  terraform:
    # Parameterize Atmos component name
    "eks-{{ .flavor }}/cluster":
      metadata:
        component: "test/test-component"
      vars:
        # Parameterize variables
        enabled: "{{ .enabled }}"
        name: "eks-{{ .flavor }}"
        service_1_name: "{{ .service_1_name }}"
        service_2_name: "{{ .service_2_name }}"
        tags:
          flavor: "{{ .flavor }}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:

  # This import with the provided hierarchical context will dynamically generate
  # a new Atmos component `eks-blue/cluster` in the `tenant1-uw1-test1` stack
  - path: catalog/terraform/eks_cluster_tmpl_hierarchical
    context:
      # Context variables for the EKS component
      flavor: "blue"
      enabled: true
      service_1_name: "blue-service-1"
      service_2_name: "blue-service-2"
      # Context variables for the hierarchical imports
      # `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
      tenant: "tenant1"
      region: "us-west-1"
      environment: "uw1"
      stage: "test1"

  # This import with the provided hierarchical context will dynamically generate
  # a new Atmos component `eks-green/cluster` in the `tenant1-uw1-test1` stack
  - path: catalog/terraform/eks_cluster_tmpl_hierarchical
    context:
      # Context variables for the EKS component
      flavor: "green"
      enabled: false
      service_1_name: "green-service-1"
      service_2_name: "green-service-2"
      # Context variables for the hierarchical imports
      # `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
      tenant: "tenant1"
      region: "us-west-1"
      environment: "uw1"
      stage: "test1"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so you have hierarchical imports (two level) and you provide the context for all of them in the top-level stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  - path: catalog/terraform/eks_cluster_tmpl_hierarchical
    context:
      # Context variables for the EKS component
      flavor: "blue"
      enabled: true
      service_1_name: "blue-service-1"
      service_2_name: "blue-service-2"
      # Context variables for the hierarchical imports
      # `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
      tenant: "tenant1"
      region: "us-west-1"
      environment: "uw1"
      stage: "test1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

btw, these examples https://github.com/cloudposse/atmos/tree/master/examples/complete are working examples (including components and stacks) that are used for Atmos testing. Although the examples are not for a real infra (they have a lot of things just for testing, including some errors and validation errors which Atmos tests find and report), all Atmos features are covered by the examples

Imran Hussain avatar
Imran Hussain
atmos describe stacks --components=infra-init -s dvsa-dev-poc
Imran Hussain avatar
Imran Hussain

the command I run

Imran Hussain avatar
Imran Hussain
no matches found for the import 'org/{{ .tenant }}/_defaults' in the file 'mixins/project.yaml'
Error: failed to find a match for the import '/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org/{{ .tenant }}/_defaults.yaml' ('/Users/imranhussain/Documents/Projects/DVSA/code/mot-container-poc/infra/atmos/stacks/org' + '{{ .tenant }}/_defaults.yaml')
Imran Hussain avatar
Imran Hussain

The error I get

Imran Hussain avatar
Imran Hussain

I use direnv to set up my environment variables to find the atmos.yaml

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Imran Hussain

Has your question been answered?

Imran Hussain avatar
Imran Hussain

Yes it was.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we updated included_paths in atmos.yaml to the correct values

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Gabriela Campana (Cloud Posse) let’s create a task to improve Atmos docs to better describe all settings in atmos.yaml for the sections included_paths and excluded_paths (so we don’t forget). A few people have already asked the same questions

Jawn avatar

I fear this is a dumb question - For the Atmos quickstart, are you supposed to work completely out of the sample repo? I’m at the Create Components section of the quick start, and when I run atmos vendor pull --component infra/vpc I get this error

failed to find a match for the import '/Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/**/*.yaml'

I basically made a new repo, created the stacks and components folders, and added the yaml files for vpc and vpc-flow-control-logs as instructed, and was going to pull the component directories down. I feel I could probably fix this by grabbing the entire example repo and working out of that, but I thought the quick start would guide the build-out more from scratch. Any help would be great (edited)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe this is an error we should eliminate @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this happens if you don’t even have a single file in that directory structure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It just errors out. if you did this, it will probably work:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
mkdir -p /Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/test
touch /Users/johnfahl/blah/terraform-atmos-learn/stacks/orgs/test/test.yaml
1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it will probably work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I basically made a new repo, create the stacks and components folders and added the yaml files for vpc and vpc-flow-control-logs as instructed and was going to pull the component directories down.
That sounds right so far.

Do you also have an atmos.yaml?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t have any stack config files, then any import will and should fail (similar to imports in languages like Java, Python, etc.)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jawn we can review your setup, you can DM us anytime with your code

Jawn avatar

Thanks Erik and Andriy. Erik, the mkdir + touch did the trick. VPC threw an error (relative paths require a module with a pwd), but it seemed to work and pulled down the files. I did create the atmos.yaml per the instructions at ~/.atmos/atmos.yaml

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Jawn avatar

Yes, that’s exactly what I was walking through when I ran into an issue. Starting from the “Quick Start” and getting to the page “Create Components” https://atmos.tools/quick-start/create-components On this page, you won’t be able to pull the components with atmos vendor pull unless you create the directory and touch the file. I’m sure it would have worked had I manually copied the files in

Create Components | atmos

In the previous steps, we’ve configured the repository, and decided to provision the vpc-flow-logs-bucket and vpc Terraform

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse) do we need to take action here? Or fix the error that @Erik Osterman (Cloud Posse) mentioned?
I believe this is an error we should eliminate @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’m not sure what can be done here. We can make the error more detailed (but it already says that no files were found). If you don’t have any stack config files and you run Atmos commands, it will not find any files and error out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, atmos vendor pull needs the component folder to already exist (for the simple reason that the folder must already contain the file component.yaml, which describes how to vendor the component)
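For context, a minimal sketch of such a component.yaml (the names, subpath, and version are illustrative):

# components/terraform/infra/vpc/component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc
  description: Source for the vpc component
spec:
  source:
    # pull the component subfolder from the upstream repo at the pinned version
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
    version: 1.160.0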

Jawn avatar

How about adding this to the instructions:

run this
mkdir -p ~/$REPO/stacks/orgs/test
touch ~/$REPO/stacks/orgs/test/test.yaml
Jawn avatar

as a temp workaround

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it def does not work with empty folders (components and stacks)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can def improve our docs (but yes, the docs can always be improved)

Jawn avatar

Are the quick start docs produced from a repo? I’ll submit a PR

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

contributions are welcome, thank you

Jawn avatar

Tried pushing a branch to create the PR: Permission to cloudposse/atmos.git denied to thedarkwriter.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to fork the repo, then open a PR

1
Jawn avatar

I submitted the PR. Sorry to be that guy, but as I move forward to the Provision section, it seems like there is more missing scaffolding.

I pulled the files in Create Components and created the files in Create Stacks. Now when I run the apply command on the Provision page, atmos terraform apply vpc-flow-logs-bucket-1 -s core-ue2-dev, I get an error that looks like file(s) in an account-map directory (which isn’t created) are missing.

- iam_roles in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵

╷
│ Error: Failed to read module directory
│
│ Module directory  does not exist or cannot be read.
╵

╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../account-map: no such file or directory
╵

╷
│ Error: Failed to read module directory
│
│ Module directory  does not exist or cannot be read.
╵

exit status 1

I do see that terraform init did run in the module directory

(⎈ |docker-desktop:default)johnfahl:terraform-atmos-learn/ $ ll components/terraform/infra/vpc-flow-logs-bucket                                                                                                                                    [11:12:04]
total 28
drwxr-xr-x 10 johnfahl staff  320 Jun 19 14:48 .
drwxr-xr-x  4 johnfahl staff  128 Jun 15 15:57 ..
drwxr-xr-x  3 johnfahl staff   96 Jun 19 14:48 .terraform
-rw-r--r--  1 johnfahl staff 3593 Jun 15 15:56 component.yaml
-rw-r--r--  1 johnfahl staff  246 Jun 21 11:06 core-ue2-dev-infra-vpc-flow-logs-bucket-1.terraform.tfvars.json
-rw-r--r--  1 johnfahl staff  887 Jun 18 14:56 main.tf
-rw-r--r--  1 johnfahl staff  268 Jun 18 14:56 outputs.tf
-rw-r--r--  1 johnfahl staff  492 Jun 18 14:56 providers.tf
-rw-r--r--  1 johnfahl staff 1937 Jun 18 14:56 variables.tf
-rw-r--r--  1 johnfahl staff  315 Jun 18 14:56 versions.tf
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Jawn w/o looking at the code, it’s not possible to say anything about the issues

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
provider "aws" {
  region = var.region

  # Profile is deprecated in favor of terraform_role_arn. When profiles are not in use, terraform_profile_name is null.
  profile = module.iam_roles.terraform_profile_name

  dynamic "assume_role" {
    # module.iam_roles.terraform_role_arn may be null, in which case do not assume a role.
    for_each = compact([module.iam_roles.terraform_role_arn])
    content {
      role_arn = module.iam_roles.terraform_role_arn
    }
  }
}

module "iam_roles" {
  source  = "../account-map/modules/iam-roles"
  context = module.this.context
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which means it’s looking for the account-map component https://github.com/cloudposse/terraform-aws-components/tree/main/modules/account-map
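If you are not provisioning Cloud Posse’s account-map component, one workaround for the quickstart (a sketch, not official guidance) is to simplify the vendored providers.tf so it no longer references the missing module:

provider "aws" {
  region = var.region
  # Credentials come from the ambient environment (env vars, shared config,
  # or an assumed role) instead of the account-map iam-roles lookup.
}

# ...and remove the module "iam_roles" block shown above.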

2023-06-17

Release notes from atmos avatar
Release notes from atmos
11:24:35 PM

v1.36.0 what Add timeout parameter to atmos validate component command Add timeout parameter to settings.validation section in the stack config Update docs why

If validation is configured for a component, Atmos executes the configured OPA Rego policies. If a policy is misconfigured (e.g. invalid Rego syntax or import), the validation can take a long time and eventually fail. Use the --timeout parameter to specify the required timeout

The timeout (in seconds) can be specified on the command line:…
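For illustration, the two places the timeout can be set look roughly like this (the component, stack, and policy names are hypothetical):

# on the command line (timeout in seconds)
atmos validate component infra/vpc -s tenant1-ue2-dev --timeout 15

# or in the stack config for the component
components:
  terraform:
    infra/vpc:
      settings:
        validation:
          validate-vpc-with-opa-policy:
            schema_type: opa
            schema_path: vpc/validate-vpc-component.rego
            timeout: 15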

Release v1.36.0 · cloudposse/atmos

what

Add timeout parameter to atmos validate component command Add timeout parameter to settings.validation section in the stack config Update docs

why

If validation is configured for a compone…


2023-06-18

2023-06-19

Michael Dizon avatar
Michael Dizon

has anyone migrated a stack from an older implementation of atmos (implemented a little over a year ago) to the latest?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we do it all the time, let us know if you need help

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a migration doc somewhere

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos component migrations in YAML config | atmos

Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Were you able to find what you were looking for?

Michael Dizon avatar
Michael Dizon

going to start on it next week

1
Michael Dizon avatar
Michael Dizon

what’s supposed to be in the second bullet point? plat-gbl-sandbox plat-ue2-dev

https://atmos.tools/tutorials/atmos-component-migrations-in-yaml/#migrating-state-manually

Atmos component migrations in YAML config | atmos

Learn how to migrate an Atmos component to a new name or to use the metadata.inheritance.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB see: looks like some markdown formatting issues and missing copy.

2023-06-20

YoungChool Kim avatar
YoungChool Kim

Hi guys, is there anyone who can help me do atmos vendor pull in scp-style?

# Questions Hi, when I do vendor pull with the atmos CLI using ssh to clone the code, it fails. It works when I configure the uri with the http protocol, but fails with url-style or scp-style.

Here is the component description I have:

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: bucket
  description: A bucket to build
spec:
  source:
    uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}} # working
    # uri: git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}} # case 1 - not working
    # uri: git::ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}} # case 2 - not working
    version: 0.47.1

This is the error message I got.

# case 1

root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1' and writing to 'components/terraform/infra/bucket'

relative paths require a module with a pwd

# case 2

root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git::ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1' and writing to 'components/terraform/infra/bucket'

error downloading 'ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref=0.47.1': /usr/bin/git exited with 128: fatal: not a git repository (or any of the parent directories): .git

Please help me out to pull Terraform modules with atmos. Thank you in advance.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos uses https://github.com/hashicorp/go-getter to load the files

hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whatever it supports will work (if a protocol is not supported, it will not work)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, we use this style

github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the other two were not completely tested, I’m not sure if they work correctly, or where the issue might be

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why did you not use /// in git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe it will work if you use it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make a copy of 3rd-party components in your own repo.

YoungChool Kim avatar
YoungChool Kim

Let me briefly test with your suggestion!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i personally did not try the ssh:// scheme, not sure if it’s working or not

YoungChool Kim avatar
YoungChool Kim

Hmm. Seems not working…

root@075977a85269:/atmos# atmos vendor pull -c infra/bucket
Pulling sources for the component 'infra/bucket' from 'git@github.com/cloudposse/terraform-aws-ec2-instance.git///?ref=0.47.1' and writing to 'components/terraform/infra/bucket'

relative paths require a module with a pwd
YoungChool Kim avatar
YoungChool Kim

Actually I googled a lot for the error message “relative paths require a module with a pwd” but couldn’t find the exact reason for it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://github.com/hashicorp/nomad/issues/8969 - looks like there are a lot of issues regarding this; even HashiCorp is having them with their own go-getter

#8969 Using artifact stanza errors with relative paths require a module with a pwd

Nomad version

Nomad v0.12.5

Operating system and Environment details

• macOS 10.15.6 • Ubuntu 20.04

Issue

I’m new to Nomad and I believe this is more of a documentation issue and wanted to put this somewhere other people can find easily. I spent way too much time trying to resolve this and it was a little bit of a burden to getting started with Nomad.

When using the artifact stanza to clone a Git repository, I was getting the error relative paths require a module with a pwd. It took quite a long time and digging to find out how the artifact stanza really worked as well as how to resolve the error.

Reproduction steps

Create a Job file with an artifact and using the docker exec driver. See the missing

Job file (that causes the error)

job "example" {
  datacenters = [
    "dc1"
  ]

  type = "batch"

  group "web" {
    task "setup" {
      artifact {
        source = "git@github.com:username/repo.git"
        destination = "local/repository"
        options {
           sshkey = "<key-in-base64>"
         }
      }

      driver = "docker"

      config {
        image = "alpine"
        args = ["ls"]
      }
    }
  }
}

Since it was unclear how the artifact and paths work for the task, the jobs were failing with relative paths require a module with a pwd, which really did not say anything that would help me resolve the issue. I even scoured the documentation in the go-getter repository.

The real issue is that I was not mounting the local directory into the container (I was going step by step to examine the process and filesystem). To resolve the issue I needed to mount the directory using the docker config.volume stanza and the job completed successfully.

Job file (that resolves the error)

job "example" {
  datacenters = [
    "dc1"
  ]

  type = "batch"

  group "web" {
    task "setup" {
      artifact {
        source = "git@github.com:username/repo.git"
        destination = "local/repository"
        options {
           sshkey = "<key-in-base64>"
         }
      }

      driver = "docker"

      config {
        image = "alpine"
        args = ["ls"]

        volumes = [
          "local/repository/:/path/on/container",
        ],
      }
    }
  }
}

I know Nomad is new and this is not an attempt to bash the product, just an illumination on the confusion on how jobs/documentation might need an extra set of eyes from the perspective of someone new to Nomad. More specifically, I think there should be example job files for lots of popular application stacks to make it easier to get started.

YoungChool Kim avatar
YoungChool Kim

This works!

apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
...
    uri: git::git@github.com:cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}

Note that I put git:: in front of the URL I tested last.

YoungChool Kim avatar
YoungChool Kim

Thank you for helping me with this! @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

np

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

from the doc

The git getter accepts both URL-style SSH addresses like git::ssh://git@example.com/foo/bar, and "scp-style" addresses like git::git@example.com/foo/bar. In the latter case, omitting the git:: force prefix is allowed if the username prefix is exactly git@.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but it looks like omitting it is not allowed even if the username prefix is git@

1
YoungChool Kim avatar
YoungChool Kim

Yes I think so
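To recap the thread, these are the uri forms that worked here versus the ones that did not:

# works: plain HTTPS-style
uri: github.com/cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}

# works: scp-style, but only with the explicit git:: force prefix
uri: git::git@github.com:cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}

# did not work in this thread: scp-style without git::, and the ssh:// form
# uri: git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}
# uri: git::ssh://git@github.com/cloudposse/terraform-aws-ec2-instance.git?ref={{.Version}}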

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, please check this thread

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

quick question about atmos vendor. my component.yaml looks like this

uri: github.com/cloudposse/terraform-aws-ec2-instance.git/?ref={{.Version}}
version: 0.47.1

but when I pull, I get this error:

subdir "%253Fref=0.47.1" not found

how should the url be formatted?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sounds vaguely familiar.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This works!

apiVersion: atmos/v1
kind: ComponentVendorConfig
spec:
  source:
...
    uri: git::git@github.com:cloudposse/terraform-aws-ec2-instance.git///?ref={{.Version}}

Note that I put git:: in front of the URL I tested last.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Glad you got it working.

1
Kubhera avatar
Kubhera

Hi, can we vendor pull from a privately hosted git? I’m trying to pull, but it just creates the components directory and the command simply exits after a while. Can anyone shed some light?

output:

Pulling sources for the component ‘account-configuration’ from ‘…@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref=2.9.7’ into ‘components/terraform/account-configuration’

but I can’t find any files under the specific component folder, it’s empty… anyone has any idea? please help. Here is the vendoring manifest:

apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: accounts-vendor-config
  description: Atmos vendoring manifest
spec:
  # imports or sources (or both) must be defined in a vendoring manifest
  imports: []

  sources:
    # 'source' supports the following protocols: local paths (absolute and relative), OCI (https://opencontainers.org),
    # Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter.
    # In 'source', Golang templates are supported (https://pkg.go.dev/text/template).
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'source'.
    - component: "account-configuration"
      source: "https://CentralCIRepoToken:<my_token_goes here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"
      #source: "github.com/cloudposse/terraform-aws-components.git///?ref={{.Version}}"
      version: "1.2.7"
      targets:
        - "components/terraform/account-configuration"

hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

It worked now :slightly_smiling_face: i had to add the force protocol prefix git::. source: "git::https://CentralCIRepoToken:<my_token_goes here>@gitlab.env.io/enterprise/platform-tooling/terraform-modules/terraform-datautility-aws-account-configuration.git///?ref={{.Version}}"

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Though https://CentralCIRepoToken:<my_token_goes here>@gitlab.env.io is not the recommended implementation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No hardcoded tokens should be committed in URLs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It should “just work” if you have your SSH agent configured. If you’re using it in automation, then you’ll want to use the netrc approach.
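A minimal sketch of the netrc approach for this case (the host, login, and token are placeholders; keep the file out of version control and chmod 600 it):

# ~/.netrc
machine gitlab.env.io
login CentralCIRepoToken
password <token-from-your-secrets-manager>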

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

On GitHub, there’s an action to make this easier: https://github.com/marketplace/actions/setup-netrc

It appears you’re using GitLab, so there’s probably an equivalent way of doing it there.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah yes, i missed the hardcoded token. netrc should be used here

Release notes from atmos avatar
Release notes from atmos
01:44:37 AM

v1.37.0 what Add spacelift_stack and atlantis_project outputs to atmos describe component command Add --include-spacelift-admin-stacks flag to atmos describe affected command Update Atmos docs why

Having the spacelift_stack and atlantis_project outputs from the atmos describe component command is useful when using the command in GitHub actions related to Spacelift and Atlantis

The --include-spacelift-admin-stacks flag for the atmos describe affected command allows including the Spacelift admin…

Release v1.37.0 · cloudposse/atmos

what

Add spacelift_stack and atlantis_project outputs to atmos describe component command Add --include-spacelift-admin-stacks flag to atmos describe affected command Update Atmos docs

why

Havi…


2023-06-21

2023-06-22

Imran Hussain avatar
Imran Hussain

I can see that in this file “https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/tenant1/dev/us-east-2.yaml”, which I think is a stack, you make use of this construct

name: "{tenant}-{environment}-{stage}-{component}"

These are not Go template variables. What are they, where do they come from, and can they be used anywhere?

import:
  - mixins/region/us-east-2
  - orgs/cp/tenant1/dev/_defaults
  - catalog/terraform/top-level-component1
  - catalog/terraform/test-component
  - catalog/terraform/test-component-override
  - catalog/terraform/test-component-override-2
  - catalog/terraform/test-component-override-3
  - catalog/terraform/vpc
  - catalog/terraform/tenant1-ue2-dev
  - catalog/helmfile/echo-server
  - catalog/helmfile/infra-server
  - catalog/helmfile/infra-server-override

vars:
  enabled: true

terraform:
  vars:
    enabled: false

components:
  terraform:
    "infra/vpc":
      vars:
        name: "co!!,mmon"
        ipv4_primary_cidr_block: 10.10.0.0/18
        availability_zones:
          - us-east-2a
          - us-east-2b
          - us-east-2c

settings:
  atlantis:

    # For this `tenant1-ue2-dev` stack, override the org-wide config template specified in `examples/complete/stacks/orgs/cp/_defaults.yaml`
    # in the `settings.atlantis.config_template_name` section
    config_template:
      version: 3
      automerge: false
      delete_source_branch_on_merge: false
      parallel_plan: true
      parallel_apply: false
      allowed_regexp_prefixes:
        - dev/

    # For this `tenant1-ue2-dev` stack, override the org-wide project template specified in `examples/complete/stacks/orgs/cp/_defaults.yaml`
    # in the `settings.atlantis.project_template_name` section
    project_template:
      # generate a project entry for each component in every stack
      name: "{tenant}-{environment}-{stage}-{component}"
      workspace: "{workspace}"
      workflow: "workflow-1"
      dir: "{component-path}"
      terraform_version: v1.3
      delete_source_branch_on_merge: false
      autoplan:
        enabled: true
        when_modified:
          - "**/*.tf"
          - "varfiles/$PROJECT_NAME.tfvars.json"
      apply_requirements:
        - "approved"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the template is used for Atlantis integration https://atmos.tools/integrations/atlantis

Atlantis Integration | atmos

Atmos natively supports Atlantis for Terraform Pull Request Automation.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos processes the templates to create the real Atlantis project name. Those are not Go templates
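For illustration, for a component vpc in a stack with tenant tenant1, environment ue2, and stage dev, the project template above would expand roughly to:

name: "{tenant}-{environment}-{stage}-{component}"   # -> tenant1-ue2-dev-vpc
workspace: "{workspace}"                             # -> the Terraform workspace Atmos computes for the component in the stack
dir: "{component-path}"                              # -> components/terraform/vpc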

Imran Hussain avatar
Imran Hussain

in the same file I also see

{ workflow }
jose.amengual avatar
jose.amengual

any known usage of Atmos on TFC?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you run a custom script on TFC (to run atmos commands to generate varfiles and backend)?

jose.amengual avatar
jose.amengual

I do not know

jose.amengual avatar
jose.amengual

you could potentially run an action beforehand
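Something like this as a pre-plan step could work (a sketch; it assumes atmos is installed on the runner and the component/stack placeholders are filled in):

# generate the varfile and backend config before terraform plan runs
atmos terraform generate varfile <component> -s <stack>
atmos terraform generate backend <component> -s <stack>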

Patrick McDonald avatar
Patrick McDonald

Hello, I’m trying to use remote-state.tf and I’m getting this error, any clues how to solve this?

Error: failed to find a match for the import '/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*.yaml' ('/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs' + '**/*.yaml')
│ 
│ 
│ CLI config:
│ 
│ base_path: ""
│ components:
│   terraform:
│     base_path: components/terraform
│     apply_auto_approve: false
│     deploy_run_init: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(since remote-state is processed by https://github.com/cloudposse/terraform-provider-utils)

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management)

Patrick McDonald avatar
Patrick McDonald

Thanks for the reply! This is the contents of remote-state.tf. I am specifying the path to the atmos.yaml file.

module "remote_state_vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.4.3"

  component = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos/atmos.yaml"
  context = module.this.context
}

Here’s the full output:

╷
│ Error: failed to find a match for the import '/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*.yaml' ('/Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs' + '**/*.yaml')
│ 
│ 
│ CLI config:
│ 
│ base_path: ""
│ components:
│   terraform:
│     base_path: components/terraform
│     apply_auto_approve: false
│     deploy_run_init: true
│     init_run_reconfigure: true
│     auto_generate_backend_file: true
│   helmfile:
│     base_path: ""
│     use_eks: true
│     kubeconfig_path: ""
│     helm_aws_profile_pattern: ""
│     cluster_name_pattern: ""
│ stacks:
│   base_path: stacks
│   included_paths:
│   - orgs/**/*
│   excluded_paths:
│   - '**/_defaults.yaml'
│   name_pattern: '{tenant}-{environment}-{stage}'
│ workflows:
│   base_path: stacks/workflows
│ logs:
│   file: /dev/stdout
│   level: Info
│ commands: []
│ integrations:
│   atlantis:
│     path: ""
│     config_templates: {}
│     project_templates: {}
│     workflow_templates: {}
│ schemas:
│   jsonschema:
│     base_path: ""
│   cue:
│     base_path: ""
│   opa:
│     base_path: ""
│ initialized: false
│ stacksBaseAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks
│ includeStackAbsolutePaths:
│ - /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/orgs/**/*
│ excludeStackAbsolutePaths:
│ - /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/stacks/**/_defaults.yaml
│ terraformDirAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp/components/terraform
│ helmfileDirAbsolutePath: /Users/pmcdonald/workspace/atmos-metrop/components/terraform/transfer-sftp
│ stackConfigFilesRelativePaths: []
│ stackConfigFilesAbsolutePaths: []
│ stackType: ""
│ 
│ 
│   with module.remote_state_vpc.data.utils_component_config.config[0],
│   on .terraform/modules/remote_state_vpc/modules/remote-state/main.tf line 1, in data "utils_component_config" "config":
│    1: data "utils_component_config" "config" {
│ 
╵
Releasing state lock. This may take a few moments...
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

oh, if you are using that in the remote state, you need to set these two variables:

  atmos_base_path       = "<path to the root of the repo>"
  atmos_cli_config_path = "/usr/local/etc/atmos"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and /usr/local/etc/atmos, not /usr/local/etc/atmos/atmos.yaml (don’t include atmos.yaml, just the path to it)

Patrick McDonald avatar
Patrick McDonald

does atmos_base_path require an absolute or relative path?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

absolute

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c the remote state code gets executed from the components folders

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a relative path can’t be used

Patrick McDonald avatar
Patrick McDonald

in a shared env where peoples repo path on disk would be different, how would an absolute path work?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try without atmos_base_path first , just fix the issue with atmos.yaml path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


in a shared env where peoples repo path on disk would be different, how would an absolute path work?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we put it into /usr/local/etc/atmos on the local computer and in a Docker container

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so any Atmos code (atmos binary and the utils provider) can find it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try this

module "remote_state_vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.4.3"

  component = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding the base path

# Base path for components, stacks and workflows configurations.
# Can also be set using `ATMOS_BASE_PATH` ENV var, or `--base-path` command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are independent settings (supporting both absolute and relative paths).
# If `base_path` is provided, `components.terraform.base_path`, `components.helmfile.base_path`, `stacks.base_path` and `workflows.base_path`
# are considered paths relative to `base_path`.
base_path: ""
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use relative paths, but it will depend on which directory you execute the commands from

Patrick McDonald avatar
Patrick McDonald

this worked:

module "remote_state_vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.4.3"

  component             = "vpc"
  atmos_cli_config_path = "/usr/local/etc/atmos"
  atmos_base_path       = "/Users/pmcdonald/workspace/atmos-metrop"
  context               = module.this.context
}
Patrick McDonald avatar
Patrick McDonald

it didn’t work without atmos_base_path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so relative paths are not for all use cases, though they will work in some

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in Docker container (geodesic), we automatically set ATMOS_BASE_PATH to the absolute path of the root of the repo
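Outside geodesic you can approximate the same thing yourself, e.g. in a shell profile or a direnv .envrc (a sketch, assuming the repo root is the Atmos base path):

# .envrc (direnv) or shell profile
export ATMOS_BASE_PATH="$(git rev-parse --show-toplevel)"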

Patrick McDonald avatar
Patrick McDonald

I see.. ok that makes sense

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I understand that all of that is not simple and intuitive. But since the commands get executed from diff folders (atmos CLI from one, Terraform calls all the providers from the components folders), it’s not easy to come up with a generic way where both absolute and relative paths would work in all possible cases

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

something needs to give

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) maybe we should implement search paths, and include the git root as one of them.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in geodesic, we solved all of that by 1) automatically setting ATMOS_BASE_PATH to the absolute path of the root of the repo; and 2) placing atmos.yaml in /usr/local/etc/atmos so it works all the time for all binaries (atmos, terraform, TF providers), and you can execute Atmos commands from any folder (not only from the root of the repo)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but yes, we can prob improve all of that

Patrick McDonald avatar
Patrick McDonald

Andriy - thank you for your help, it’s working now. Each dev on our team can just set their own ATMOS_BASE_PATH env var

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use a Docker container, you can do it automatically for all devs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/geodesic

Geodesic is a DevOps Linux Toolbox in Docker. We use it as an interactive cloud automation shell. It’s the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/

Patrick McDonald avatar
Patrick McDonald

I’ll go down that path once I fully wrap my head around atmos.. baby steps

Release notes from atmos avatar
Release notes from atmos
08:24:37 PM

v1.38.0 what Refactor Atmos components validation with OPA Allow creating a catalog of reusable Rego modules, constants and helper functions to be used in OPA policies Update docs why Atmos supports OPA policies for component validation in a single Rego file and in multiple Rego files. As shown in the example below, you can define some Rego constants, modules and helper functions in a separate file stacks/schemas/opa/catalog/constants/constants.rego, and then import them into the main policy file…

Release v1.38.0 · cloudposse/atmos

what

Refactor Atmos components validation with OPA Allow creating a catalog of reusable Rego modules, constants and helper functions to be used in OPA policies Update docs

why Atmos supports OPA …
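A minimal sketch of the pattern described above (the package names and the example stage check are illustrative, not the exact docs example):

# stacks/schemas/opa/catalog/constants/constants.rego
package atmos.constants

# a reusable constant shared by multiple policies (a set, so membership checks are cheap)
allowed_stages := {"dev", "staging", "prod"}

# main policy file, e.g. stacks/schemas/opa/vpc/validate-vpc-component.rego
package atmos

import data.atmos.constants

errors[message] {
    # set membership: the lookup is true only when the stage is in the allowed set
    not constants.allowed_stages[input.vars.stage]
    message := "stage must be one of dev, staging, prod"
}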


2023-06-26

2023-06-27

2023-06-29

sheldonh avatar
sheldonh

Just revisited atmos and looking forward to exploring the latest. Didn’t realize it was written in Go

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please review https://atmos.tools/category/quick-start and let us know if you need any help

Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh yes, we gave up on Makefile a long time ago (inside joke)

cool-doge1
sheldonh avatar
sheldonh

so does atmos replace the build-harness stuff you used to use with the big makefile?

I ask cause I’d been building out a much less sophisticated Go CLI (from things I built with mage originally) that helps with work tasks: setting up aks access, azure-cli configuration, pull requests (not on github sadly), etc. Curious if you ended up codifying all your CI tasks in this or just the main workflows.

Planning on looking at it soon, no rush. Have an idea for a pr to show something with cobra I ran into that might be neat for ya’ll.

sheldonh avatar
sheldonh

Oh neat! I see some cool concepts to try now, with automatic tfvars generation and more. I see that you probably still mix in the shell effort from build-harness as a separate scope, and codified your workflow orchestration in this project rather than each individual task. Hoping to play around with it next week, as I’m going to be consulting with a team on a big refactor of the terraform stacks at my $work. Might post a few questions on that.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please ask

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can help with the initial setup

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

to be clear, Atmos does not rely on build-harness (geodesic does), it’s a completely self-contained binary written in Go. Our customers manage different infras with Atmos, from very simple ones to stacks with many Orgs, each one having many tenants, with many accounts and regions (which corresponds to more than a thousand infrastructure stacks, for example, in Spacelift)

1
sheldonh avatar
sheldonh

Yeah I got that. I meant that calling linters and other tools seems to still be there, while atmos solves higher-level workflow orchestration. That’s different to how I use Go & mage right now. Looks interesting to try. I might take ya up on that help! Cheers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, right now, atmos isn’t designed to replace tools like Mage or Make, although it can do almost the same things. I suppose my point is it’s optimized more for managing very large configurations, and provides workflow mechanics to work with other tools. No other build tools are optimized to manage thousands of environments (e.g. thousands of microservices).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here you can see some of our inspiration https://atmos.tools/reference/alternatives

Atmos Alternatives | atmos

To better understand where Atmos fits in, it may be helpful to understand some of the alternative tooling it seeks to replace. There are lots of great tools out there and we’re going through a bit of a “DevOps Renaissance” when it comes to creativity on how to automate systems.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos is somewhat of a mashup of go-task, variant, helmfile, and appbuilder: we took the best of all of them and built a tool around it. Helmfile proved very effective for managing large environments. Now imagine if you could use that with any tool. That’s sort of what variant was for, but it wasn’t optimized for specific tools like terraform, so we built native workflow support for terraform. App builder is awesome because it lets you wrap all the CLIs you depend on into a single, documented tool (“help operations teams wrap their myriad shell scripts, multi line kubectl invocations, jq commands and more all in one friendly CLI tool that’s easy to use and share.”). And go-task is a great, simple way of defining workflows. So now take all of that, stick it into one command, and you get atmos

sheldonh avatar
sheldonh

Very cool. I need to go check out app builder, hadn’t heard of that.

one thing I really did like when using Pulumi was creating strongly typed deployment definitions, and generating the yaml from that. I wish that text templating wasn’t the standard go-to, but it’s hard to resist the inertia something like helm has.

I wonder, if terraform had been more prevalent when helm was thought of, whether terraform definitions would have become the de facto way to define deployments instead of yaml files.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: terraform, I’ve wondered the same.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: why go templating of YAML, apparently helm tried to move away from this or add other mechanisms, but it didn’t go anywhere. I think the templating, as ugly as it is, “levels the playing field” for adoption.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See the other thread on Cuelang.

2023-06-30
