#atmos (2023-03)

2023-03-01

Michael Dizon avatar
Michael Dizon

How can I plan or deploy the account module in the gbl region under the mgmt or core tenant in the example within the atmos repo, when the stack name pattern is formatted as "{tenant}-{environment}-{stage}"?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
atmos terraform plan account -s mgmt-gbl-root
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

accounts are provisioned in the root account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to create a YAML config file in stacks/catalog/account/defaults.yaml

components:
  terraform:
    account:
      vars: ...
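For illustration, a fuller defaults.yaml might look like the sketch below (the structure is the important part; the actual inputs depend on the component's variables.tf):

components:
  terraform:
    account:
      metadata:
        # Terraform component folder under components/terraform
        component: account
      vars:
        enabled: true
        # component-specific inputs go here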
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then in your top-level stack for mgmt-gbl-root you need to import that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

assuming this folder structure (and it can be anything suitable to your needs):

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
stacks/
   catalog/
     account/
       defaults.yaml
   orgs/
     org1/
       mgmt/
         root/
            global-region.yaml
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the global-region.yaml file, import the account Atmos component:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:
  - catalog/account/defaults
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then execute

atmos terraform plan account -s mgmt-gbl-root
Michael Dizon avatar
Michael Dizon

excellent, i’ll give that a go when i get back to my desk

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, we did launch the refarch channel, which is more focused on questions like these related to our terraform components)

1

2023-03-06

Viacheslav avatar
Viacheslav

Hi guys, I'm facing strange behavior with stack imports. I have 4 stacks:

./atmos/stacks/backend.yaml - with some vars

./atmos/stacks/base.yaml - with some vars and an import:

import:
- backend.yaml

./environments/advisor/asa/backendoverlay.yaml - with some vars

./environments/advisor/asa/newbase.yaml - with some vars and an import:

import:
- base

The idea is to overwrite the common-layer settings with environment settings - this is why the stacks are located in different folders. All 4 stacks are available in atmos describe stacks. But when I try to modify newbase to also overwrite the backend:

./environments/advisor/asa/newbase.yaml

import:
- base
- backendoverlay

I see this error:

no matches found for the import 'backendoverlay' in the file '/home/viacheslav/work/repos/helmfile-atmos/environments/advisor/asa/base.yaml'
Error: failed to find a match for the import '/home/viacheslav/work/repos/helmfile-atmos/atmos/stacks/backendoverlay.yaml' ('/home/viacheslav/work/repos/helmfile-atmos/atmos/stacks' + 'backendoverlay.yaml')

Why does Atmos look in only one path during import, but in both during describe? Can I import anything from environments/advisor/asa/?

Viacheslav avatar
Viacheslav

atmos.yaml for stacks looks like:

stacks:
  base_path: "atmos/stacks"
  included_paths:
    - "../../environments/advisor/**/*"
    - "**/*"
  excluded_paths:
  - "../../environments/advisor/**/atmos.yaml"
  - "../../environments/advisor/**/secrets.yaml"
  - "../../environments/advisor/**/versions.yaml"
  - "../../environments/advisor/**/stateValues.yaml"
  name_pattern: "{stage}"

ATMOS_BASE_PATH points to the root of the repo (which contains atmos/stacks and environments/advisor folders)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all stacks must be under the base_path - this is the root directory for all stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# Base path for components, stacks and workflows configurations.
# Can also be set using 'ATMOS_BASE_PATH' ENV var, or '--base-path' command-line argument.
# Supports both absolute and relative paths.
# If not provided or is an empty string, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are independent settings (supporting both absolute and relative paths).
# If 'base_path' is provided, 'components.terraform.base_path', 'components.helmfile.base_path', 'stacks.base_path' and 'workflows.base_path'
# are considered paths relative to 'base_path'.
base_path: ""
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the stacks and components folder must be under base_path

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the path to a stack is calculated by base_path (if present) + stacks.base_path + the import

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the path to a terraform component is calculated by base_path (if present) + components.terraform.base_path
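As a worked example of that path calculation (paths are hypothetical), with base_path pointing to the repo root and stacks.base_path set to atmos/stacks:

# atmos.yaml
base_path: "/home/user/repo"   # or ATMOS_BASE_PATH
stacks:
  base_path: "atmos/stacks"

# a stack file with
import:
  - backendoverlay

# resolves the import to
# /home/user/repo + atmos/stacks + backendoverlay.yaml = /home/user/repo/atmos/stacks/backendoverlay.yaml

which is why an import of a file living outside of atmos/stacks (e.g. under environments/advisor/asa) cannot be found.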

Viacheslav avatar
Viacheslav

@Andriy Knysh (Cloud Posse) Thanks, got it. So if I need to use the "environments" folder, or "components", or anything else to import stacks, then I need to set stacks.base_path to the repository root to access all stacks in nested folders, right? I was just confused that I can describe and apply stacks that live outside stacks.base_path, but can't import them. Thanks!

johncblandii avatar
johncblandii

Is there a way to define a stack dependency within the stack yaml?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have very, very limited support for that, and only for Spacelift, not for manual deployments

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for that, you can use workflows

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Workflows | atmos

Workflows are a way of combining multiple commands into one executable unit of work.
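As a sketch of what that could look like (the workflow, component, and stack names below are made up, not from this thread), a workflow can run several atmos commands in a fixed order:

# e.g. stacks/workflows/deploy.yaml
workflows:
  deploy-networking-then-eks:
    description: Provision the VPC first, then the EKS cluster
    steps:
      - command: terraform deploy vpc
        stack: plat-ue1-dev
      - command: terraform deploy eks
        stack: plat-ue1-dev

It would then be invoked with something like atmos workflow deploy-networking-then-eks -f deploy.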

johncblandii avatar
johncblandii

my fault…specifically for spacelift

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

btw, we are working on improvements to it using the latest features that Spacelift added to their TF provider and UI

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so I would not use the old thing that we have (and it’s very limited anyway, and does not work in all cases)

johncblandii avatar
johncblandii

is that something coming out soon’ish?

johncblandii avatar
johncblandii

yup…that’s exactly what i was looking at

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use it on your own now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but we don’t have support for it yet anywhere (we are working on it, don’t know about ETA)

johncblandii avatar
johncblandii

good deal. under a large microscope so can’t lend time right now, but if i get a window I’ll look at adding support

johncblandii avatar
johncblandii

(…manually on this end)

johncblandii avatar
johncblandii

is support for Spaces in already as well @Andriy Knysh (Cloud Posse)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, spaces are working (I’d say not perfectly, some issues come up periodically). @RB was working on them

johncblandii avatar
johncblandii

ok, sweet. I don’t know how we’d use them just yet, but I’m sure we’ll look into that soon

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#129 Adding missing space_id parameter in new global administrator push po…

• This fixes an issue w/ the new administrative GIT_PUSH policy when using spaces (unless otherwise specified, it will be created in legacy, which is a problem for anyone using spaces!)

1
kevcube avatar
kevcube

Hi, I was looking at atmos documentation and I noticed the command atmos vendor diff is not documented. Also some other things:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos vendor diff is not implemented

1
kevcube avatar
kevcube

I was having a very hard time finding something like a complete reference page for component.yaml - something similar to the GitHub Actions reference page that shows every option and lets you click into more relevant docs.

I was specifically looking for documentation on the mixins section of component.yaml, and couldn’t find, so i just copied from other places in our codebase

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos vendor pull is

kevcube avatar
kevcube

ok, i never even dug further, I just saw it here and looked for docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Vendoring | atmos

Use Component Vendoring to make a copy of 3rd-party components in your own repo.

kevcube avatar
kevcube

also some hyperlinks in the docs point you to the docs dir in the atmos repo, which is nearly empty. i’ll try to find an example

kevcube avatar
kevcube

@Andriy Knysh (Cloud Posse) yes I saw that page, but wasn’t sure if those options were exhaustive, and also they don’t really explain themselves. one single example wasn’t enough for me to feel that i have a full understanding of what that file is doing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
# 'vpc' component vendoring config

# 'component.yaml' in the component folder is processed by the 'atmos' commands
# 'atmos vendor pull -c infra/vpc' or 'atmos vendor pull --component infra/vpc'

apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-vendor-config
  description: Source and mixins config for vendoring of 'vpc' component
spec:
  source:
    # Source 'uri' supports the following protocols: Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP,
    # and all URL and archive formats as described in <https://github.com/hashicorp/go-getter>
    # In 'uri', Golang templates are supported  <https://pkg.go.dev/text/template>
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
    version: 1.91.0
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    # <https://en.wikipedia.org/wiki/Glob_(programming)>
    # <https://github.com/bmatcuk/doublestar#patterns>
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"

  # mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # mixins are processed in the order they are declared in the list
  mixins:
    # <https://github.com/hashicorp/go-getter/issues/98>
    # Mixins 'uri' supports the following protocols: local files (absolute and relative paths), Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP
    # - uri: <https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf>
    # This mixin `uri` is relative to the current `vpc` folder
    - uri: ../../mixins/context.tf
      filename: context.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(and yes, the docs still need improvements, but it takes an enormous amount of time to describe every single feature; we've spent months on the docs already, and they're still not 100% complete except for some sections)

1
kevcube avatar
kevcube

Yeah I get it - documentation is my least favorite part of shipping a product. Just wanted to share my experience

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check component.yaml in the examples here https://github.com/cloudposse/atmos/tree/master/examples/complete/components/terraform/infra, it describes some different use cases (but the schema is the same as shown here https://atmos.tools/core-concepts/components/vendoring)

kevcube avatar
kevcube

I would recommend for vendor diff to at least add stub documentation explaining what you told me, or just disable the command via a flag in the CLI

kevcube avatar
kevcube

And finally one other piece of feedback which I will start a new thread for

kevcube avatar
kevcube

I feel one of the core parts of the atmos experience is reusing open source code, which is great. But things are "vendored" in using a git clone at a point in time, with changes made via mixins (otherwise a vendor pull will silently overwrite your changes), and this makes it very hard to upstream. I know any change to atmos in this area is likely a large architectural change, but if components were somehow vendored in as git submodules, or in some way that would enable us to use existing tooling to maintain changes against upstream, that would help to foster upstreaming/contribution

kevcube avatar
kevcube

overall I like opportunities to reduce SLOC that I am maintaining in infra projects, which is why i use OSS modules as much as possible, but vendoring in a component can quickly add 1k+ lines to my PR that I worry my team won’t have time to properly review

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, thanks for the feedback. atmos (at least currently) is not a tool to upstream and downstream changes to TF components. It has just one simple command, vendor pull, to get a component the first time w/o copying it manually

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is just a very small part of the whole vendoring/upstream/downstream process

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the larger part is how to keep the components in sync, always up to date, and always tested, and if a user changes something, how to make sure that it works for all other people

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is a very complicated process (and not related to atmos)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are discussing this internally

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and we need a lot of things to be implemented before this is ready for prime time

kevcube avatar
kevcube

do you find people using atmos vendor commands in CI to make sure that their component library matches the remote?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. automatic testing of all components, all new changes, etc. (including on platforms like Spacelift)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


do you find people using atmos vendor commands in CI to make sure that their component library matches the remote?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

see the above, this is not the main part, the more important part is how to keep hundreds of components up to date and tested

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in our experience, people get a component and change it, then they want to upstream it, and then everything is different and not tested

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we need a process for this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with hundreds of components, something always changes somewhere

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I have not seen a single infra yet where a component was exactly the same as upstream (well, except for very simple components where there is nothing to change)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we would be happy to add additional features to atmos to help with all of this, once we figure out the process and all the details

kevcube avatar
kevcube

great thank you. yeah as I’ve been learning to use it, i have had some pain points and other desires for the tool, and I thought it would be best to communicate those up to the maintainers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes thank you for all the feedback

kevcube avatar
kevcube

and third, would you consider decoupling the terraform commands (apply, plan) from the atmos terraform command?

I would rather do atmos terraform -c component -s stack plan because then it feels more like atmos will explicitly pass everything after the stack to terraform. When it is atmos terraform plan -c .. -s .., I wonder what atmos is doing under the hood, and whether it truly supports every terraform command (including future commands if atmos is not updated)

a minor UX nitpick that I have learned to live with but was confusing at first

kevcube avatar
kevcube

it especially feels weird to do atmos terraform apply -c component -s stack -auto-approve (I only just now learned about atmos deploy)

because I am passing flags so far removed from the terraform that I assume apply is going to atmos instead and I wonder if it will forward my auto-approve

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we try to support all terraform commands transparently

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

including the future ones

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe something will not work, but we review it on a case by case basis

kevcube avatar
kevcube

yes and terraform’s CLI has been very good about compatibility so far so I assume it will be alright. My concern is mainly the ordering of the commands. And I am familiar with it now, but as a beginner it was counterintuitive. It would probably be more disruptive to change it at this stage.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i would say it’s a lot of work and disruption :)

kevcube avatar
kevcube

Also regarding stacks - is there any plan to give the ability at the atmos level to retrieve outputs from one stack to be passed as inputs to another stack? I prefer to never use remote_state references in native terraform, I come from a terragrunt world where dependencies’ outputs can be transformed into inputs easily, and that’s something I’m missing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

currently we support this abstraction over remote state https://atmos.tools/core-concepts/components/remote-state

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component,

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no need to use native TF data sources with a ton of configs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and yes, something like you mentioned can be added as well - just need to figure out the interface first and how all of that would fit together (e.g. do we read the real state, or just use vars from the stack config, which will not work in all cases anyway)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

again, all your feedback is greatly appreciated @kevcube

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(this is much simpler in principle than the vendoring/upstreaming/downstreaming thing)

el avatar

hey all :wave: is there a way to specify Terraform provider/version information in a stack file? In particular I’d like to configure the TF Kubernetes provider with the correct context without having to pass it in as a variable (e.g. generating a providers.tf.json the same way atmos can generate backend.tf.json).

el avatar

alternatively I guess I could use the KUBE_CTX variable in my atmos config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

generating providers.tf.json is not currently supported, but you can use regular variables in provider.tf and define them in the stack config YAML
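A minimal sketch of that approach for the Kubernetes provider (the variable, component, and context names are illustrative):

# providers.tf in the component
variable "kube_context" {
  type        = string
  description = "kubeconfig context to use"
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.kube_context
}

# stack config YAML
components:
  terraform:
    my-k8s-component:
      vars:
        kube_context: "my-cluster-context"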

el avatar

gotcha, thanks! that’s what I’m doing currently

2023-03-07

2023-03-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hi folks, I have a question about Atmos, as I was going over the concepts and trying to map them to what has been said in various office hours over the past year or so.

As per the Atmos docs, you have a components/terraform directory, and the full list points to https://github.com/cloudposse/terraform-aws-components. My question is:

what was the reasoning behind keeping all those modules local instead of consuming them from your private/public registry? The child modules themselves do point to the registry.

2023-03-14

kevcube avatar
kevcube

atmos terraform output -s path/to/stack component -json doesn't work; atmos seemingly interprets the -s in -json as the stack flag and looks for a stack called "on"

kevcube avatar
kevcube

Searched all stack YAML files, but could not find config for the component 'component' in the stack 'on'.

kevcube avatar
kevcube

I was only able to get this to work with .. atmos terraform "output -json" -s path/to/stack component

kevcube avatar
kevcube

Hi,

I want to extract some JSON properties from the terraform output, but I'm having some trouble with it. When I try to save the terraform output to a file:

atmos terraform "output -json > output.json" main -s security --skip-init

I get an error message, because Atmos recognizes it as a single terraform command, not as a command plus arguments:

│ Error: Unexpected argument
│ 
│ The output command expects exactly one argument with the name of an output variable or no arguments to show all outputs.

The alternative option also doesn't fit my requirements:

atmos terraform "output -json" main -s security --skip-init > output.json

because output contains not only terraform outputs, but also atmos logs like

...
Executing command:
/usr/bin/terraform output -json
...

Is there a way to pass the > output.json redirection through to terraform in Atmos, or maybe turn off Atmos' own stdout for a specific workflow step? Does Atmos allow this natively? The final goal is to read the service principal password created in terraform and call az login to switch the user before running the next step.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, we know about the issues. Those are different issues; the one in GH is about redirecting only the TF output to a file using >, for which we need to add log levels to atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll fix those

Viacheslav avatar
Viacheslav

@kevcube try atmos terraform output -s path/to/stack component --skip-init --args --json

2023-03-15

i5okie avatar

hi, could someone please tell me what tool he's using for authentication in this video? https://youtu.be/0dFEixCK0Wk?t=2525

i5okie avatar

ah. lol i tried so many iterations on google. that was not one i thought of. thanks!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Authenticate with AWS using Leapp | The Cloud Posse Developer Hub

Learn how to use Leapp to supply AWS credentials to tools used within Geodesic.

Release notes from atmos avatar
Release notes from atmos
09:54:36 PM

v1.31.0 what Fix an issue when the terraform components were in subfolders in the components/terraform folder deeper than the second level Add atmos describe dependants command Update docs why

Fix an issue when the terraform components were in subfolders in the components/terraform folder deeper than second level. For example: https://user-images.githubusercontent.com/7356997/224600584-d77e3fe6-a7a4-4d6d-a691-7cd2a5603963.png The…

Release v1.31.0 · cloudposse/atmos

what

Fix an issue when the terraform components were in subfolders in the components/terraform folder deeper than the second level Add atmos describe dependants command Update docs

why

Fix an i…

fast_parrot1

2023-03-17

johncblandii avatar
johncblandii

my-ecs-service inherits ecs-service/with-lb inherits ecs-service/default.

This doesn't work unless, in my-ecs-service, I also add an inherits line for ecs-service/default. It would seem that if I inherit something, I should also inherit what it inherits.

Is this the expected functionality? Could this apply the deep merge on inheritance like it does for other parts?

NOTE: I get that this is an array, and YAML arrays aren't merged, so I'm asking if something else could be done here.

YAML example:

components:
  terraform:
    my-ecs-service:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/with-lb

--

components:
  terraform:
    ecs-service/with-lb:
      metadata:
        component: ecs-service
        inherits:
          - ecs-service/default
        type: abstract

--

components:
  terraform:
    ecs-service/default:
      metadata:
        component: ecs-service
        type: abstract
RB avatar

Yes, this makes sense. Like you said, the inherits key is a list, and lists are stomped on during a deep merge. I don't believe there is any other way to do this for now other than to list all the inherits

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I believe @Andriy Knysh (Cloud Posse) is working on something for this

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Matt Calhoun

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@johncblandii from the 5 types of inheritance (see https://www.simplilearn.com/tutorials/cpp-tutorial/types-of-inheritance-in-cpp)

5 Different Types of Inheritance in C++ With Examples | Simplilearn

Explore the different types of inheritance in C++, such as single multiple multilevel hierarchical and hybrid inheritance with examples. Read on!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos currently supports two (Single and Multiple) https://atmos.tools/core-concepts/components/inheritance

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in your example, it’s “Hierarchical Multilevel Inheritance”, and I’m working on it right now (you will be able to use it next week)

party_parrot2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

with what we have now (“Multiple Single-Level Inheritance”), this

my-ecs-service inherits ecs-service/with-lb inherits ecs-service/default
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can be modeled like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
my-ecs-service:
  inherits:
    - ecs-service/default
    - ecs-service/with-lb

# The order is important, the items are processed in the order they are defined in the list, see  <https://atmos.tools/core-concepts/components/inheritance>
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(yes, you need to specify all inherited components in the list since Single-Level inheritance is not transitive)

johncblandii avatar
johncblandii

as usual, your timing is impeccable @Andriy Knysh (Cloud Posse).

Next week would be perfect. We’re getting a lot of new people into doing stacks and I’d like to avoid confusion on their behalf.

fiesta_parrot1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

v1.32.0 what Add Hierarchical Inheritance and Multilevel Inheritance for Atmos Components Update docs https://atmos.tools/core-concepts/components/inheritance why Allow creating complex hierarchical stack configurations and make them DRY In Hierarchical Inheritance, every component can act as a base component for one or more child (derived) components, and each child component can inherit from one or more base…

johncblandii avatar
johncblandii

I saw that. Need to dig in

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

johncblandii avatar
johncblandii

oh snap…* in import lines?

Viacheslav avatar
Viacheslav

A quick question about atmos vendor pull authentication. Currently go-getter supports basic and header authentication for http(s) sources https://github.com/hashicorp/go-getter/blob/main/README.md#http-http. But I didn't find anything about it in the Atmos vendor utils: https://github.com/cloudposse/atmos/blob/c1679524cf66d241e0426672bfadbee6447aed69/internal/exec/vendor_utils.go#L196 Does Atmos support any kind of authentication for vendoring? In my case I need to authenticate to JFrog Artifactory (both basic and header auth are supported) to download a zip with a component.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

basic auth should be supported if you add username:password@ to the hostname in the URL - but that's prob a bad idea since you will have the username/pass in the repo in YAML
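For illustration only (the host and path are made up, and committing credentials like this is exactly the problem mentioned), the source in component.yaml would look like:

spec:
  source:
    uri: https://username:password@artifactory.example.com/artifactory/tf-components/my-component.zip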

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

headers are not supported

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, the problem here is where to store the secrets. Maybe we could use ENV vars. This is a separate project that requires consideration (designing it first, e.g. where those secrets come from, etc.)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Viacheslav this was one reason we chose to use go-getter, but as @Andriy Knysh (Cloud Posse) alluded to, we're not sure of the most practical way to pass secrets that is not opinionated about the platform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you propose some suggestions, we’ll take that into account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(perhaps open a github issue for it under atmos)

1
Viacheslav avatar
Viacheslav

@Erik Osterman (Cloud Posse) @Andriy Knysh (Cloud Posse) thanks guys! Passing secrets around is always a controversial topic.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our component.yaml already supports go templates. Maybe it can read an ENV?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If not, maybe something easy to add if it solves this use-case

Amos avatar

Hi @Erik Osterman (Cloud Posse) / @Andriy Knysh (Cloud Posse), JFrog Artifactory under the hood supports the native authentication mechanism for their private terraform registry implementation.

Option 1: After running terraform login <some terraform private registry>, a credentials file is created under ~/.terraform.d/credentials.tfrc.json. If Atmos can look for this path on component initialization, it might solve the problem for any terraform registry use case, as long as it works with native terraform registry authentication.

Reference: ==> Terraform Credentials Storage

Option 2 (might be the preferred one): Atmos would fetch the following environment variable in the case of private registries, e.g. TF_TOKEN_cloudposse_jfrog_io=<terraform-private-registry-token>; terraform translates the URI to cloudposse.jfrog.io.

Reference ==> Environment Variable Credentials

I’ve created this PR

Command: login | Terraform | HashiCorp Developer

The terraform login command can be used to automatically obtain and save an API token for Terraform Cloud, Terraform Enterprise, or any other host that offers Terraform services.

CLI Configuration | Terraform | HashiCorp Developer

Learn to use the CLI configuration file to customize your CLI settings, including credentials, plugin caching, provider installation methods, etc.

#369 [Enhancement] Vendor pull from terraform private registries

Describe the Feature

This feature is an enhancement for Atmos Components to be able to fetch terraform modules from private registries based on https.

Expected Behavior

Ability to fetch terraform modules from private terraform registries

Use Case

Many companies use private registries and repositories to:

• Decoupled from vendor servers.
• Fasten their pipeline builds.
• Avoid breaking routine work when problems appear with vendor servers.
• Improve security.
• etc…

E.g:

• Docker
• Apt
• Rpm
• Npm
• Maven
• Helm
• Terraform
• etc…

Describe Ideal Solution

Option 1:

After running terraform login <some terraform private registry>, a credentials file is created under ~/.terraform.d/credentials.tfrc.json.
If Atmos can look for this path on component initialization, it might solve the problem for any terraform registry use case as long as it works with native terraform registry authentication.

Reference: ==> Terraform Credentials Storage

Option 2 (might be the preferred):

Atmos would fetch the following environment variable in the case of private registries, TF_TOKEN_cloudposse_jfrog_io=<terraform-private-registry-token>; terraform translates the URI to cloudposse.jfrog.io.

Reference: ==> Environment Variables Credentials

Alternatives Considered

No response

Additional Context

No response

johncblandii avatar
johncblandii

So, replacing values in a stack yaml is shown on https://atmos.tools/core-concepts/stacks/imports.

it isn't working for me on v1.31.0. Do we have to use path/context on the import for it to work? (meaning it can't read the values without being passed those values)

yaml:

...
            map_environment:
              AWS_ENV: "{{ .stage }}"
            map_secrets:
              NEW_RELIC_LICENSE_KEY: "/{{ .stage }}/newrelic/license_key"
...

describe component:

      map_environment:
        APP_NAME: report-generator
        AWS_ENV: '{{ .stage }}'
        NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: true
        NEW_RELIC_ENABLED: true
      map_secrets:
        NEW_RELIC_LICENSE_KEY: /{{ .stage }}/newrelic/license_key
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

johncblandii avatar
johncblandii

well, it doesn’t look like path/context works either.

Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

johncblandii avatar
johncblandii

any thoughts @Andriy Knysh (Cloud Posse)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Go templates are supported only in imports, and you have to provide all the values for the templates in the context for each import

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t provide the values, Atmos does not know anything about how to get them from any other place

johncblandii avatar
johncblandii
import:
  - path: catalog/terraform/ecs-service/default
    context:
      stage: whatever
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, looks good

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue?

johncblandii avatar
johncblandii
        AWS_ENV: '{{ .stage }}'
johncblandii avatar
johncblandii

^ generated value

johncblandii avatar
johncblandii

from describe component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

w/o seeing the whole solution it's not possible to understand the issue

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    # Parameterize Atmos component name
    "eks-{{ .flavor }}/cluster":
      metadata:
        component: "test/test-component"
      # Parameterize variables
      vars:
        enabled: "{{ .enabled }}"
        name: "eks-{{ .flavor }}"
        service_1_name: "{{ .service_1_name }}"
        service_2_name: "{{ .service_2_name }}"
        tags:
          flavor: "{{ .flavor }}"

johncblandii avatar
johncblandii

imports look like this:

plat-ue1-dev -> mixins/services/all -> mixins/services/service1/all -> mixins/services/service1/app1 -> ecs-service/default

johncblandii avatar
johncblandii

ok…checking the link

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
import:
  - path: mixins/region/us-west-2
  - path: orgs/cp/tenant1/test1/_defaults

  # This import with the provided context will dynamically generate
  # a new Atmos component `eks-blue/cluster` in the current stack
  - path: catalog/terraform/eks_cluster_tmpl
    context:
      flavor: "blue"
      enabled: true
      service_1_name: "blue-service-1"
      service_2_name: "blue-service-2"

  # This import with the provided context will dynamically generate
  # a new Atmos component `eks-green/cluster` in the current stack
  - path: catalog/terraform/eks_cluster_tmpl
    context:
      flavor: "green"
      enabled: false
      service_1_name: "green-service-1"
      service_2_name: "green-service-2"

johncblandii avatar
johncblandii

yeah, pretty much what i have. maybe it is too many imports down?

johncblandii avatar
johncblandii

or maybe if an import of mine imports the same file it is a problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have to provide the context to each import in the chain

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Stack Imports | atmos

Imports are how we reduce duplication of configurations by creating reusable baselines. The imports should be thought of almost like blueprints. Once

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the point is, Atmos takes an import like

path: "catalog/terraform/eks_cluster_tmpl_hierarchical"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the context for the import

context:
      # Context variables for the EKS component
      flavor: "blue"
      enabled: true
      service_1_name: "blue-service-1"
      service_2_name: "blue-service-2"
      # Context variables for the hierarchical imports
      # `catalog/terraform/eks_cluster_tmpl_hierarchical` imports other parameterized configurations
      tenant: "tenant1"
      region: "us-west-1"
      environment: "uw1"
      stage: "test1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and just calls Go template functions on the imported file (it’s the template) providing the context as the data for the template
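In other words, the rendering works roughly like this (simplified from the example above):

# catalog/terraform/eks_cluster_tmpl - the imported file is just a Go template
components:
  terraform:
    "eks-{{ .flavor }}/cluster":
      vars:
        name: "eks-{{ .flavor }}"

# with the import context `flavor: "blue"`, the rendered YAML becomes
components:
  terraform:
    "eks-blue/cluster":
      vars:
        name: "eks-blue"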

johncblandii avatar
johncblandii

and can it use the global vars to pass them in?

johncblandii avatar
johncblandii

or would they be available without context?

johncblandii avatar
johncblandii

backstory: all of the ECS services have similar values, but they differ by stage

johncblandii avatar
johncblandii

goal: create the env var at the ecs-service/default level instead of baking it into Terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I understand what you are talking about, and I was thinking about that for some time now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

instead of providing the (hardcoded) context to every import, you are talking about some global “context” to be used for all possible imports

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I could be missing something, but that does not work in principle (I need to think about it more)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we already have that global context, it’s the vars section

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but… to get that final vars section, Atmos needs to process all imports and all inheritance chains

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now, if we want to use the global context before Atmos processes all imports with Go templates, then it’s a chicken and egg problem: to get the final vars, it needs to process all the imports (including imports with templates); but to process all the imports with templates, it already needs the final vars (global context)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why it’s not supported yet, we need to think about it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, from what you want “backstory: all of the ECS services have similar values, but they differ by stage” - we’ve implemented many use cases like that already, can help you with it (if you send your full solution or add me to the repo)

johncblandii avatar
johncblandii

Exactly what I'm talking about… a global context. I get why it's problematic, which is why I wanted to ask, so I know the boundaries when teaching this internally.

johncblandii avatar
johncblandii

Hopefully we’re getting rolling very soon which will open up the repo for a full review.

I'd definitely love to get feedback

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the global context derived from the final vars is a chicken and an egg problem as I see it)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we can help reviewing it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

“backstory: all of the ECS services have similar values, but they differ by stage” - there are a few ways of doing it in Atmos w/o inventing new features

johncblandii avatar
johncblandii

ok sweet. definitely looking forward to it

johncblandii avatar
johncblandii

if you have any links to relevant implementations, I can pick up the pieces on how we can do it

johncblandii avatar
johncblandii

the biggest part for this will come soon and I’ll probably just use the path/context approach for those ssm params

2023-03-18

johncblandii avatar
johncblandii

context: all non-prod accounts (6-8 accounts) use 1 value and prod has a different value.

For the Go template imports, is there a good way to provide a default value?

I’d like to not have to copy the value 6-8 times and would prefer to provide a default value then just override in prod the 1 time.

I have to copy this N times and tweak for prod right now:

  - path: "ssm-parameters-tmpl"
    context:
      license_path: nonprod.xml

I could prob do abstract versions then just inherit the default and override specifics. A simpler version in 1 file would be ideal.

possible options (in the tmpl file):

context:
  defaults:
    license_path: nonprod.xml

…or inline…

"/{{ .stage }}/license/path":
  value: "{{ .license_path | nonprod.xml}}"
  description: iText license path
  overwrite: true
  type: String

The difference with the former is it allows a DRY solution in case a value is needed multiple times. Support for both would be ideal.

Thoughts @Andriy Knysh (Cloud Posse)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

imports with Go templates don’t support default values (as of now)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but all of that can be done w/o using Go templates

johncblandii avatar
johncblandii

right. just curious if any patterns emerged

johncblandii avatar
johncblandii

or if it would make sense to do so

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are other patterns that can be used (w/o using Go templates in the imports)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you want to use them, you can provide default values in the templates https://stackoverflow.com/questions/44532017/how-can-i-add-a-default-value-to-a-go-text-template

How can I add a default value to a go text/template?

I want to create a golang template with a default value that is used if a parameter is not supplied, but if I try to use the or function in my template, it gives me this error:

template: t220:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

whatever Go templates support (and they support a lot of features), can be used

johncblandii avatar
johncblandii

oh, we can use or in here?

johncblandii avatar
johncblandii

ohhhh snap!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you provide default values in the templates, that means you don’t have to specify them in context every time

johncblandii avatar
johncblandii
08:58:43 PM

my man

johncblandii avatar
johncblandii

going to toy with this a bit

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Learn Go Template Syntax | Nomad | HashiCorp Developer

Learn the syntax for Go’s text/template package, the basis of the template processing engines in Nomad, Consul, and Vault.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use or, and, if, else etc.
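So, taking the ssm-parameters example above, a default could be expressed with or (a sketch, relying on text/template's or returning the first non-empty argument):

"/{{ .stage }}/license/path":
  # falls back to nonprod.xml when license_path is not provided in the import context
  value: '{{ or .license_path "nonprod.xml" }}'
  description: iText license path
  overwrite: true
  type: String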

johncblandii avatar
johncblandii

so basically we can do these like helmfiles now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the only thing you need to keep in mind is that after the template is processed, the result should be a valid YAML

1
johncblandii avatar
johncblandii

{{ if eq .stage "prod" }} works

johncblandii avatar
johncblandii
09:12:48 PM

nick cage

1
johncblandii avatar
johncblandii

Go templates in atmos stacks are a game changer!

1
1

2023-03-20

Release notes from atmos avatar
Release notes from atmos
07:44:37 PM

v1.32.0 what Add Hierarchical Inheritance and Multilevel Inheritance for Atmos Components Update docs https://atmos.tools/core-concepts/components/inheritance why Allow creating complex hierarchical stack configurations and make them DRY In Hierarchical Inheritance, every component can act as a base component for one or more child (derived) components, and each child component can inherit from one or more base…

Release v1.32.0 · cloudposse/atmos

what

Add Hierarchical Inheritance and Multilevel Inheritance for Atmos Components Update docs https://atmos.tools/core-concepts/components/inheritance

why

Allow creating complex hierarchical sta…

Component Inheritance | atmos

Component Inheritance is one of the principles of Component-Oriented Programming (COP)

1

2023-03-21

Release notes from atmos avatar
Release notes from atmos
03:24:38 PM

v1.32.1 what Update atmos describe affected command why Handle atmos absolute base path when working with the cloned remote repo in the describe affected command Absolute base path can be set in the base_path attribute in atmos.yaml, or using the ENV var ATMOS_BASE_PATH (as it’s done in geodesic) If the atmos base path is absolute, find the relative path between the local repo path and the atmos base path. This relative path (the difference) is then used to join with the remote (cloned) repo path

Release v1.32.1 · cloudposse/atmos

what

Update atmos describe affected command

why

Handle atmos absolute base path when working with the cloned remote repo in the describe affected command Absolute base path can be set in the bas…


johncblandii avatar
johncblandii

is there a plan to add in graphviz or similar support to map out component connections, @Andriy Knysh (Cloud Posse)?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a complicated topic, but yes we were thinking about a user interface to help with visualizing the whole infra

johncblandii avatar
johncblandii

absolutely complicated. good to know it is a potential offering

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Especially now with multiple levels of multiple inheritance, sheesh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Who would want that?!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

johncblandii avatar
johncblandii

johncblandii avatar
johncblandii

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, snap, looks like you already implemented the graphviz for this?!

johncblandii avatar
johncblandii

nah, that was just google. LOL

1

2023-03-22

Release notes from atmos avatar
Release notes from atmos
06:54:35 PM

v1.32.2 what & why Converted ‘Get Help’ nav bar item to CTA button More user friendly 404 page

Release v1.32.2 · cloudposse/atmos

what & why

Converted ‘Get Help’ nav bar item to CTA button More user friendly 404 page


2023-03-24

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Everybody, is there a way to declare an outside file path as a component dependency? I'm using a somewhat eccentric terraform provider that lets me point all of my actual configuration to a relative file path that falls outside of my atmos directories. When I run "atmos describe affected" on a PR made to that outside directory, the change doesn't get picked up as impacting my stack, and my workflow doesn't identify that a new plan is needed

jbdunson avatar
jbdunson

Thanks for bringing this over! Still wrestling with it

Hey Everybody, is there a way to declare an outside file path as a component dependency? I'm using a somewhat eccentric terraform provider that lets me point all of my actual configuration to a relative file path that falls outside of my atmos directories. When I run "atmos describe affected" on a PR made to that outside directory, the change doesn't get picked up as impacting my stack, and my workflow doesn't identify that a new plan is needed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jbdunson what dependencies are those? Terraform component deps?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that describe affected takes into account all dependencies from YAML stack configs (all sections) and the component terraform folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anything changed in the component folder will make the component affected

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos does not know anything about random files somewhere in the file system, and it can’t know if your terraform code depends on some files outside of the component folder

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you place your files in a subfolder in the component folder, that should work

1
jbdunson avatar
jbdunson

Great! thanks for the clarity :)

jbdunson avatar
jbdunson

@Andriy Knysh (Cloud Posse) is the subfolder pickup a recent change? I’m using atmos v1.26.0

Currently my component looks like

component/
  main.tf
  versions.tf
  providers.tf
  non_tf_subfolder/
    subfolder/
      file1.txt
      file2.txt

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was added some time ago

jbdunson avatar
jbdunson

When I make a change to file1 or file2, describe affected doesn't seem to pick it up; is it because I have additional subfolders?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should check everything in the component folder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you run atmos describe affected --verbose=true, do you see those files changes in the output?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also note, you have to not only change a file, you have to commit it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s all about git

jbdunson avatar
jbdunson

I do - the path to file.txt is outputted but the “Affected components and stacks:” array is blank

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos uses go-git to get a list of changed files

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

show the output from the command

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll test this use case. If it’s not working, we’ll fix it in the next release. For now, if you put the files into the component folder (not in a subfolder), it should work ok

1
jbdunson avatar
jbdunson
jbdunson avatar
jbdunson

apologies for the delay, hope this helps ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, thanks, we’ll test it (and cut a new release with a fix if not working now)

jbdunson avatar
jbdunson

cool, thank you for looking into it! Appreciate the support

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
jbdunson avatar
jbdunson

Awesome - will give it a go, thanks for the quick turnaround @Andriy Knysh (Cloud Posse)

jbdunson avatar
jbdunson

@Andriy Knysh (Cloud Posse) can confirm the feature works well with our use case!

1
Release notes from atmos avatar
Release notes from atmos
08:14:38 PM

v1.32.3 what & why Use consolidated search index between atmos.tools and docs.cloudposse.com #351 Fix panic on “index out of range” in terraform two-words…

Release v1.32.3 · cloudposse/atmos

what & why

Use consolidated search index between atmos.tools and docs.cloudposse.com #351 Fix panic on “index out of range” in terraform two-words commands (if atmos component is not provided on t…

Consolidated index by zdmytriv · Pull Request #351 · cloudposse/atmos

what

Use consolidated index between atmos.tools and docs.cloudposse.com

why

Use consolidated index between atmos.tools and docs.cloudposse.com

references


2023-03-25

Release notes from atmos avatar
Release notes from atmos
10:14:38 PM

v1.32.4 what Update atmos describe affected command why Check if not only the files in the component folder itself have changed, but also the files in all sub-folders at any level test For example, if we have the policies sub-folder in the component folder components/terraform/top-level-component1, and we have some files in the sub-folder (e.g. components/terraform/top-level-component1/policies/policy1.rego), and if the files changed, atmos describe affected would mark all Atmos components that use…

Release v1.32.4 · cloudposse/atmos

what

Update atmos describe affected command

why

Check if not only the files in the component folder itself have changed, but also the files in all sub-folders at any level

test For example, if …


2023-03-27

2023-03-28

kevcube avatar
kevcube

Is there a good way to read the component name of a stack from within that component? For example, I need the equivalent of TF_VAR_spacelift_stack_id from here, but in a local run

Environment - Spacelift Documentation

This article describes the environment in which each workload (run, task) is executed

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos describe component xxx -s yyy

Environment - Spacelift Documentation

This article describes the environment in which each workload (run, task) is executed

kevcube avatar
kevcube

But I am looking to reference that component name inside the terraform code.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, terraform code is supposed to be generic and not related to the configuration (separation of logic and config)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but you can use remote-state to get the remote state of any atmos component
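A minimal sketch of the remote-state pattern (the component name is illustrative; see the remote-state docs linked earlier in the channel):

module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"   # pin to a suitable release

  # Atmos component whose outputs we want to read
  component = "vpc"

  # assumes the standard null-label context.tf is present in the component
  context = module.this.context
}

# the outputs are then available as e.g. module.vpc.outputs.vpc_id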

kevcube avatar
kevcube

yeah, but in this situation I need to alter the spacelift stack from within the spacelift stack, to add GCP credentials. Feature request for atmos: expose some informational environment variables like spacelift does in that link

kevcube avatar
kevcube

and I need to run from local first, because I am privileged in GCP, so I can't rely entirely on spacelift's variables in this case

kevcube avatar
kevcube

@RB thanks, that’s exactly what I’m talking about.

jose.amengual avatar
jose.amengual

so I have this

block_device_mappings:
  - device_name: "/dev/sda1"
    no_device: false
    virtual_name: null
    ebs:
      volume_size: 20
      delete_on_termination: true
      encrypted: true
      volume_type: "gp2"
      iops: null
      kms_key_id: null
      snapshot_id: null

in a type: abstract component and then I use it like so:

1
jose.amengual avatar
jose.amengual
asg/pepe:
      metadata:
        component: asg
        type: real
        inherits:
          - asg/pepe/defaults
      vars:
        name: "pepe"
        enabled: true
        block_device_mappings:
        - device_name: /dev/sda1
          ebs:
            volume_size: 200
jose.amengual avatar
jose.amengual

but then if I describe the component

jose.amengual avatar
jose.amengual

the block mapping ends up like:

 block_device_mappings:
  - device_name: /dev/sda1
    ebs:
      volume_size: 200
jose.amengual avatar
jose.amengual

all the other options removed

jose.amengual avatar
jose.amengual

I guess this is because it is a map?

jose.amengual avatar
jose.amengual

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cannot deep merge lists

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a list, we don’t merge lists for many reasons. in this case you need to copy all the settings from one component to the other
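
For illustration only, a minimal sketch (not from the thread) of what that copy looks like for the asg/pepe component above, where the entire list is repeated and only the desired value is changed:

asg/pepe:
  metadata:
    component: asg
    type: real
    inherits:
      - asg/pepe/defaults
  vars:
    name: "pepe"
    enabled: true
    # lists replace each other on merge, so every field is repeated here
    block_device_mappings:
      - device_name: "/dev/sda1"
        no_device: false
        virtual_name: null
        ebs:
          volume_size: 200
          delete_on_termination: true
          encrypted: true
          volume_type: "gp2"
          iops: null
          kms_key_id: null
          snapshot_id: null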

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe can use go templates as a workaround

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Atmos supports those natively
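
A rough sketch of that workaround, assuming Atmos’s Go-templated imports with a context map (the template file name and the volume_size context key are hypothetical): keep the full list in one template and parameterize only the scalar that changes.

# stacks/catalog/asg/defaults.yaml.tmpl (hypothetical template)
components:
  terraform:
    asg/pepe/defaults:
      metadata:
        component: asg
        type: abstract
      vars:
        block_device_mappings:
          - device_name: "/dev/sda1"
            no_device: false
            virtual_name: null
            ebs:
              volume_size: {{ .volume_size }}   # filled in by the import context
              delete_on_termination: true
              encrypted: true
              volume_type: "gp2"
              iops: null
              kms_key_id: null
              snapshot_id: null

# in the stack that needs a bigger root volume (hypothetical import)
import:
  - path: catalog/asg/defaults.yaml.tmpl
    context:
      volume_size: 200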

jose.amengual avatar
jose.amengual

what other types do not deep merge?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Lists are the only thing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s because what does it mean? Do you append the items? Do you prepend the items? Do you merge on index in the list?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s no one way to do it. But with maps and scalars it’s straightforward.

jose.amengual avatar
jose.amengual

yes, it is hard

jose.amengual avatar
jose.amengual

you guys should not allow lists ever again as input vars

jose.amengual avatar
jose.amengual

lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So it’s not that it’s hard, it’s easy to implement any one of those algorithms. But someone is going to want the behavior of appending. Someone is going to want prepending. And another is going to want deep merging on index.

jose.amengual avatar
jose.amengual

yes and they all have tradeoffs

this1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in all the latest terraform modules we use maps everywhere

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for all vars

jose.amengual avatar
jose.amengual

this one is the asg module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not the latest

jose.amengual avatar
jose.amengual

not updated

jose.amengual avatar
jose.amengual

the asg module has this:

variable "block_device_mappings" {
  description = "Specify volumes to attach to the instance besides the volumes specified by the AMI"

  type = list(object({
    device_name  = string
    no_device    = bool
    virtual_name = string
    ebs = object({
      delete_on_termination = bool
      encrypted             = bool
      iops                  = number
      kms_key_id            = string
      snapshot_id           = string
      volume_size           = number
      volume_type           = string
    })
  }))
}

jose.amengual avatar
jose.amengual

how do you go about changing that to a map?

jose.amengual avatar
jose.amengual

do you have an example of some of the other modules you guys updated?

jose.amengual avatar
jose.amengual

I think I found something

jose.amengual avatar
jose.amengual
variable "block_device_mappings" {
  description = "Specify volumes to attach to the instance besides the volumes specified by the AMI"
  type = map(object({
    device_name  = string
    no_device    = bool
    virtual_name = string
    ebs = object({
      delete_on_termination = bool
      encrypted             = bool
      iops                  = number
      kms_key_id            = string
      snapshot_id           = string
      volume_size           = number
      volume_type           = string
    })
  }))
  default = {}
}

locals {
  block_device_map = { for bdm in var.block_device_mappings : bdm.device_name => bdm }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, it looks good and you can use it (but modifying the public module to use it would take some effort to maintain backwards compatibility)

jose.amengual avatar
jose.amengual

I have a wrapper already so I’m good
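
To round out the thread, the payoff of the map-typed variable is that Atmos can deep-merge overrides the way the original config intended. A minimal sketch, following the variable and local above (the root key is an arbitrary, hypothetical logical name):

asg/pepe/defaults:
  metadata:
    component: asg
    type: abstract
  vars:
    block_device_mappings:
      root:                        # keyed by a logical name instead of list position
        device_name: "/dev/sda1"
        no_device: false
        virtual_name: null
        ebs:
          volume_size: 20
          delete_on_termination: true
          encrypted: true
          volume_type: "gp2"
          iops: null
          kms_key_id: null
          snapshot_id: null

asg/pepe:
  metadata:
    component: asg
    type: real
    inherits:
      - asg/pepe/defaults
  vars:
    # maps deep-merge, so only the nested value changes and everything else is preserved
    block_device_mappings:
      root:
        ebs:
          volume_size: 200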

2023-03-29

jose.amengual avatar
jose.amengual
Atmos - Everything you need to create color palettes

Use professional tools to find colors, generate uniform shades and create your palette in minutes.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

lol

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In the Atmos “Hierarchical Layout” there seem to be a lot of assumptions about the way we organize our OUs and accounts. I assume this is because it has been a working strategy for Cloud Posse.

However, it seems to be making it much more difficult to adopt into our tooling.

E.g. the hierarchical layout assumes that the accounts living directly under each OU are only separate stages of a single account. This is because the stage variable from the name_pattern is tied to the stack living directly under an OU (tenant). You can change the name_pattern, but it won’t break the overall assumption that stacks actually cannot be per-account. The assumption is stricter than that, because we’re limited to the following variables in the name_pattern:
• namespace
• tenant
• stage
• environment

Case: Sandbox accounts. What if we wanted to provision defaults for sandbox accounts for our developers? These sandbox accounts might live in a Sandbox OU (tenant), but they aren’t necessarily separate stages of one another at all. There is no feasible strategy with the name_pattern without breaking the behavior of other stacks. One option could be to combine our account name and region into the environment variable (possibly without side effects?) like so: sandbox-account-1-use1.yaml. But then we would be left with several directories where nesting would be better organized, like so: sandbox-account-1/use1.yaml

I can only think that we should have an additional variable in the name_pattern for example: name to truly identify the account.

I hope I’ve missed something and Atmos does have the flexibility for this. Any advice would be much appreciated!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Zach B all those context variables (namespace, tenant, environment, stage) are optional, you can use all of them, or just two, or even just one

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

regarding the structure of stacks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. The folder structure is for humans to organize the stack config (so you understand where the config for each org, OU, account, region is). Atmos does not care about the folder structure and how you organize the files in the folders
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Atmos cares about context (namespace, tenant, environment, stage) - Atmos stack names are constructed from the context variables which must be defined in the stack config files
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Create Atmos Stacks | atmos

In the previous step, we’ve configured the Terraform components and described how they can be copied into the repository.

Zach B avatar

Right - We’ve been using these context variables for a while now with Cloud Posse modules and the null label.

I did eventually realize the directory structure is irrelevant. Thanks for clarifying.

I think, as I pointed out to Erik in the thread in the other channel, I had a case where there actually weren’t enough context variables that atmos uses to be specific enough for our hierarchy.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I see you are using 4 variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos supports 4

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(granted, the names might not be perfect for your case, e.g. you might call the namespace something else)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to make use of 4 context vars, you need to update stacks.name_pattern in atmos.yaml

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here’s a working example of using all 4 context vars

stacks:

  # Can also be set using `ATMOS_STACKS_NAME_PATTERN` ENV var
  name_pattern: "{namespace}-{tenant}-{environment}-{stage}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

do you need more than 4?

Zach B avatar

Right - At the moment what I’ve done is combined our OU names and account names into the single tenant variable, so that we could support separate accounts under the same OU, that aren’t necessarily directly related.

Zach B avatar

Such as workloads-data and workloads-jam

Zach B avatar

I think more than 4 would eliminate the need to do what I have done here, yes.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes it would

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but that will require a lot of redesigning including the label module

Zach B avatar

Makes sense

Zach B avatar

I’ll see how far this gets me. Luckily, we don’t actually use the tenant variable to name any of our resources, so this appears to work for us.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


At the moment what I’ve done is combined our OU names and account names into the single tenant variable, so that we could support separate accounts under the same OU
@Andriy Knysh (Cloud Posse) don’t we do this as well for disambiguation?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. there could be multiple “prod” accounts, across multiple OUs

Zach B avatar

Of course another strategy could be to ensure each distinguished account does live in its own OU or sub-OU, but that is certainly unnecessary to support tooling.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as a side note, if you’re operating in AWS make sure you’re acutely aware of resource name length limits. Make sure to try to keep each context parameter as short as possible.

Zach B avatar

Right, we do make use of id_length_limit - You’ve really put together a lot of necessary configuration!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Zach B also, if acme is used in all stacks, you don’t need to include it, and you have one more context var to use

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. if a company operates under just one namespace, we don’t include it in the stack names

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c it makes all names longer and is not necessary

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

only if we need to use multiple Orgs, we use namespace

Zach B avatar

That makes sense. And that is the case for us (single namespace throughout).

We have been including it in our resource names by default. One idea was that it would add a little bit of additional uniqueness to resources that required uniqueness, such as S3 buckets.

Although there is a good consideration for dropping it completely.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, tenant just roughly corresponds to an OU

Zach B avatar

Are there any examples of IAM policies in atmos? This is usually a tricky one.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if, let’s say, your sandbox account is not in any OU, you can still create a virtual tenant and use it

1
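
For example, a short sketch of a “virtual” tenant for sandbox accounts that do not sit under a real OU (all names here are hypothetical):

# stacks/orgs/acme/sandbox/sandbox-account-1/us-east-1.yaml
vars:
  tenant: sandbox             # virtual tenant, no matching OU required
  stage: sandbox-account-1    # identifies the individual account
  environment: use1
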
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not about the Org structure per se, it’s more about naming conventions (how all the resources get their names/IDs)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Are there any examples of IAM policies in atmos? This is usually a tricky one

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos as a CLI does not care about IAM and access, it just cares about configurations (and making them DRY)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all those IAM roles are Terraform concerns

Zach B avatar

So essentially, a component.

Zach B avatar

E.g. ECS task execution role/policy. These policies can be very granular and change from ECS task to ECS task.

I assume the best way would still be to create a component for it that is used by an atmos stack.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, all of that is the component’s concern. In Atmos, you create the config for the component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Zach B regarding using the namespace. 1) You can (and should) use it in the label module for it to be part of the resource names/IDs; but 2) you should not use it for Atmos stack names b/c it’s the same Org and the namespace is the same

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’m saying that those two things are configured separately

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform:
  vars:
    label_order:
    - namespace
    - tenant
    - environment
    - stage
    - name
    - attributes
    descriptor_formats:
      account_name:
        format: "%v-%v"
        labels:
        - tenant
        - stage
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how to configure the label module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the Atmos stack pattern is configured in atmos.yaml, which can be the same as label_order above, or completely different (e.g. not using namespace in the stack name pattern)
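
As a sketch of that separation (values assumed from the discussion above): drop namespace from the stack names while keeping it in the resource names.

# atmos.yaml - namespace omitted from the stack name pattern
stacks:
  name_pattern: "{tenant}-{environment}-{stage}"

# stack config - namespace still included in resource names/IDs via the label module
terraform:
  vars:
    namespace: acme
    label_order:
    - namespace
    - tenant
    - environment
    - stage
    - name
    - attributes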

Zach B avatar

Thanks @Andriy Knysh (Cloud Posse) It seemed like passing the namespace in through atmos defaults was the simplest way to get it down into the components to eventually be used by the label module though.

I’m doing my best to understand this part.

Zach B avatar

Is it possible for atmos to generate backend files when the backend uses blocks?

E.g.:

terraform {
  backend "remote" {
    hostname = "app.terraform.io"
    organization = "company"

    workspaces {
      prefix = "my-app-"
    }
  }
}

How can the workspaces block be included with YAML?

---
terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I believe the YAML is simply a YAML representation of the HCL data structure (e.g. HCL -> JSON -> YAML)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, workspaces are managed by atmos. You can overwrite it, but that will lose some of the convenience that atmos provides.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) I don’t see any parameter in the atmos.yaml config to manage the workspace format: https://atmos.tools/cli/configuration

CLI Configuration | atmos

Use the atmos.yaml configuration file to control the behavior of the atmos CLI.

Zach B avatar

What I was experiencing was that atmos was creating a new workspace, but storing the state locally. It appeared I needed to define the remote state for Terraform Cloud in some way.

Zach B avatar

I believe I needed atmos to generate the backend files for the components at least, but it could not generate the “workspaces” block from YAML.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

workspaces.prefix is not supported (we did not test remote backend much, we mostly use s3). Please open an issue in atmos repo and we’ll review it

Zach B avatar

Thanks for clarifying

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that you can add backend.tf or backend.tf.json manually for each component and configure atmos to not generate backend files - this will work for you w/o waiting for a fix
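
A minimal sketch of that setup, using the components.terraform.auto_generate_backend_file setting mentioned later in this thread: commit a hand-written backend.tf per component and turn off generation in atmos.yaml.

# atmos.yaml
components:
  terraform:
    # use the committed backend.tf files instead of generating backend.tf.json
    auto_generate_backend_file: false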

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, you can override the auto-generated TF workspace per component in the metadata section

    "test/test-component-override-3":
      metadata:
        # Terraform component. Must exist in `components/terraform` folder.
        # If not specified, it's assumed that this component `test/test-component-override-3` is also a Terraform component
        # in `components/terraform/test/test-component-override-3` folder
        component: "test/test-component"
        # Override Terraform workspace
        # Note that by default, Terraform workspace is generated from the context, e.g. `<tenant>-<environment>-<stage>`
        terraform_workspace: test-component-override-3-workspace
Zach B avatar

It seems like this would require changing the backend.tf each time a different stack uses that component. Because each stack’s component usage will correspond to a different workspace, but each component only has one backend.tf at a given time. (I think)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

S3 backend has

      backend:
        s3:
          workspace_key_prefix: infra-vpc
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and that allows you to use a static backend file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use

workspaces {
      prefix = "my-app-"
    }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for remote backend (in the manually created backend file), it should do the same, no?

Zach B avatar

Yes that should do the same

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Zach B also (sorry if I misled you), whatever you put into backend.remote section in YAML, will be generated into the backend.tf.json file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
      workspaces:
        prefix: "my-app-"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

will work and will be in the generated backend file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so remote should work the same way as s3: auto-generated backend files, auto-generated TF workspaces, and using the workspace prefix

Zach B avatar

@Andriy Knysh (Cloud Posse) I tried that, it did not work, at least for “workspaces.name”

Zach B avatar

I assume it is supported for “workspaces.prefix” instead?

Zach B avatar

Also, Terraform Cloud recommends using the “cloud” block for backend configuration rather than the “backend remote” block if you are using Terraform Cloud for state management. I’m assuming atmos does not support generating this “cloud” block and will only generate a “backend” block?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I tried that, it did not work, at least for “workspaces.name”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I mean if you configure it in YAML and generate the backend, it will end up in the backend.tf.json file (all blocks, all maps, all lists - they are converted from YAML verbatim). Whether a generated block works with TF Cloud needs to be tested

Zach B avatar

Ahh I wonder if my issue is that atmos was converting from YAML to HCL. Maybe if I try YAML to JSON this will produce better results.

Zach B avatar

It appeared that by default, atmos was generating backend.tf rather than backend.tf.json.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Also, Terraform Cloud recommends using the “cloud” block for backend configuration rather than the “backend remote” block if you are using Terraform Cloud for state management. I’m assuming atmos does not support generating this “cloud” block and will only generate a “backend” block?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the cloud block is not supported (when we implemented it, TF did not have that block yet)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so atmos does the following

// generateComponentBackendConfig generates backend config for components
func generateComponentBackendConfig(backendType string, backendConfig map[any]any) map[string]any {
	return map[string]any{
		"terraform": map[string]any{
			"backend": map[string]any{
				backendType: backendConfig,
			},
		},
	}
}
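
So, for the remote backend YAML shown earlier, the generated backend.tf.json comes out shaped roughly like this (a sketch; formatting may differ):

{
  "terraform": {
    "backend": {
      "remote": {
        "hostname": "app.terraform.io",
        "organization": "company",
        "workspaces": {
          "prefix": "my-app-"
        }
      }
    }
  }
}
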
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


It appeared that by default, atmos was generating backend.tf rather than backend.tf.json

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I would say it’s the other way around but I don’t know what you are doing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos terraform plan/apply … always generates the backend file in JSON

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but this command https://atmos.tools/cli/commands/terraform/generate-backends can generate it in JSON, TF backend block, and HCL

atmos terraform generate backends | atmos

Use this command to generate the Terraform backend config files for all Atmos terraform components in all stacks.

Zach B avatar

https://sweetops.slack.com/archives/C031919U8A0/p1680133540898069?thread_ts=1680129423.450129&channel=C031919U8A0&message_ts=1680133540.898069

(Sorry, for some reason it’s not letting me do Slack replies)

Anyway, when I tried this with the “name” property rather than the “prefix” property and ran atmos terraform generate backend…, Atmos generated a backend.tf file without the “workspaces” block.

I’m going to give it another go.

terraform:
  backend_type: remote
  backend:
    remote:
      organization: company
      hostname: app.terraform.io
      workspaces:
        prefix: "my-app-"
Zach B avatar

This also generates the backend.tf file?

Zach B avatar

Ignore the last 2 messages. Apparently I’m having trouble with Slack for mobile.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok, this looks like a classic example of https://xyproblem.info/

The XY Problem

Asking about your attempted solution rather than your actual problem

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Let me explain a few ways of using backends in atmos:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. You can manually create the backend files in the component folders (for any type of backend). With TF workspace prefix, it will work for all stacks. It will not work only if your backend requires a separate role (e.g. AWS IAM role) for TF to assume for different accounts (in which case we always generate backends dynamically including the roles for TF to assume)
Zach B avatar

I think it would be better described as “I don’t know how to do multiple things in Atmos, and we’ve encountered those multiple things I don’t know how to do while trying to solve a single problem” - but yes, essentially.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. You can configure any backend in YAML (using the backend section) and then call https://atmos.tools/cli/commands/terraform/generate-backends to auto-generate ALL the backends for all the components at once (and then you can commit the files)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. You can configure any backend in YAML (using the backend section) and then, when calling atmos terraform plan/apply <component> -s <stack>, the backend file for the component in the stack will be generated automatically on the fly
Zach B avatar

#3 I believe is only true if components.terraform.auto_generate_backend_file is set to true in atmos.yaml?

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  4. Regarding the HCL format for the auto-generated backend files, I think there is a bug in atmos b/c of some restrictions of the HCL Golang library it’s using, so for the HCL format complex maps (maps inside of maps) are not generated correctly, but a simple map is ok, e.g. for the s3 backend (I think there is an open issue for this, we’ll have to take a look and see if it can be fixed). So don’t use HCL, use JSON - everything with JSON is ok
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so if you use the JSON format, you can use any of the #1,2,3 ways of working with the backends for any type of backend (including remote)

Zach B avatar

Thanks a lot for clarifying all of that. That is the way that I understood it as well up to this point. I am just looking to confirm some behavior at the moment.

Currently, it still appears atmos is generating a backend.tf file by default during atmos terraform generate backends, but it seems like you think it should be backend.tf.json by default?

With components.terraform.auto_generate_backend_file set to true - the backend file sometimes does not generate during a plan/apply and I receive a Terraform error about needing to run terraform init first. When it does generate the backend file, it is backend.tf.json by default.

Zach B avatar

So - it appears we have some strange behavior and potentially some fixes that need to be made.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

JSON is used by default in atmos terraform plan/apply <component> -s <stack>

Zach B avatar

I guess that would leave me wondering why it wouldn’t be used by default in atmos terraform generate backend

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

atmos terraform generate backend also generates a backend for a single component in JSON

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you are talking about https://atmos.tools/cli/commands/terraform/generate-backends/ - this uses HCL by default, but you can use the flag --format json

atmos terraform generate backends | atmos

Use this command to generate the Terraform backend config files for all Atmos terraform components in all stacks.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and to answer your question why HCL is the default and not JSON: b/c we did not know about that bug with complex maps in HCL

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


With components.terraform.auto_generate_backend_file set to true - the backend file sometimes does not generate during a plan/apply and I receive a Terraform error about needing to run terraform init first. When it does generate the backend file, it is backend.tf.json by default.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this ^ we’ve never seen before

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

anyway, if you want to generate all backends for all components for remote backend type, use atmos terraform generate backends --format json

Zach B avatar

Thanks a lot. Makes sense.

One note about atmos generate backend (singular)

It appears too closely tied to the s3 backend maybe? It expects the workspace_key_prefix property in your backend YAML, and will error if it is not present. For remote backends, this property does not exist.

Zach B avatar
atmos terraform generate backend vpc --stack acme-workloads-data-test-use1
{
  "terraform": {
    "backend": {
      "remote": {
        "hostname": "app.terraform.io",
        "organization": "ACME",
        "workspaces": {
          "prefix": "acme"
        }
      }
    }
  }
}

Backend config for the 'vpc' component is missing 'workspace_key_prefix'
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, you’re right about atmos generate backend (singular) - it’s tied to the s3 backend (we’ll fix it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the rest, atmos terraform generate backends --format json and atmos terraform plan/apply, work with any backends

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Zach B thank you for all the testing, you pointed out a few issues that we need to fix: support the cloud block (any type of block, not only backend); untie the atmos generate backend (singular) command from the s3 backend. Since we mostly use the s3 backend, those issues were not visible and not tested.

Zach B avatar

@Andriy Knysh (Cloud Posse) Thank you too, a lot. I think I’m about to point out a much deeper issue though with remote backends in atmos that use the workspaces block in the backend file. Currently looking into it.

Zach B avatar

You said atmos automatically manages workspaces and creates the workspace name, I think.

This, I believe, makes it incompatible with the following:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "acme"
    workspaces {
      name = "workspace-name" # will get error "Error loading state: workspaces not supported", because workspace is already set by atmos?
    }
  }
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, this one too. We need to look into remote backends, this is a separate task

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you get the error, or do you think you’ll get it?

Zach B avatar
Executing command:
/usr/local/bin/terraform workspace new acme-workloads-data-test-use1

^ this is one reason things can break with remote backends. prefix is supported with workspaces in the backend config, but of course a lot of people prefer workspaces to be created automatically, without requiring any user interaction.

Zach B avatar

I got the error myself.

Zach B avatar

It appears the terraform workspace new might set the TF_WORKSPACE ENV VAR, which is where things collide when using a remote backend config with a workspace name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, it does not set ENV vars

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think TF Cloud workspaces are different things from TF workspaces, no?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I personally used TF cloud 2-3 years ago, so don’t have understanding how it works now

Zach B avatar

I think a potential fix would be not to run terraform workspace new depending on the backend type? (Might not be that simple)

E.g. workspaces in terraform cloud can be automatically generated if they don’t already exist just by seeing the backend config for the first time.

But I think the issue is that atmos provides a way to create workspaces using your naming conventions without requiring you to hardcode them. That would be another problem to solve if you dropped terraform workspace new for certain backend types.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you ask the thing that knows a lot

Zach B avatar

You know what’s funny, I asked Chat GPT about atmos last week

Zach B avatar

It recommended it, haha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so are you sure that this error Error loading state: workspaces not supported is about TF workspaces, and not about your TF Cloud account (maybe you don’t have workspaces enabled with your plan)?

Zach B avatar

I’m 95% sure. I have at least 20-30 workspaces in that account lol.

Zach B avatar

And have been using it for about a year now.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


You know what’s funny, I asked Chat GPT about atmos last week

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what question did you ask?

Zach B avatar

“What do you think about atmos? (A terraform tool built by CloudPosse)”

Zach B avatar

“Atmos is a Terraform module generator tool built by CloudPosse that aims to simplify the process of creating reusable Terraform modules. The tool offers a number of features such as scaffolding, testing, and publishing modules to the Terraform Registry. I think it’s a great tool for those who are looking to create reusable Terraform modules with ease. The tool’s ability to generate a skeleton for a new module, including a test harness, makes it easier to get started with building modules. Additionally, the tool’s integration with Terraform Cloud makes it easy to automate the process of testing and publishing modules. Overall, if you’re looking to create reusable Terraform modules, I think atmos is definitely worth checking out.”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

well, the answer looks like a mix of common statements which could be applied to any such tool (and some of those don’t apply to our atmos). You give that answer when you don’t know the exact answer. The chat has a lot to learn yet

Zach B avatar

lol..

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but the AI gave a nice explanation of what atmos is

ATMOS (Automated Testing Management & Orchestration System)
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To be clear, TF Cloud Workspaces != Terraform Open Source Workspaces. I am pretty sure Open Source Workspaces are incompatible/do not work with TFC Workspaces.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks like there’s a workaround though

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(to be fair to ChatGPT, our atmos.tools docs were not available when they built the first language model)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll add a setting in atmos.yaml to enable/disable TF workspaces auto-generation, and fix a few issues in atmos with remote backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
CLI-driven Runs - Runs - Terraform Cloud | Terraform | HashiCorp Developer

Trigger runs from your terminal using the Terraform CLI. Learn the required configuration for remote CLI runs.

2023-03-30

2023-03-31

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

trying to take a look at atmos for our environment… we’re already using many cloudposse Terraform modules for our new deployments so looking to see how easily I can migrate things over to using atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need any help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Quick Start | atmos

Take 20 minutes to learn the most important atmos concepts.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Thanks @Andriy Knysh (Cloud Posse) I’m reading through the docs on there now. We currently use the tfstate-backend module to store our state files remotely in S3 and provide locking.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, the module is a good start to use Atmos (we have the config for it, let us know if you need help)

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@Andriy Knysh (Cloud Posse) let me get your thoughts on this… I had a bit of a hack using context.tf to match our expected naming… So I’m tweaking the label_order to produce the name following {namespace}-{environment}-{name}-{attributes}-{stage}. {stage} in our case is the region, either use1 or use2, and {environment} is our dev, qa, uat, preview or prod. We generate the {stage} using the terraform-aws-utils module… Any thoughts on a cleaner/simpler way to accomplish the same? This requires me to include the label_order hack in every module to remain consistent

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, include label_order at the top level in the stack configs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also update stacks.name_pattern in atmos.yaml

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

oh okay… so I can simply move that up into atmos then… I would add the label_order and label_as_tags (we get rid of all but name tags) to the *.tfvars we ran and then the stage was set inside the main.tf based on the region being deployed… I guess moving to use atmos we’d just have the stack imports for us-east-1 or us-east-2 and set stage in there and no longer need to use the utils module to generate the short name from the region long name
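
A sketch of what such a per-region import could look like under that convention (the file path and values are hypothetical):

# stacks/mixins/region/us-east-1.yaml
vars:
  region: us-east-1
  stage: use1   # short region code, replacing the terraform-aws-utils lookup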

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, in the company-wide defaults, you can add

# orgs/_defaults.yaml
#
# COMPANY-WIDE DEFAULTS
#

terraform:
  vars:
    label_order:
    - namespace
    - tenant
    - environment
    - stage
    - name
    - attributes
    descriptor_formats:
      account_name:
        format: "%v-%v"
        labels:
        - tenant
        - stage
      stack:
        format: "%v-%v-%v-%v"
        labels:
        - namespace
        - tenant
        - environment
        - stage

  backend_type: s3
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and update label_order according to your requirements

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

ah okay… that makes sense now seeing an example, and it also looks easier to read than the example that shows stacks.name_pattern as {tenant}-{environment}-{stage}. If I follow your example there, that would just mean it would be equivalent to {namespace}-{tenant}-{environment}-{stage}?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we don’t use any *.tfvars files for many reasons, we have all the vars in the stack configs:

  1. No need to have the vars in many diff places
  2. Atmos does not see the vars in those files
  3. If all the vars are in YAML, the following Atmos commands will show them (including the sources of all vars): atmos describe component…, atmos describe stacks
  4. If all vars are in the stack configs, you can use the Atmos validation with JSONSchema and OPA to validate the vars and relations b/w the vars https://atmos.tools/core-concepts/components/validation
Component Validation | atmos

Use JSON Schema and OPA policies to validate Components.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And, lastly, if there are .tfvars not managed by atmos, they could take precedence, leading to unpredictable behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This last issue was the real kicker that led us to officially not recommend them. Users were getting very confusing results.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

yeah I thought about that as well @Erik Osterman (Cloud Posse)

1
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

I’ve kept my interactions as an independent open-source guy, but I think I may have seen that our parent company has engaged you guys at some point/level… I’m trying to make changes with my boss’s support, while fitting it in without disruption to what is already there. Getting things under Terraform at any level thus far has been a big win, but it’s led things towards Terraliths, as you called them the other week in office hours

1
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

We already have accounts set up and running things under them… That would present some challenges to a full multi-account deployment, but I’ve also said it could be done
