#atmos (2022-05)
2022-05-01
what account should tfstate-backend be set up in?
We typically put it in the root, since there is no other account yet when cold starting
Also, we have started taking a hierarchical approach to state backends
Provision the root state backend first, then provision the additional buckets using it as the backend for the other buckets
what tenant would that go into since atmos requires the tenant-environment-stage formatting
That is configurable
You could do one bucket per account
Or based on OU
Or some other convention
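in practice that just means overriding the backend bucket per account/OU in the stack configs — a minimal sketch, with the bucket and table names assumed:
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-plat-tfstate              # assumed per-OU bucket name
      dynamodb_table: acme-plat-tfstate-lock # assumed lock table name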
i set the tenants up to match the ou, and the environment to match the account names
I think we still provision the buckets in the root account for simplicity's sake
i put the root account under mgmt in an env called gbl
yeah
also just want to point out that tenants are optional. the only hard requirement is environment and stage.
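the pattern itself is configured in atmos.yaml — a minimal sketch (base path assumed):
stacks:
  base_path: stacks
  name_pattern: "{tenant}-{environment}-{stage}"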
i’ve been having issues getting the account-map data when using my sso user signed in with the identity role. when i log the outputs, i see some access denied errors for dynamodb/PutItem and s3/listobjects
tfstate-backend is set up in the root account, and i’ve created the delegated roles in the root account as well
^^ looks like I needed to modify the iam_role_arn_template_template to use tenants!
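for anyone else who hits this, the override goes in the account-map component’s vars. A sketch only — the variable name is the one mentioned above, but the format-string value here is hypothetical; take the real default from the component’s variables.tf:
components:
  terraform:
    account-map:
      vars:
        # hypothetical value — each %s slot stands for a label part (partition, account, namespace, tenant, environment, stage)
        iam_role_arn_template_template: "arn:%s:iam::%s:role/%s-%s-%s-%s-%%s"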
2022-05-02
vendoring coming soon to atmos: https://github.com/cloudposse/atmos/pull/145
what
• Add atmos vendor commands
• Add atmos vendor pull command
• Improve error messages
• Cleanup code
why
• atmos vendor commands are used to manage vendoring for components and stacks
• atmos vendor pull -c <component> command pulls sources and mixins for the specified component
• Support k8s-style YAML config (file component.yaml) to describe component vendoring configuration. The file is placed into the component folder and then the atmos command atmos vendor pull -c <component> is executed to pull the sources and mixins for the component
component.yaml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: vpc-flow-logs-bucket-vendor-config
  description: Source and mixins config for vendoring of 'vpc-flow-logs-bucket' component
spec:
  source:
    # 'uri' supports all protocols (local files, Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP),
    # and all URL and archive formats as described in https://github.com/hashicorp/go-getter
    # In 'uri', Golang templates are supported https://pkg.go.dev/text/template
    # If 'version' is provided, '{{.Version}}' will be replaced with the 'version' value before pulling the files from 'uri'
    uri: github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref={{.Version}}
    version: 0.194.0
    # Only include the files that match the 'included_paths' patterns
    # If 'included_paths' is not specified, all files will be matched except those that match the patterns from 'excluded_paths'
    # 'included_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    # https://en.wikipedia.org/wiki/Glob_(programming)
    # https://github.com/bmatcuk/doublestar#patterns
    included_paths:
      - "**/*.tf"
      - "**/*.tfvars"
      - "**/*.md"
    # Exclude the files that match any of the 'excluded_paths' patterns
    # Note that we are excluding 'context.tf' since a newer version of it will be downloaded using 'mixins'
    # 'excluded_paths' support POSIX-style Globs for file names/paths (double-star `**` is supported)
    excluded_paths:
      - "**/context.tf"
  # mixins override files from 'source' with the same 'filename' (e.g. 'context.tf' will override 'context.tf' from the 'source')
  # mixins are processed in the order they are declared in the list
  mixins:
    # https://github.com/hashicorp/go-getter/issues/98
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf
      filename: context.tf
    - uri: https://raw.githubusercontent.com/cloudposse/terraform-aws-components/{{.Version}}/modules/datadog-agent/introspection.mixin.tf
      version: 0.194.0
      filename: introspection.mixin.tf
• The URIs (uri) in the vendoring config support all protocols (local files, Git, Mercurial, HTTP, HTTPS, Amazon S3, Google GCP), and all URL and archive formats as described in https://github.com/hashicorp/go-getter
• included_paths and excluded_paths support POSIX-style Globs for file names/paths (double-star ** is supported as well)
test
atmos vendor pull -c infra/vpc-flow-logs-bucket
Pulling sources for the component 'infra/vpc-flow-logs-bucket'
from 'github.com/cloudposse/terraform-aws-components.git//modules/vpc-flow-logs-bucket?ref=0.194.0'
and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'
Including the file 'README.md' since it matches the '**/*.md' pattern from 'included_paths'
Excluding the file 'context.tf' since it matches the '**/context.tf' pattern from 'excluded_paths'
Including the file 'default.auto.tfvars' since it matches the '**/*.tfvars' pattern from 'included_paths'
Including the file 'main.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Including the file 'outputs.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Including the file 'providers.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Including the file 'variables.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Including the file 'versions.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Pulling the mixin 'https://raw.githubusercontent.com/cloudposse/terraform-null-label/0.25.0/exports/context.tf'
for the component 'infra/vpc-flow-logs-bucket'
and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'
Pulling the mixin 'https://raw.githubusercontent.com/cloudposse/terraform-aws-components/0.194.0/modules/datadog-agent/introspection.mixin.tf'
for the component 'infra/vpc-flow-logs-bucket'
and writing to 'examples/complete/components/terraform/infra/vpc-flow-logs-bucket'
2022-05-05
v1.4.13
what
• Improve error handling and error messages
• Add atmos validate stacks command
why
• Check and validate all YAML files in the stacks folder
• Detect invalid YAML and print the file names and the line numbers
test
atmos validate stacks
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-1.yaml' yaml: line 15: found unknown directive name
Invalid YAML file 'catalog/invalid-yaml/invalid-yaml-2.yaml' yaml: line 16: could not find expected ':'
Invalid YAML file…
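for illustration, a hypothetical stacks/catalog/invalid-yaml/invalid-yaml-2.yaml with a key missing its colon is the kind of mistake that triggers the second error above (the file content here is an assumption):
components:
  terraform        # ':' missing after 'terraform'
  vars:
    stage: dev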
2022-05-09
v1.4.14
what
• Support terraform two-word commands
why
• Support terraform workspace commands:
terraform workspace list
terraform workspace select
terraform workspace new
terraform workspace delete
terraform workspace show
references
https://www.terraform.io/cli/commands/workspace
test
atmos terraform workspace list test/test-component-override-3 -s tenant1-ue2-dev
Executing command: /usr/local/bin/terraform workspace list
default…
The workspace command helps you manage workspaces.
2022-05-10
i’m running into an issue with remote-state for the account-map component. i can run plan and deploy while authenticated with the root account, but when authenticated through the identity account, I run into this error
Error: Error loading state error
│
│ with module.iam_roles.module.account_map.data.terraform_remote_state.s3[0],
│ on .terraform/modules/iam_roles.account_map/modules/remote-state/s3.tf line 8, in data "terraform_remote_state" "s3":
│ 8: backend = "s3"
│
│ error loading the remote state: failed to lock s3 state: 2 errors occurred:
│ * ResourceNotFoundException: Requested resource not found
│ * ResourceNotFoundException: Requested resource not found
│
while debugging, I pasted the role_arn for a role in the account where the s3 bucket and dynamodb resources are located and everything worked fine.
from my stack config:
components:
  terraform:
    vpc:
      backend_type: s3
      remote_state_backend:
        s3:
          role_arn: arn:aws:iam::1234567890:role/namespace-environment-root-admin
do you have /usr/local/etc/atmos/atmos.yaml?
yeah, i think it was a really dumb mistake on my end. i’m confirming something now
dumb mistake. i didn’t know that I needed to have the remote_state fields in the stack that was being referenced. i was treating it like the backend config
Is this something we can add to the new validate command, @Andriy Knysh (Cloud Posse) ?
i guess it was missing remote-state.tf
@Michael Dizon ?
remote-state.tf was there, but i did not have the remote_state_backend fields populated in the yaml config for account-map
or as a global var
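for reference, this is roughly what those fields look like when set globally, so every component (and remote-state lookups against it) inherits them. A minimal sketch — the bucket, table, and region values are assumed; the role_arn is the one from the config above:
terraform:
  backend_type: s3
  backend:
    s3:
      bucket: namespace-ue2-root-tfstate              # assumed name
      dynamodb_table: namespace-ue2-root-tfstate-lock # assumed name
      region: us-east-2                               # assumed region
  remote_state_backend:
    s3:
      role_arn: arn:aws:iam::1234567890:role/namespace-environment-root-admin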
it seems like the utils_stack_config_yaml data component isn’t pulling in the config, but i don’t know how to debug/confirm what’s happening
2022-05-11
Excited to see that docs site PR get merged. Let us know when it’s live!
It’s just a skeleton, we’ll populate the sections. But yes, we will have atmos docs soon :)
2022-05-16
v1.4.15
what
• Various fixes and improvements
• Better error handling and error messages
• Improve validate stacks command
• Improve describe stacks command
why
In this configuration
components:
  terraform:
    "test/test-component-override":
      # The component attribute specifies that test/test-component-override inherits from the test/test-component base component,
      # and points to the test/test-component Terraform component in the components/terraform folder
      component:…
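the truncated snippet above would continue with the attribute being described; a minimal sketch (the var is assumed):
components:
  terraform:
    "test/test-component-override":
      component: test/test-component
      vars:
        enabled: true   # assumed example var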
2022-05-19
v1.4.16
what
• Support {attributes} token in the components.helmfile.cluster_name_pattern CLI config
why
• Allow using cluster_name_pattern in the following format: {namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster (note that the tokens can be defined in any order)
• When deploying multiple EKS clusters into the same AWS account and region, we can use attributes (blue, green, etc.) as part of the EKS cluster names to name the clusters differently
test
// Define variables for a component…
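a sketch of the two pieces involved — the pattern is from the release note; the stack values are assumptions:
# atmos.yaml
components:
  helmfile:
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-{attributes}-eks-cluster"
# stack config for the 'blue' cluster (values assumed)
vars:
  attributes:
    - blue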
2022-05-20
v1.4.17
what
• Improve atmos vendor pull command
• Add more vendoring examples and tests
why
• Show an example of vendoring a component with subfolders in the root folder (account-map with modules subfolder), and how to configure included_paths
test
vendoring a component with modules subfolder
atmos vendor pull -c infra/account-map
Including 'README.md' since it matches the '**/*.md' pattern from 'included_paths'
Including 'context.tf' since it matches the '**/*.tf' pattern from 'included_paths'
Including…
2022-05-23
v1.4.18
what
• Update atmos workflow command
why
• Allow specifying the stack for workflows on the command line
• The stack defined on the command line (atmos workflow <name> -f <file> -s <stack>) has the highest priority; it overrides all other stack attributes
test
terraform-plan-all-test-components:
  description: |
    Run 'terraform plan' on 'test/test-component' and all its derived components.
    The stack must be provided on the command line:
    atmos workflow terraform-plan-all-test-components -f workflow1 -s…
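a fuller sketch of such a workflow file — the description is from the release note; the steps are assumed:
workflows:
  terraform-plan-all-test-components:
    description: |
      Run 'terraform plan' on 'test/test-component' and all its derived components.
      The stack must be provided on the command line:
      atmos workflow terraform-plan-all-test-components -f workflow1 -s <stack>
    steps:
      - command: terraform plan test/test-component           # assumed steps
      - command: terraform plan test/test-component-override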
2022-05-30
I’m from Grizzly Force, we were a paying client back in the day, well before Atmos, but I had updated everything to run the atmos way
I can no longer find a Dockerfile for atmos in your repos
actually, that’s not correct, i’ve now grabbed the new Dockerfile from the atmos/examples/complete folder, it doesn’t appear to reference variant at all. there’s also no makefile though. i think i must have missed some critical changes.
To make this easier: I have my components and my stacks, how do i get this to work quickly? I don’t have time to be stalled. I use {namespace}-{stage}-{name} for my ATMOS_STACKS_NAME_PATTERN
i would like to use vendir to pull in my components from github
❯ ls
Permissions Size User Group Date Modified Name
.rw-r--r-- 3.4k cody cody 30 May 17:16 atmos.yaml
drwxr-xr-x - cody cody 30 May 17:15 components
.rw-r--r-- 1.2k cody cody 30 May 16:22 Dockerfile
drwxr-xr-x - cody cody 30 May 16:22 rootfs
drwxr-xr-x - cody cody 15 Mar 11:04 stacks
❯ atmos terraform plan vpc -s ca1-dev
Searched all stack files, but could not find config for the component 'vpc' in the stack 'ca1-dev'.
Check that all attributes in the stack name pattern '{namespace}-{stage}-{name}' are defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
The example is the quickest way to get started
i’ve followed the example as closely as possible, but my resources have come from an old version
Since your stack is ca1-dev, the stack name pattern should be {environment}-{stage}
should i change my namespace var to environment?
will that cause all my resources that already exist to plan and apply correctly? or will it want to destroy everything?
atmos finds a component in a stack not by the names of the yaml files, but by the context, meaning by the vars inside the files
{environment}-{stage} is your stack name pattern
yes, i’ve pasted more in the main channel before you responded, it shows how my setup is
So the environment and stage vars should be defined in the files
If you are using namespace as the environment/region (which we usually don’t), then your stack name pattern should be {namespace}-{stage}
And on the command line, you should call it: -s gfca-dev
ok, i will try that thx. one minute
The stack name is about the context variables, not about the file names, file names can be anything, and they can be in any folder at any level
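for example, with the {namespace}-{stage} pattern, a stack file anywhere under stacks/ is addressable as -s gfca-dev as long as it defines these vars (a sketch — the file path and component list are assumed):
# stacks/gfca/dev.yaml — the location and name of the file do not matter
vars:
  namespace: gfca
  stage: dev
components:
  terraform:
    vpc:
      vars: {}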
beauty, it runs now. my aws config isn’t configured since i’m outside of the typical container, i’ll get that setup and try to run
what makefile are you guys using to build and install the container these days?
make all
yes, but what is the makefile you use?
If you are using geodesic, as the example does
the example doesn’t have a makefile
what is your workflow to build a new project? Download the geodesic project, download atmos inside of it?
taking a further look, i don’t think that makes sense
is it possible that the atmos example is missing a Makefile?
It doesn’t have a Makefile
ok, so i want to use atmos inside geodesic, is there documentation?
The Dockerfile is ok and working
thank you again for your help btw, there’s just a disconnect here since you guys have done some updates
And it uses geodesic
so just build the container and run it?
Yes
i wanted a sweet sweet makefile that did that for me
that was my approach before, from you guys
I’ll add a Makefile to the example
amazing, thank you, will that take some time?
So the Makefile should be the same as you used before
It builds geodesic, doesn’t matter what’s inside
that would be great, but it seems to break when trying to use vendir inside the container, do you still use vendir?
oh, you’re correct, it was the dockerfile that was breaking i think
trying to do vendir
No
See the Dockerfile from the example
kk
Add a Makefile that you had before
From an infrastructure repo
yes, the install is broken but returns an error message
Did you use any Makefile before and call make all on it?
❯ make install
########################################################################################
# Attach a terminal (docker run --rm --it ...) if you want to run a shell.
# Run the following to install the script that runs
# Geodesic with all its features (the recommended way to use Geodesic):
#
# docker run --rm cloudposse/geodesic:latest-debian init | bash
#
# After that, you should be able to launch Geodesic just by typing
#
# geodesic
#
########################################################################################
## Install wrapper script from geodesic container
install:
	@docker run --rm $(DOCKER_IMAGE_NAME) | bash -s $(DOCKER_TAG) || (echo "Try: sudo make install"; exit 1)
ok i was able to install and run geodesic. unfortunately the symlink to my projects dir is broken, but that is a docker problem, not geodesic
do you have a trick for that? or i’ll need to figure out how to re-mount it
i can probably do that inside the geodesic bash script
yeah that worked
argh, atmos isn’t installed in geodesic
√ . [cody] atmos ⨠ atmos terraform plan vpc -s gfca-dev
bash: atmos: command not found
⧉ geodesic
√ . [cody] atmos ⨠ /localhost/bin/atmos
Cannot run while in a geodesic shell
neat
ah, that’s not atmos, that’s why, it’s geodesic
export DOCKER_ORG ?= XXXXXXXX
export DOCKER_IMAGE ?= $(DOCKER_ORG)/infrastructure
export DOCKER_TAG ?= latest
export DOCKER_IMAGE_NAME ?= $(DOCKER_IMAGE):$(DOCKER_TAG)
export APP_NAME = XXXXXXXXXX
GEODESIC_INSTALL_PATH ?= /usr/local/bin
export INSTALL_PATH ?= $(GEODESIC_INSTALL_PATH)
export SCRIPT = $(INSTALL_PATH)/$(APP_NAME)
export ADR_DOCS_DIR = docs/adr
export ADR_DOCS_README = $(ADR_DOCS_DIR)/README.md
BUILD_HARNESS_EXTENSIONS_PATH := $(CURDIR)/.build-harness-extensions
-include $(shell curl -sSL -o .build-harness "https://cloudposse.tools/build-harness"; echo .build-harness)
.DEFAULT_GOAL := default
## Initialize build-harness, install deps, build docker container, install wrapper script and run shell
all: init deps build install run
@exit 0
## Install dependencies (if any)
deps:
@exit 0
## Build docker image
build:
@make --no-print-directory docker/build
## Push docker image to registry
push:
@$(call fail,Refusing to push $(DOCKER_IMAGE_NAME) to docker hub)
## Install wrapper script from geodesic container
install:
@docker run --rm $(DOCKER_IMAGE_NAME) | bash -s $(DOCKER_TAG) || (echo "Try: sudo make install"; exit 1)
## Start the geodesic shell by calling wrapper script
run:
$(SCRIPT)
a working Makefile to be used with the Dockerfile from the example (you will need to adjust it according to your needs)
Thank you! I will add vendir back in as well. I like the requirement of pulling the components from a separate git directory. It forces me to push all changes into git before they can be pulled and applied.
the new atmos is a Go (Golang) program; it does not need vendir
Yes I suppose I could just use go get? I’m not super familiar with go
Will figure that out
@Andriy Knysh (Cloud Posse) i’ve used your makefile, its returning an error
❯ make all
Makefile:19: *** missing separator. Stop.
i think my editor has changed tabs to spaces
or slack, or your editor
with your makefile, i still get an error
Successfully tagged grizzlyforce/grizzly-atmos:latest
########################################################################################
# Attach a terminal (docker run --rm --it ...) if you want to run a shell.
# Run the following to install the script that runs
# Geodesic with all its features (the recommended way to use Geodesic):
#
# docker run --rm cloudposse/geodesic:latest-debian init | bash
#
# After that, you should be able to launch Geodesic just by typing
#
# geodesic
#
########################################################################################
/usr/local/bin/grizzly-atmos
/bin/bash: line 1: /usr/local/bin/grizzly-atmos: No such file or directory
make: *** [Makefile:39: run] Error 127
i am using the Dockerfile from the example
@Andriy Knysh (Cloud Posse) here is my Makefile with my custom paths
export DOCKER_ORG ?= grizzlyforce
export DOCKER_IMAGE ?= $(DOCKER_ORG)/grizzly-atmos
export DOCKER_TAG ?= latest
export DOCKER_IMAGE_NAME ?= $(DOCKER_IMAGE):$(DOCKER_TAG)
export APP_NAME = grizzly-atmos
GEODESIC_INSTALL_PATH ?= /home/cody/.local/bin
export INSTALL_PATH ?= $(GEODESIC_INSTALL_PATH)
export SCRIPT = $(INSTALL_PATH)/$(APP_NAME)
export ADR_DOCS_DIR = docs/adr
export ADR_DOCS_README = $(ADR_DOCS_DIR)/README.md
BUILD_HARNESS_EXTENSIONS_PATH := $(CURDIR)/.build-harness-extensions
-include $(shell curl -sSL -o .build-harness "https://cloudposse.tools/build-harness"; echo .build-harness)
.DEFAULT_GOAL := default
## Initialize build-harness, install deps, build docker container, install wrapper script and run shell
all: init deps build install run
@exit 0
## Install dependencies (if any)
deps:
@exit 0
## Build docker image
build:
@make --no-print-directory docker/build
## Push docker image to registry
push:
@$(call fail,Refusing to push $(DOCKER_IMAGE_NAME) to docker hub)
## Install wrapper script from geodesic container
install:
@docker run --rm $(DOCKER_IMAGE_NAME) | bash -s $(DOCKER_TAG) || (echo "Try: sudo make install"; exit 1)
## Start the geodesic shell by calling wrapper script
run:
$(SCRIPT)
when i run make all, this is the final output
❯ make all
exit 0
Removing existing build-harness
Cloning https://github.com/cloudposse/build-harness.git#master...
Cloning into 'build-harness'...
remote: Enumerating objects: 152, done.
remote: Counting objects: 100% (152/152), done.
remote: Compressing objects: 100% (124/124), done.
remote: Total 152 (delta 8), reused 76 (delta 5), pack-reused 0
Receiving objects: 100% (152/152), 98.44 KiB | 1.33 MiB/s, done.
Resolving deltas: 100% (8/8), done.
Building grizzlyforce/grizzly-atmos:latest from ./Dockerfile with [] build args...
Sending build context to Docker daemon 7.058MB
Step 1/20 : ARG GEODESIC_VERSION=1.1.0
Step 2/20 : ARG GEODESIC_OS=debian
Step 3/20 : ARG ATMOS_VERSION=1.4.17
Step 4/20 : ARG TF_VERSION=1.1.9
Step 5/20 : FROM cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}
---> 04de881826eb
Step 6/20 : ENV MOTD_URL="https://geodesic.sh/motd"
---> Using cache
---> 23f2af0d89e8
Step 7/20 : ENV AWS_SAML2AWS_ENABLED=false
---> Using cache
---> 0f5ca92a68fa
Step 8/20 : ENV AWS_VAULT_ENABLED=false
---> Using cache
---> dd44650a087e
Step 9/20 : ENV AWS_VAULT_SERVER_ENABLED=false
---> Using cache
---> cd9458e189b4
Step 10/20 : ENV GEODESIC_TF_PROMPT_ACTIVE=false
---> Using cache
---> 2f05253767bb
Step 11/20 : ENV DIRENV_ENABLED=false
---> Using cache
---> 0e8e7bb1dc6e
Step 12/20 : ENV AWS_SDK_LOAD_CONFIG=1
---> Using cache
---> 7924fc733fbd
Step 13/20 : ENV AWS_DEFAULT_REGION=ca-central-1
---> Using cache
---> 969fba91f412
Step 14/20 : ARG TF_VERSION
---> Using cache
---> 6c96e15430c3
Step 15/20 : RUN apt-get update && apt-get install -y -u --allow-downgrades terraform-1="${TF_VERSION}-*" && update-alternatives --set terraform /usr/share/terraform/1/bin/terraform
---> Using cache
---> 030486afea9b
Step 16/20 : ARG ATMOS_VERSION
---> Using cache
---> d73a3cc3df21
Step 17/20 : RUN apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"
---> Using cache
---> 122c8c14ba3e
Step 18/20 : COPY rootfs/ /
---> Using cache
---> efe5cf028ea0
Step 19/20 : ENV BANNER="atmos"
---> Using cache
---> c9b92b805e9e
Step 20/20 : WORKDIR /
---> Using cache
---> c6c955f7e195
Successfully built c6c955f7e195
Successfully tagged grizzlyforce/grizzly-atmos:latest
########################################################################################
# Attach a terminal (docker run --rm --it ...) if you want to run a shell.
# Run the following to install the script that runs
# Geodesic with all its features (the recommended way to use Geodesic):
#
# docker run --rm cloudposse/geodesic:latest-debian init | bash
#
# After that, you should be able to launch Geodesic just by typing
#
# geodesic
#
########################################################################################
/home/cody/.local/bin/grizzly-atmos
# Mounting /home/cody into container with workdir /mnt/storage/Cody/projects/grizzlyforce/atmos
# Starting new grizzly-atmos session from cloudposse/geodesic:latest
# Exposing port 34624
# No configured working directory is accessible:
# GEODESIC_WORKDIR is ""
# GEODESIC_HOST_CWD is "/mnt/storage/Cody/projects/grizzlyforce/atmos"
# Defaulting initial working directory to "/conf"
# Geodesic version 1.1.0 based on Alpine Linux v3.15 (3.15.4)
(geodesic ASCII-art banner)
IMPORTANT:
# Unless there were errors reported above,
# * Your host $HOME directory should be available under `/localhost`
# * Your host AWS configuration and credentials should be available
# * Use Leapp on your host computer to manage your credentials
# * Leapp is free, open source, and available from https://leapp.cloud
# * Use AWS_PROFILE environment variable to manage your AWS IAM role
# * You can interactively select AWS profiles via the `assume-role` command
| Documentation | https://docs.cloudposse.com | Check out documentation |
| Public Slack | https://slack.cloudposse.com | Active & friendly DevOps community |
| Paid Support | hello@cloudposse.com | Get help fast from the experts |
* Could not find profile name for arn:aws:iam::039005549928:user/cody ; calling it "cody"
* Screen resized to 123x154
⧉ geodesic
√ . [cody] ~ ⨠
it seems to install to my ~/.local/bin/grizzly-atmos fine
but then when i run grizzly-atmos, it pops me into the geodesic container, and there isn’t any atmos installed
it also isn’t using the debian version
this all used to work fine, it’s super frustrating to come back to this every time and have this flow broken
@Andriy Knysh (Cloud Posse) i got atmos running again, and renamed all the state files and dynamodb keys manually to change ca1 to gfca so that it matched all the resources; i’m now able to run plan against all my infrastructure
i haven’t been able to get the container to install properly, but i think i’m going to abandon your guys container anyways, thanks for all your help
I’m not on a computer now, but I pushed the Makefile into the update-docs-4 atmos branch
In your container, you can install atmos by using just one command as shown in the example Dockerfile
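(that one command is the RUN apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*" line visible at Step 17/20 of the build output above)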
Yes thank you!