#atmos (2023-05)
2023-05-09
v1.35.0
what: Update the atmos describe component command. Update docs: https://atmos.tools/cli/commands/describe/component/
why: Add the sources of the values from the component’s sections (vars, env, settings) to the command output.
Sources of Component ENV Variables: The sources.env section of the atmos describe component command output shows the final deep-merged component environment variables and their inheritance chain….
Use this command to describe the complete configuration for an Atmos component in an Atmos stack.
2023-05-11
@Andriy Knysh (Cloud Posse) how do you get the json output of the plan command in with atmos?
atmos terraform show ecs-service -s pepe -json
that does not work
Searched all stack YAML files, but could not find config for the component 'ecs-service' in the stack 'on'.
Check that all variables in the stack name pattern '{namespace}-{environment}' are correctly defined in the stack config files.
Are the component and stack names correct? Did you forget an import?
the error is not related to getting the outputs
the component or stack does not exist
use the correct component and stack, then try the show
command
no, when I add the `-json` it takes the `on` as the stack name
if I remove the --json, everything works
i’m not sure, need to test it
you can always use https://atmos.tools/cli/commands/terraform/shell and then plain TF commands
This command starts a new SHELL
configured with the environment for an Atmos component in a stack to allow execution of all native terraform commands inside the shell without using any atmos-specific arguments and flags. This may be helpful to debug a component without going through Atmos.
I can’t do shell on the cicd tool
i just ran
atmos terraform show vpc -s xxxx --json
and it’s working ok
--
not -
which version of atmos are you running?
1.35.0
I’m running 1.34.1
I did not see in the docs if the additional `-` was required
it works for atmos terraform show vpc -s pepe --json
but it does not work for atmos terraform show pepe/vpc -s pepe --json
I was trying it on a component called ecs-service/pepe
so there is something about the name of the component
when it contains /
hmm
just ran
atmos terraform show eks/cluster -s <stack> --json
it’s ok
and this is weird too atmos terraform show vpc -s pepe --json -no-color
I can’t add --no-color
I understand that it does not like -json
with a single -
and i know why
only the first one needs the --
?
yes, this works
atmos terraform show eks/cluster -s <stack> --json -no-color
it does not work for me with the /
in any of the components
plan/apply/deploy no problem
does this work atmos terraform show vpc/xxx -s pepe
? (w/o any additional arguments)
yes, all those stacks/components have been deployed
the `show` subcommand with a component with `/` in the name, but w/o any additional arguments like `--json -no-color`
that works yes
ohhh I c wait
yes without additional options works
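Summarizing the flag behavior from this thread (my reading of the conversation, not official docs): only the first extra flag after the stack needs the double dash, while native Terraform flags that follow keep their single dash; a single-dash first flag gets mis-parsed as a positional argument, which is how `on` ended up being treated as the stack name:

```shell
# works: double-dash on the first extra flag, single dash after that
atmos terraform show eks/cluster -s <stack> --json -no-color

# fails: single-dash -json is mis-parsed, producing the
# "could not find config ... in the stack 'on'" error above
atmos terraform show ecs-service -s pepe -json
```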
2023-05-12
Anyone else seeing funkiness on https://atmos.tools/quick-start/introduction/? Chrome 112.0.5615.137 on macOS 11.7.6 (if that even matters in 2023)
@Zinovii Dmytriv (Cloud Posse)
Is it the purple box? I have never seen before
Yep, never seen anything like it
What happens in incognito mode?
Same in incognito. But then I opened dev tools to do some more digging and it disappeared. Makes it seem like a local issue, albeit a super weird one.
Yeah, can’t get it back now.
Go figure! Thanks for reporting. Will keep an eye out.
2023-05-13
2023-05-14
Hello, I’ve tried to do atmos tutorials and trying to experiment with it in my workload. I’m getting cryptic errors during terraform plan like
│ Error:
│ Searched all stack YAML files, but could not find config for the component 'account-map' in the stack 'gov-gbl-root'.
│ Check that all variables in the stack name pattern '{tenant}-{environment}-{stage}' are correctly defined in the stack config files.
│ Are the component and stack names correct? Did you forget an import?
Is there a guide where there is a working example with specific geodesic version (latest geodesic image doesn’t seem to include atmos) that I simply could modify and build upon?
Hrmmm are you using the tutorial here? https://github.com/cloudposse/tutorials/blob/main/Dockerfile
ARG VERSION=latest
ARG OS=debian
ARG CLI_NAME=tutorials
ARG TF_1_VERSION=1.3.0
ARG ATMOS_VERSION=1.16.0
FROM cloudposse/geodesic:$VERSION-$OS

# Install ubuntu universe repo so we can install more helpful packages
RUN apt-get install -y software-properties-common && \
    add-apt-repository "deb http://archive.ubuntu.com/ubuntu bionic universe" && \
    gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv 3B4FE6ACC0B21F32 && \
    gpg --export --armor 3B4FE6ACC0B21F32 | apt-key add - && \
    apt-get update && \
    apt-get install -y golang-petname

ARG TF_1_VERSION
# Install terraform.
RUN apt-get update && apt-get install -y -u --allow-downgrades \
    terraform-1="${TF_1_VERSION}-*" && \
    update-alternatives --set terraform /usr/share/terraform/1/bin/terraform

# Install Atmos
ARG ATMOS_VERSION
RUN apt-get install -y --allow-downgrades \
    atmos="${ATMOS_VERSION}-*" \
    vendir

# Geodesic message of the Day
ENV MOTD_URL=""
ENV AWS_VAULT_BACKEND=file
ENV DOCKER_IMAGE="cloudposse/tutorials"
ENV DOCKER_TAG="latest"

# Geodesic banner
ENV BANNER="Tutorials"

COPY rootfs/ /
COPY ./ /tutorials
WORKDIR /
This has atmos pinned at a version
the best atmos tutorial is https://atmos.tools/category/quick-start
Take 20 minutes to learn the most important atmos concepts.
it has all the parts including installing atmos into a container (it’s one line of code), configuring atmos.yaml, and all the stacks
the error
Searched all stack YAML files, but could not find config for the component 'account-map' in the stack 'gov-gbl-root'.
must be because you don’t have the account-map component defined in the gov-gbl-root stack in any of the YAML files
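To illustrate, a minimal sketch of what defining that component in a stack config could look like (the file path and values here are assumptions inferred from the stack name pattern `{tenant}-{environment}-{stage}` in the error, not from the user's repo):

```yaml
# Hypothetical stack config for the gov-gbl-root stack,
# e.g. stacks/gov/gbl/root.yaml (path and values are assumptions)
vars:
  tenant: gov
  environment: gbl
  stage: root

components:
  terraform:
    account-map:
      vars: {}  # component inputs would go here
```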
2023-05-23
is there any live coding “from scratch” of atmos files? the help video in the main doc is kinda unhelpful in that regard, as it only shows all of those tasty files without ever telling HOW they’re built or written. in other words, I’m specifically looking for a video (or an office hour) that shows the process, from scratch, of creating atmos files, picking a TF module and a helm chart, and going all the way from 0 to plan/deploy
oh wait.. maybe the answer above has a pointer
I think I went through that tutorial already, but I’ll look again
@Lele please look at https://atmos.tools/category/quick-start
Take 20 minutes to learn the most important atmos concepts.
yeah that’s what I’m looking now
it describes (some of) the components and stacks that are defined here https://github.com/cloudposse/atmos/tree/master/examples/complete
(the repo has more stuff for testing and debugging, and I agree it’s not clear what to use, we should improve that)
there’s also a couple of comments and improvements that I’d want to suggest, like it’s not great to suggest mounting $HOME to /localhost when it’s enough to do
-v $HOME/.aws:/localhost/.aws
to make aws happy
also, if you need any help with setting that up, let us know
there’s also a couple comments and improvements that I’d want to suggest, like it’s not great to suggest to mount $HOME to /localhost when it’s enough to do
is there any other reason to mount the entire homedir to localhost?
that’s not Atmos related per se, that’s https://github.com/cloudposse/geodesic (if you use geodesic as your container)
Geodesic is a DevOps Linux Toolbox in Docker. We use it as an interactive cloud automation shell. It’s the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/
ah yes true
I think I found it in the docs somewhere and in some videos
let’s discuss everything related to geodesic in #geodesic
Atmos and geodesic are diff products
sure
they can (and are) used together, or you can use one w/o the other
yeah, in fact I’m using atmos through the nixpkgs package
also atmos is not available in the bare latest geodesic, as pointed out in the other comment, so I guess it all goes full circle
# Geodesic: https://github.com/cloudposse/geodesic/
ARG GEODESIC_VERSION=2.1.3
ARG GEODESIC_OS=debian
# atmos: https://github.com/cloudposse/atmos
ARG ATMOS_VERSION=1.34.1
# Terraform: https://github.com/hashicorp/terraform/releases
ARG TF_VERSION=1.4.5
FROM cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}

# Geodesic message of the Day
ENV MOTD_URL="https://geodesic.sh/motd"
# Geodesic banner message
ENV BANNER="atmos"
ENV DOCKER_IMAGE="cloudposse/atmos"
ENV DOCKER_TAG="latest"

# Some configuration options for Geodesic
ENV AWS_SAML2AWS_ENABLED=false
ENV AWS_VAULT_ENABLED=false
ENV AWS_VAULT_SERVER_ENABLED=false
ENV GEODESIC_TF_PROMPT_ACTIVE=false
ENV DIRENV_ENABLED=false
ENV NAMESPACE="cp"

# Enable advanced AWS assume role chaining for tools using AWS SDK
# https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
ENV AWS_SDK_LOAD_CONFIG=1
ENV AWS_DEFAULT_REGION=us-east-2

# Install specific version of Terraform
ARG TF_VERSION
RUN apt-get update && apt-get install -y -u --allow-downgrades \
    terraform-1="${TF_VERSION}-*" && \
    update-alternatives --set terraform /usr/share/terraform/1/bin/terraform

# Install atmos
ARG ATMOS_VERSION
RUN apt-get update && apt-get install -y --allow-downgrades atmos="${ATMOS_VERSION}-*"

COPY rootfs/ /
WORKDIR /
installing Atmos is a few lines of code
Atmos changes very often (new releases often weekly), so if we bundle Atmos with geodesic, you will have some old version and will want to install a newer one anyway
interesting, I’ll check the changelog
I guess it’s just a bit confusing to say that geodesic is suggested to keep the environment consistent and everything is bundled in there (aws, terraform, …), but atmos is not
Haha, yes - I see the irony in there. Also, for the longest time we shipped terragrunt with it but didn’t use it. We’ve been trying to make our image more lean, and ensure it’s not too cloudposse specific. That said, maybe it’s something we should reconsider.
2023-05-25
Hello, new to atmos and going through the tutorial. I can’t find any info on setting up the backend / state bucket. I found the command atmos terraform generate backends, but it creates an empty backend.tf.json file.
the backend needs to be configured in the defaults for all stacks, see https://github.com/cloudposse/atmos/blob/master/examples/complete/stacks/orgs/cp/_defaults.yaml#L7 as an example
backend_type: s3 # s3, remote, vault, static, azurerm, etc.
sorry we don’t have that described in the docs, we will add it
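Until that lands in the docs, here is a minimal sketch of what those backend defaults might look like in a stack `_defaults.yaml` (the bucket, table, and region names are placeholders, not values from the linked example):

```yaml
# Backend defaults in a _defaults.yaml imported by all stacks (sketch)
terraform:
  backend_type: s3  # s3, remote, vault, static, azurerm, etc.
  backend:
    s3:
      bucket: "my-org-tfstate"               # placeholder
      dynamodb_table: "my-org-tfstate-lock"  # placeholder
      key: "terraform.tfstate"
      region: "us-east-2"                    # placeholder
      encrypt: true
```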
@Gabriela Campana (Cloud Posse) has joined the channel
2023-05-26
2023-05-30
hi team! I’m taking my first steps into Atmos with github actions and I am running into an error where it tells me my terraform workspace already exists when trying to do an atmos terraform plan
This same step works on my local machine, so @jose.amengual recommended i reach out here
he is using the new github atmos actions
can you post your workflow John ?
maybe @Andriy Knysh (Cloud Posse) might have an idea of what is happening
name: Pull Request Workflow
on: [pull_request]

permissions:
  id-token: write
  contents: read

env:
  AWS_ROLE: xxxxx
  AWS_REGION: us-west-2

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform fmt -check

  get-affected-stacks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup atmos
        uses: cloudposse/[email protected]
        with:
          install-wrapper: true
      - id: affected
        name: Get affected stacks
        uses: cloudposse/[email protected]
    outputs:
      affected: ${{ steps.affected.outputs.affected }}
      matrix: ${{ steps.affected.outputs.matrix }}

  terraform-plan:
    runs-on: ubuntu-latest
    needs: get-affected-stacks
    strategy:
      matrix: ${{ fromJson(needs.get-affected-stacks.outputs.matrix) }}
      fail-fast: true
      max-parallel: 1
    steps:
      - uses: actions/checkout@v3
      - name: Setup atmos
        uses: cloudposse/[email protected]
        with:
          install-wrapper: true
      - name: Assume AWS Role
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.AWS_ROLE }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Atmos Plan
        run: atmos terraform plan ${{matrix.component}} -s ${{matrix.stack}}
what TF workspace is selected when you execute atmos terraform plan locally? And what TF workspace is selected when it gets executed in the GH action?
locally
atmos terraform plan oidc -s platformlive-dev-uw2
...
Switched to workspace "platformlive-dev-uw2".
github
atmos terraform plan oidc -s platformlive-dev-uw2
...
Workspace "platformlive-dev-uw2" already exists
exit status 1
so it has nothing to do with atmos configuration, in both cases the same workspace gets selected
not sure i know why this workspace already exists in GH action
@Matt Calhoun did you see something like that when you worked with the github actions?
under what circumstances does atmos choose to create a workspace rather than switch to an existing one?
it always executes workspace select first. If there is an error from TF that the workspace does not exist, it executes workspace new
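That select-then-create flow can be sketched as follows (an illustration of the behavior described in this thread, not the actual Atmos source; a stub `terraform` function stands in for the real binary so the logic is runnable without Terraform installed):

```shell
#!/bin/sh
# Workspaces that "exist" in the stub backend
EXISTING="default platformlive-dev-uw2"

# Stub standing in for the real terraform binary (assumption, for illustration):
# "workspace select <ws>" fails unless <ws> is in EXISTING;
# "workspace new <ws>" adds it.
terraform() {
  sub="$2"; ws="$3"
  case "$sub" in
    select) echo "$EXISTING" | grep -qw -- "$ws" || return 1 ;;
    new)    EXISTING="$EXISTING $ws" ;;
  esac
}

# The flow described above: try `workspace select` first;
# only if that errors, fall back to `workspace new`.
select_or_create() {
  terraform workspace select "$1" 2>/dev/null \
    || terraform workspace new "$1"
}

select_or_create platformlive-dev-uw2 && echo "selected existing workspace"
select_or_create some-new-stack && echo "created new workspace"
```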
@Matt Calhoun was working on the GH action, he might have more insight into it
Hmm…the issue I saw with atmos and workspaces was early on atmos was exiting with an error when we created a workspace that didn’t exist. We corrected that issue in atmos v1.29.0, though.
Are you sure the assume role step is succeeding and the role you’ve assumed has rights to read/write the state bucket and the dynamodb table? Can you post the output of the assume role step as well as the atmos terraform plan step?
thanks for jumping in Matt & Andriy, i really appreciate it. Here is the output of get-caller-identity
Run aws sts get-caller-identity
{
"UserId": "AROA6GZP2L3HCTKSNPMAO:GitHubActions",
"Account": "***",
"Arn": "arn:aws:sts::***:assumed-role/platform-live-github-deployer-role/GitHubActions"
}
Whilst we’re debugging, this role has been granted the AdministratorAccess
policy.
and here is what i see with atmos terraform plan
Run atmos terraform plan oidc -s platformlive-staging-ue2
path: /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos-bin
/home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos-bin terraform plan oidc -s platformlive-staging-ue2
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v5.0.1...
- Installed hashicorp/aws v5.0.1 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Workspace "platformlive-staging-ue2" already exists
exit status 1
Error: Atmos exited with code 1.
Error: Process completed with exit code 1.
i turned on TF_LOG=DEBUG and found something interesting. It appears to be looking for the wrong statefile during init
looking at the flow of requests to S3, it does the following
2023-05-31T16:51:16.585Z [DEBUG] [aws-sdk-go] DEBUG: Request s3/ListObjects Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /?max-keys=1000&prefix=oidc%2F HTTP/1.1
Host: platformlive-dev-global-remote-state.s3.us-west-2.amazonaws.com
...
finds an object oidc/platformlive-dev-uw2/terraform.tfstate
it does this twice, for some reason
then it does a get object on terraform.tfstate with no prefix, and gets a 404 because it doesn’t exist
2023-05-31T16:51:17.610Z [DEBUG] [aws-sdk-go] DEBUG: Request s3/GetObject Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /terraform.tfstate HTTP/1.1
Host: platformlive-dev-global-remote-state.s3.us-west-2.amazonaws.com
...
2023-05-31T16:51:17.979Z [DEBUG] [aws-sdk-go] DEBUG: Response s3/GetObject Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 404 Not Found
we are still struggling with this:
Command info:
Terraform binary: terraform
Terraform command: workspace
Arguments and flags: []
Component: oidc
Stack: platformlive-dev-uw2
Working dir: components/terraform/oidc
Executing command:
/usr/local/bin/terraform workspace new platformlive-dev-uw2
Workspace "platformlive-dev-uw2" already exists
exit status 1
Error: Atmos exited with code 1.
Error: Process completed with exit code 1.
s3 access and such is all good
This is a component that has never been applied before
I wonder if the describe affected will create the component since it does not exist, and then when we do atmos terraform plan it tries to do it again because it was not applied?
Command info:
Terraform binary: terraform
Terraform command: workspace list
Arguments and flags: []
Component: oidc
Stack: platformlive-dev-uw2
Working dir: components/terraform/oidc
Executing command:
/usr/local/bin/terraform workspace list
* default
platformlive-dev-uw2
why is atmos trying to create the existing workspace? and this does not happen on the local machine
so to make this work I just did
atmos terraform workspace delete oidc -s platformlive-dev-uw2
and after that I was able to run plan
I downgraded TF, the AWS provider etc
somehow this worked
so in the action this happens :
Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: oidc
Stack: platformlive-dev-uw2
Working dir: components/terraform/oidc
Executing command:
/usr/local/bin/terraform workspace new platformlive-dev-uw2
Workspace "platformlive-dev-uw2" already exists
in my local from the same branch and repo :
Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: oidc
Stack: platformlive-dev-uw2
Working dir: components/terraform/oidc
Executing command:
/usr/local/bin/terraform workspace select platformlive-dev-uw2
Executing command:
@Andriy Knysh (Cloud Posse) how does atmos decide to use select or new?
Atmos first runs workspace select; if there is an error from TF (the standard error saying that the workspace does not exist and you need to run new to create one), then Atmos runs workspace new
there is something diff in your action. Looking at the code above that you posted,
Executing command:
/usr/local/bin/terraform workspace new platformlive-dev-uw2
gets called even before workspace select
I guess it’s the combination of diff commands that you are calling from the action
try to simplify it
try to use just atmos terraform plan
w/o using affected
what you need to see is
/usr/local/bin/terraform workspace new
/usr/local/bin/terraform workspace select
try to simplify the action and check it step by step, otherwise you have too many diff things in there and won’t know what causes the issues
on your local computer you just run one command atmos terraform plan
try to run JUST this command in the action and see what happens
if it’s still the same error, then the issue is with TF somehow creating that workspace already (I doubt it)
second step, add another step to the action, e.g. to describe affected before running terraform plan
and see if that new step affects the terraform plan
step
I have not seen that error before (but I did not work with Atmos GH actions much), so I don’t have an idea now why it’s happening in the GH action. All I see is that in the action you execute more commands than on your local computer, and then you are trying to compare those two, but they are diff
divide the problem into smaller pieces as I explained above. You’ll be able to figure out which previous step in your GH action affects the last terraform plan
step
I guess it has nothing to do with the last atmos terraform plan step
looks like something is calling the atmos terraform workspace new command in a prev step
Executing command:
/usr/local/bin/terraform workspace new platformlive-dev-uw2
w/o first selecting the workspace
Anyway, try to run just the last terraform plan
step in your GH action and see what happens. also, what Atmos version are you running there?
that output I pasted is from just running atmos terraform plan, without the affected
that’s a mystery
please show the GH action with only one step terraform plan
which you tested
this is what I have
- name: Atmos Plan
  run: |
    export ATMOS_LOGS_LEVEL=Trace
    aws sts get-caller-identity
    terraform version
    # echo ${{matrix.component}}
    pwd
    ls -l
    ls -lR components/terraform/oidc/
    #atmos terraform generate backend oidc -s platformlive-dev-uw2
    #cat components/terraform/oidc/backend.tf.json
    #atmos terraform workspace list oidc -s platformlive-dev-uw2
    #atmos terraform workspace select default oidc -s platformlive-dev-uw2
    # atmos terraform plan ${{matrix.component}} -s platformlive-dev-uw2
    #atmos terraform clean oidc -s platformlive-dev-uw2
    atmos terraform workspace select oidc -s platformlive-dev-uw2
    #atmos terraform workspace delete oidc -s platformlive-dev-uw2
    #export TF_LOG=TRACE
    #env
    atmos terraform plan oidc -s platformlive-dev-uw2
just so you see how much debugging we have done
in that version I was trying to force the select
atmos terraform workspace select oidc -s platformlive-dev-uw2
atmos terraform plan oidc -s platformlive-dev-uw2
those are the commands I was running on the last try
why do you need atmos terraform workspace select, since atmos terraform plan will select it?
I was just trying more stuff
ok, so let’s say this is your GH action now
name: Atmos Plan
run: |
  export ATMOS_LOGS_LEVEL=Trace
  aws sts get-caller-identity
  terraform version
  # echo ${{matrix.component}}
  pwd
  ls -l
  ls -lR components/terraform/oidc/
  atmos terraform plan oidc -s platformlive-dev-uw2
that is the last run just now, with only that command
what’s the Atmos version and what’s the output from the action step?
yea, I see the output
it’s a mystery why it does not call workspace select
I know , this works fine in my computer and a bitbucket pipeline
Run cloudposse/[email protected]
Setup atmos version spec 1.34.0
Attempting to download 1.34.0...
Installing version v1.34.0 from GitHub
Acquiring v1.34.0 from https://github.com/cloudposse/atmos/releases/download/v1.34.0/atmos_1.34.0_linux_amd64
Renaming downloaded file...
Successfully renamed atmos from /home/runner/work/_temp/d5040bb7-2eff-4d9f-b07b-812ff202372a to /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos-bin
Installing wrapper script from /home/runner/work/_actions/cloudposse/github-action-setup-atmos/1.0.1/dist/wrapper/index.js to /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos.
Successfully installed atmos to /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos
Successfully set up Atmos version 1.34.0 in /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos
I downgraded to 1.34.0
downgraded tf to 1.4.6
downgraded aws provider to 4.6.x
same result?
the output you see is the result of that
I have no idea what else to try
ok, all of that is very strange and I did not see it before
i’ll look into Atmos to see if I can find anything related to it
this is what I have now, I cleaned up a bit
steps:
  - name: Checkout
    uses: actions/checkout@v3
  - name: Setup TF
    uses: hashicorp/setup-terraform@v2
    with:
      terraform_version: 1.4.6
      cli_config_credentials_token: ""
  - name: Setup atmos
    uses: cloudposse/[email protected]
    with:
      install-wrapper: true
      atmos-version: 1.34.0
  - name: Assume AWS Role
    uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: ${{ env.AWS_ROLE }}
      aws-region: ${{ env.AWS_REGION }}
  - name: Atmos Plan
    run: |
      export ATMOS_LOGS_LEVEL=Trace
      aws sts get-caller-identity
      atmos terraform plan oidc -s platformlive-dev-uw2
Did you run atmos terraform plan from a GH action before and was it working? Or is it always this issue for you in any GH action?
This is the first time I have ever run atmos plan in a github action
previously I ran atmos to generate the atlantis config, backends, etc. to work with an Atlantis pipeline
and this same project runs on bitbucket but there we have to install and do all the describe affected stuff in steps manually
ok
i’ll review all of this today
thanks Andriy, this is very strange for us
so we just changed to this now:
steps:
  - name: Checkout
    uses: actions/checkout@v3
  - name: Setup TF
    uses: hashicorp/setup-terraform@v2
    with:
      terraform_version: 1.4.6
      cli_config_credentials_token: ""
  - name: Setup atmos
    uses: cloudposse/[email protected]
    with:
      install-wrapper: true
      atmos-version: 1.34.0
  - name: Assume AWS Role
    uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: ${{ env.AWS_ROLE }}
      aws-region: ${{ env.AWS_REGION }}
  - name: Generate Files
    run: |
      atmos terraform generate backend oidc -s platformlive-dev-uw2
      atmos terraform generate varfile oidc -s platformlive-dev-uw2
  - name: Terraform Plan
    run: |
      cd components/terraform/oidc
      terraform init -reconfigure
      terraform workspace select platformlive-dev-uw2
      terraform plan --var-file=platformlive-dev-uw2-oidc.terraform.tfvars.json
it works just fine
@Andriy Knysh (Cloud Posse) @matt I think I figured it out
if we run :
- name: Atmos Plan
  run: |
    export ATMOS_LOGS_LEVEL=Trace
    wget -q https://github.com/cloudposse/atmos/releases/download/v${ATMOS_VERSION}/atmos_${ATMOS_VERSION}_linux_amd64 && \
    mv atmos_${ATMOS_VERSION}_linux_amd64 /usr/local/bin/atmos && \
    chmod +x /usr/local/bin/atmos
    atmos terraform plan oidc -s platformlive-dev-uw2
that works fine
if we use the atmos setup action to install it, it does not:
- name: Setup atmos
  uses: cloudposse/[email protected]
  with:
    install-wrapper: true
    atmos-version: 1.35.0
ok, I just tested it like this:
- name: Setup atmos
  uses: cloudposse/[email protected]
  with:
    install-wrapper: false
    atmos-version: 1.35.0
and it works
so the wrapper is doing something wonky
Describe the Bug
When:
- name: Setup atmos
  uses: cloudposse/[email protected]
  with:
    install-wrapper: true
the atmos plan command will fail due to the wrapper catching the output of the atmos command and somehow making atmos believe the workspace does not exist.
Expected Behavior
The wrapper should not cause the atmos plan command to fail.
Steps to Reproduce
This github action fails :
steps:
  - name: Checkout
    uses: actions/checkout@v3
  - name: Setup TF
    uses: hashicorp/setup-terraform@v2
    with:
      terraform_version: 1.4.6
      cli_config_credentials_token: ""
  - name: Setup atmos
    uses: cloudposse/[email protected]
    with:
      install-wrapper: true
      atmos-version: 1.34.0
  - name: Assume AWS Role
    uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: ${{ env.AWS_ROLE }}
      aws-region: ${{ env.AWS_REGION }}
  - name: Atmos Plan
    run: |
      export ATMOS_LOGS_LEVEL=Trace
      aws sts get-caller-identity
      atmos terraform plan oidc -s platformlive-dev-uw2
ERROR:
Terraform v1.4.6
on linux_amd64
/home/runner/work/platform-live-aws-infrastructure/platform-live-aws-infrastructure
total 40
-rw-r--r-- 1 runner docker 565 Jun 1 15:37 README.md
-rw-r--r-- 1 runner docker 8515 Jun 1 15:37 atmos.yaml
-rw-r--r-- 1 runner docker 4960 Jun 1 15:37 bitbucket-pipelines.yml
drwxr-xr-x 3 runner docker 4096 Jun 1 15:37 components
drwxr-xr-x 3 runner docker 4096 Jun 1 15:37 docs
-rw-r--r-- 1 runner docker 831 Jun 1 15:37 plan.txt
drwxr-xr-x 4 runner docker 4096 Jun 1 15:37 stacks
components/terraform/oidc/:
total 16
-rw-r--r-- 1 runner docker 840 Jun 1 15:37 main.tf
-rw-r--r-- 1 runner docker 203 Jun 1 15:37 providers.tf
-rw-r--r-- 1 runner docker 392 Jun 1 15:37 variables.tf
-rw-r--r-- 1 runner docker 150 Jun 1 15:37 versions.tf
path: /home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos-bin
/home/runner/work/_actions/cloudposse/github-action-setup-atmos/atmos/atmos-bin terraform plan oidc -s platformlive-dev-uw2
Found stack config files:
- _global.yaml
- aws/account-globals.yaml
- aws/nonprod/dev/globals.yaml
- aws/nonprod/dev/us-west-2.yaml
- aws/nonprod/staging/globals.yaml
- aws/nonprod/staging/us-east-2.yaml
- aws/prod/globals.yaml
- aws/prod/us-west-2.yaml
- workflows/aws-core-infra-workflow.yaml
Found config for the component 'oidc' for the stack 'platformlive-dev-uw2' in the stack config file 'aws/nonprod/dev/us-west-2'
Variables for the component 'oidc' in the stack 'platformlive-dev-uw2':
account_map:
automation: null
dev: ***
prod: *****
staging: ***
client_id_list:
- ****
environment: dev
idp_url: ****
namespace: platformlive
region: us-west-2
stage: uw2
tags:
product: Platform Live
thumbprint_list:
- ******
Writing the variables to file:
components/terraform/oidc/platformlive-dev-uw2-oidc.terraform.tfvars.json
Writing the backend config to file:
components/terraform/oidc/backend.tf.json
Executing command:
/usr/local/bin/terraform init -reconfigure
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.0"...
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Command info:
Terraform binary: terraform
Terraform command: plan
Arguments and flags: []
Component: oidc
Stack: platformlive-dev-uw2
Working dir: components/terraform/oidc
Executing command:
/usr/local/bin/terraform workspace new platformlive-dev-uw2
Workspace "platformlive-dev-uw2" already exists
exit status 1
Error: Atmos exited with code 1.
Error: Process completed with exit code 1.
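As an aside (not suggested in the thread), the `Workspace "platformlive-dev-uw2" already exists` failure comes from running `terraform workspace new` unconditionally; a common guard is select-or-create, i.e. `terraform workspace select "$ws" || terraform workspace new "$ws"`. A runnable sketch of that short-circuit pattern, with stub functions standing in for terraform (which is not invoked here):

```shell
# Select-or-create sketch: only create the workspace when select fails.
# select_ws/new_ws are hypothetical stand-ins for the terraform commands.
ws="platformlive-dev-uw2"
select_ws() { false; }            # pretend "select" fails: workspace missing
new_ws()    { echo "created $1"; }
select_ws "$ws" || new_ws "$ws"   # falls through to create
# prints: created platformlive-dev-uw2
```

Recent Terraform versions also have `terraform workspace select -or-create` for the same purpose.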
Anything that will help us triage the bug will help. Here are some ideas:
- Github action runner

Additional Context:
https://sweetops.slack.com/archives/C031919U8A0/p1685484685655049
i’m not familiar with what install-wrapper does, will review this with Matt
thanks for this feedback @jose.amengual, I was able to successfully run atmos plan with the wrapper = false. That said, how can I now post the outputs in a Github PR as a comment?
I think cloudposse has another action for this? but I believe I used the dflook github actions in one of these implementations
thank you Pepe, I will try to search around for that
i think I have some code I can share a bit later
that would be great, thank you!
we did something like this:
get-affected-stacks:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - name: Setup atmos
      uses: cloudposse/[email protected]
      with:
        install-wrapper: true
    - id: affected
      name: Get affected stacks
      uses: cloudposse/[email protected]
  outputs:
    affected: ${{ steps.affected.outputs.affected }}
    matrix: ${{ steps.affected.outputs.matrix }}
using one of the cloudposse actions
and then:
- name: Atmos Plan
  id: plan
  run: |
    export TF_CLI_ARGS_plan="-no-color -compact-warnings"
    export TF_IN_AUTOMATION="true"
    aws sts get-caller-identity
    atmos terraform plan ${{ matrix.component }} -s ${{ matrix.stack }}
- name: Format plan
  id: format-plan
  run: |
    plan=$(cat <<'EOF'
    ${{ format('{0}{1}', steps.plan.outputs.stdout, steps.plan.outputs.stderr) }}
    EOF
    )
    echo "formatted_plan<<EOF" >> $GITHUB_ENV
    echo "${plan:0:65536}" >> $GITHUB_ENV
    echo "EOF" >> $GITHUB_ENV
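The multiline trick in that Format plan step (the `formatted_plan<<EOF` lines) is GitHub's `name<<DELIMITER` syntax for multiline values in `GITHUB_ENV`, and the `${plan:0:65536}` slice is there because GitHub comment bodies are capped at 65536 characters. It can be tried locally by pointing `GITHUB_ENV` at a temp file (a sketch with a stand-in plan string, not CI output):

```shell
# Stand-in for the GitHub-provided env file (assumption: running locally).
export GITHUB_ENV="$(mktemp)"

plan="Plan: 3 to add, 0 to change, 0 to destroy."

# Multiline env values use the name<<DELIMITER syntax; bash's ${var:0:N}
# substring expansion truncates oversized plans to the 65536-char limit.
echo "formatted_plan<<EOF" >> "$GITHUB_ENV"
echo "${plan:0:65536}" >> "$GITHUB_ENV"
echo "EOF" >> "$GITHUB_ENV"

cat "$GITHUB_ENV"
```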
and after that a github script to push that into the comment
I do not recommend doing it this way since it just does not look that nice
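For reference, that last github-script step could look roughly like this (a sketch, not the exact code used in the thread; the step name is made up, and it assumes the formatted_plan value exported via GITHUB_ENV by the Format plan step above):

```yaml
- name: Post plan to PR
  uses: actions/github-script@v6
  with:
    script: |
      // formatted_plan was written to GITHUB_ENV, so it is visible
      // to this step as a regular environment variable
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: `#### Atmos Plan\n\`\`\`\n${process.env.formatted_plan}\n\`\`\``,
      });
```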
most of the code we used is from here: https://github.com/hashicorp-education/learn-terraform-github-actions/blob/main/.github/workflows/terraform-plan.yml
name: "Terraform Plan"

on:
  pull_request:

env:
  TF_CLOUD_ORGANIZATION: "YOUR-ORGANIZATION-HERE"
  TF_API_TOKEN: "${{ secrets.TF_API_TOKEN }}"
  TF_WORKSPACE: "learn-terraform-github-actions"
  CONFIG_DIRECTORY: "./"

jobs:
  terraform:
    if: github.repository != 'hashicorp-education/learn-terraform-github-actions'
    name: "Terraform Plan"
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Upload Configuration
        uses: hashicorp/tfc-workflows-github/actions/[email protected]
        id: plan-upload
        with:
          workspace: ${{ env.TF_WORKSPACE }}
          directory: ${{ env.CONFIG_DIRECTORY }}
          speculative: true

      - name: Create Plan Run
        uses: hashicorp/tfc-workflows-github/actions/[email protected]
        id: plan-run
        with:
          workspace: ${{ env.TF_WORKSPACE }}
          configuration_version: ${{ steps.plan-upload.outputs.configuration_version_id }}
          plan_only: true

      - name: Get Plan Output
        uses: hashicorp/tfc-workflows-github/actions/[email protected]
        id: plan-output
        with:
          plan: ${{ fromJSON(steps.plan-run.outputs.payload).data.relationships.plan.data.id }}

      - name: Update PR
        uses: actions/github-script@v6
        id: plan-comment
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            // 1. Retrieve existing bot comments for the PR
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            });
            const botComment = comments.find(comment => {
              return comment.user.type === 'Bot' && comment.body.includes('Terraform Cloud Plan Output')
            });
            const output = `#### Terraform Cloud Plan Output
            \`\`\`
            Plan: ${{ steps.plan-output.outputs.add }} to add, ${{ steps.plan-output.outputs.change }} to change, ${{ steps.plan-output.outputs.destroy }} to destroy.
            \`\`\`
            [Terraform Cloud Plan](${{ steps.plan-run.outputs.run_link }})
            `;
            // 3. Delete previous comment so PR timeline makes sense
            if (botComment) {
              github.rest.issues.deleteComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: botComment.id,
              });
            }
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });
thank you Pepe, I will give it a try on the format-plan and post-plan parts