#spacelift (2022-03)
2022-03-01
SRE vs. DevOps: What’s the Difference Between Them? What does DevOps do? And what about SRE? See our article and find out the differences between DevOps and SRE.
2022-03-03
What are Terraform Templates? Basics, Use Cases, Examples What are Terraform Templates? What are Terraform Templates used for? See examples and use cases.
2022-03-07
@Andriy Knysh (Cloud Posse) I have a spacelift stack (super generic one) that uses a simple module to create an s3 bucket. Then I added atmos.yaml and stack yaml files from example/tutorial. Spacelift seems to understand there is a stack to create. However, the s3 module code to create the bucket does not seem to get invoked. Is there something needed (in either spacelift or my code) to invoke “atmos terraform plan…” or similar? I’m having trouble understanding how the s3 module code will get invoked.
it’s a few-step process:
- Create a PR to add the Spacelift stack(s)
- Merge the PR and confirm it in Spacelift - it will create the stack itself
- Open a PR to add TF components (with spacelift.workspace_enabled: true). The terraform plan from that PR will be shown in the “PRs” tab of the Spacelift stack
- After the PR is merged, Spacelift will create a tracked run, which you can confirm to create the resources
in short, a Spacelift stack is not the same as the resources that the stack creates
the Spacelift stack must be created first
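A sketch of where that spacelift.workspace_enabled: true setting lives in an atmos stack YAML, following the Cloud Posse settings.spacelift convention (the component name infra/vpc and the vars here are illustrative):

```yaml
components:
  terraform:
    infra/vpc:
      settings:
        spacelift:
          # tells the automation module to create a Spacelift stack for this component
          workspace_enabled: true
      vars:
        # illustrative component input
        cidr_block: 10.0.0.0/16
```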
I think that’s what I have, but I will verify. Thanks for the quick response!
I have the spacelift stack and tracked runs and PRs working correctly. My question seems to be more about “What makes adding atmos.yaml and the stack yaml files work?” I notice on the spacelift stack that if I invoke “atmos version” (using their “Tasks” feature), atmos is not found. What makes adding atmos.yaml perform the composition of the yaml-defined stack(s), including diving into ./components/terraform/myModule? Do I need to install atmos onto the spacelift instance somehow, or is it enough to add the cloudposse/utils provider? If the latter, I’m still not clear what magic causes the resolution of the stack yaml files into a full terraform stack to be executed. I’ve tried adding the “context.yaml” and “spacelift”/“stack” modules (to add policies, etc. to the spacelift stacks), but what I’m after is a bare-bones demo of adding atmos stack generation, and I can’t seem to grasp what triggers that feature.
in your repo, you should have a Dockerfile
ARG ATMOS_VERSION=1.3.29
# Install Atmos CLI (https://github.com/cloudposse/atmos)
ARG ATMOS_VERSION
RUN apt-get update && apt-get install -y atmos="${ATMOS_VERSION}-*"
then we build the image (and save it to ECR for example)
then each Spacelift stack is configured to use that image
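The build-and-push step might look like this (the account ID, region, and repository name are placeholders; ATMOS_VERSION matches the Dockerfile’s ARG):

```shell
# Build the image with atmos baked in, then push it to ECR so Spacelift
# runners can pull it. Account ID, region, and repo name are placeholders.
REGISTRY=111111111111.dkr.ecr.us-east-2.amazonaws.com
IMAGE="${REGISTRY}/infrastructure:latest"

# Authenticate docker to the private ECR registry
aws ecr get-login-password --region us-east-2 \
  | docker login --username AWS --password-stdin "${REGISTRY}"

# Build from the Dockerfile above, pinning the atmos version
docker build --build-arg ATMOS_VERSION=1.3.29 -t "${IMAGE}" .

# Publish; each Spacelift stack's runner image is then pointed at ${IMAGE}
docker push "${IMAGE}"
```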
That should make a huge difference! Thanks and I’ll give that a go.
every time Spacelift executes a run (or a task), it downloads the image (which has atmos)
if you want, open a PR and send to me (MD)
I’ll take a look and point out what is missing
Thank you, I’ll get back to this a bit later today and let you know how it’s going. Thanks for your support.
is there any public image with atmos already installed?
no, but there are public images for geodesic, and this Dockerfile to add atmos
https://github.com/cloudposse/atmos/blob/master/examples/complete/Dockerfile
# Geodesic: https://github.com/cloudposse/geodesic/
ARG GEODESIC_VERSION=0.152.2
ARG GEODESIC_OS=debian
# atmos: https://github.com/cloudposse/atmos
ARG ATMOS_VERSION=1.3.30
# Terraform
ARG TF_VERSION=1.1.4
FROM cloudposse/geodesic:${GEODESIC_VERSION}-${GEODESIC_OS}
# Geodesic message of the Day
ENV MOTD_URL="https://geodesic.sh/motd"
# Some configuration options for Geodesic
ENV AWS_SAML2AWS_ENABLED=false
ENV AWS_VAULT_ENABLED=false
ENV AWS_VAULT_SERVER_ENABLED=false
ENV GEODESIC_TF_PROMPT_ACTIVE=false
ENV DIRENV_ENABLED=false
# Enable advanced AWS assume role chaining for tools using AWS SDK
# https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
ENV AWS_SDK_LOAD_CONFIG=1
ENV AWS_DEFAULT_REGION=us-east-2
# Install specific version of Terraform
ARG TF_VERSION
RUN apt-get update && apt-get install -y -u --allow-downgrades \
terraform-1="${TF_VERSION}-*" && \
update-alternatives --set terraform /usr/share/terraform/1/bin/terraform
ARG ATMOS_VERSION
RUN apt-get update && apt-get install -y --allow-downgrades \
atmos="${ATMOS_VERSION}-*"
COPY rootfs/ /
# Geodesic banner message
ENV BANNER="atmos"
WORKDIR /
2022-03-08
@Andriy Knysh (Cloud Posse) after I create this and run the container I should be able to “atmos version”, correct? I get “atmos: command not found”. Seems odd but could you just confirm it’s working for you?
I run this Dockerfile (w/o any modifications) https://github.com/cloudposse/atmos/blob/master/examples/complete/Dockerfile
and then atmos version
all is ok
Connected.
✗ . [none] ~ ⨠ atmos version
v1.3.30
Getting close.
I now have spacelift pointed at this repo https://github.com/dfreeman-cricut/cautious-octo-robot copied from https://github.com/cloudposse/atmos/tree/master/examples/complete
with a working “atmos” image (atop geodesic)
and can run a spacelift “Task” such as atmos terraform plan infra/vpc -s tenant1-ue2-dev
and see output like it would be ready to produce the requested terraform artifacts
Question: What else is needed now to get spacelift to recognize these state changes and permit me to “Confirm” and “Apply”?
Tasks are one-off commands. To confirm/apply you need Spacelift to trigger a tracked run, which happens by default when you commit and push to the tracked branch
I ran the Task just to make sure atmos is working in my worker image.
Just now I created a new stack on spacelift and pointed it at my repo, and triggered a Run. I get “No changes. Your infrastructure matches the configuration.”
I’m missing somehow the part that would tell spacelift to use atmos when processing the repo.
Good question. I don’t know how that works sorry. I don’t think you can change the Terraform command that Spacelift runs…
there are a few more steps to set up atmos + Spacelift:
in the repo in .spacelift/config.yml
version: "1"
stack_defaults:
  before_init:
    - spacelift-configure-paths
    - spacelift-git-use-https
    - spacelift-write-vars
    - spacelift-tf-workspace
  before_plan:
    - spacelift-configure-paths
  before_apply:
    - spacelift-configure-paths
  environment:
    AWS_SDK_LOAD_CONFIG: true
    AWS_CONFIG_FILE: /etc/aws-config/aws-config-cicd
    AWS_PROFILE: <namespace>-gbl-identity
    ATMOS_BASE_PATH: /mnt/workspace/source
stacks:
  infrastructure:
    before_init: []
    before_plan: []
    before_apply: []
in rootfs/usr/local/bin/spacelift-configure-paths
#!/bin/bash
set -ex
# Link the default terraform binary to Spacelift's Terraform installation path of `/bin/terraform`.
# Because the Terraform commands are executed as just `terraform` by `atmos` (unless otherwise specified)
# and also in scripts, and the default PATH has `/usr/bin` before `/bin`,
# plain 'terraform' would otherwise resolve to the Docker container's
# chosen version of Terraform, not Spacelift's configured version.
ln -sfTv /bin/terraform /usr/bin/terraform
echo "Using Terraform: "
which terraform
terraform version
in rootfs/usr/local/bin/spacelift-git-use-https
#!/bin/bash
set -ex
# Spacelift uses a PAT via a .netrc file, so any git@github.com: urls need to be
# converted to HTTPS urls or spacelift will fail. This allows us to use SSH
# paths throughout the codebase so local plans work and allows spacelift to
# work.
# The URL "git@github.com:" is used by `git` (e.g. `git clone`)
git config --global url."https://github.com/".insteadOf "git@github.com:"
# The URL "ssh://git@github.com/" is used by Terraform (e.g. `terraform init --from-module=...`)
# NOTE: we use `--add` to append the second URL to the config file
git config --global --add url."https://github.com/".insteadOf "ssh://git@github.com/"
in rootfs/usr/local/bin/spacelift-tf-workspace
#!/bin/bash
set -ex
terraform init -reconfigure
echo "Selecting Terraform workspace..."
echo "...with AWS_PROFILE=$AWS_PROFILE"
echo "...with AWS_CONFIG_FILE=$AWS_CONFIG_FILE"
atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
in rootfs/usr/local/bin/spacelift-write-vars
#!/bin/bash

function main() {
if [[ -z $ATMOS_STACK ]] || [[ -z $ATMOS_COMPONENT ]]; then
echo "Missing required environment variable" >&2
echo " ATMOS_STACK=$ATMOS_STACK" >&2
echo " ATMOS_COMPONENT=$ATMOS_COMPONENT" >&2
return 3
fi
echo "Writing Stack variables to spacelift.auto.tfvars.json for Spacelift..."
atmos terraform write varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json >/dev/null
jq . <spacelift.auto.tfvars.json
}
main
this expects that you are using https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation to create the Spacelift stacks, since the ENV vars ATMOS_STACK and ATMOS_COMPONENT are set in that module
but you should get the idea and modify for your needs
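If you create the stacks some other way, the same two ENV vars can be supplied with the Spacelift Terraform provider’s spacelift_environment_variable resource. A sketch, with an illustrative stack ID and values:

```hcl
# Hypothetical sketch: set the ENV vars the before_init scripts expect
# on an existing Spacelift stack (stack_id and values are illustrative)
resource "spacelift_environment_variable" "atmos_stack" {
  stack_id   = "infra-vpc-tenant1-ue2-dev"
  name       = "ATMOS_STACK"
  value      = "tenant1-ue2-dev"
  write_only = false
}

resource "spacelift_environment_variable" "atmos_component" {
  stack_id   = "infra-vpc-tenant1-ue2-dev"
  name       = "ATMOS_COMPONENT"
  value      = "infra/vpc"
  write_only = false
}
```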
@Darrin F
regarding “What else is needed now to get spacelift to recognize these state changes and permit me to “Confirm” and “Apply”?” - if you use s3 backend and Spacelift has access to it, then the state should be in s3, and it should be accessible both locally and from Spacelift
for Spacelift to be able to access your s3 backend, it should be able to assume a corresponding IAM role
see the .spacelift/config.yml file above
AWS_CONFIG_FILE: /etc/aws-config/aws-config-cicd
AWS_PROFILE: <namespace>-gbl-identity
AWS_CONFIG_FILE has all the required profiles (not only for the current account, but all the profiles needed to access all the accounts you are provisioning resources into).
AWS_PROFILE is the name of the profile in the /etc/aws-config/aws-config-cicd file that defines an IAM role with all the required permissions to 1) provision the resources into the account; 2) access the s3 terraform state backend (which can be in the same account or in a different centralized account that stores TF state for all environments)
Thanks for the info Andriy. Seems there’s a bit more to it than I understood. I appreciate the info and I’ll give that a shot.
you don’t need to use all the scripts that I provided, just the general idea: 1) Spacelift needs to assume some IAM role with permissions to access AWS and the s3 backend; 2) Spacelift needs to call atmos terraform write varfile to parse the YAML config and write the varfile for the component in the stack; 3) the component and the stack are provided to Spacelift using the two ENV vars (but you can provide them in any other way as you see fit)
I’m trying to demo the bare-bones needs/concept to my team, so thank you for clarifying. I’ll try to make as few updates as possible for now to get it working.
Thanks for kicking in this week and helping me out. Got a cloudposse infra/vpc demo working on spacelift. Pretty slick stuff. I’ll jump back here when there are more questions…
2022-03-09
2022-03-11
2022-03-17
Had a terrific demo to my team with spacelift and cloudposse’s terraform modules/approach. Truly appreciate the help in this group; we’ll be in touch!
Oh cool! Glad to hear it. Keep us posted if you roll it out.
2022-03-18
Terraform vs. AWS CloudFormation: The Ultimate Comparison Terraform and CloudFormation are both tools for Infrastructure as Code (IaC). We are comparing the two of them and outlining some of the key differences.
2022-03-20
2022-03-21
Terraform Init – Command Overview with Quick Usage Examples What does terraform init command do? See examples and quick usages. Learn how to init your infrastructure with Terraform.
2022-03-23
What is Pulumi? Key Concepts and Features Overview In this article you’ll learn what Pulumi is, how it works and why it can be helpful for your Infrastructure as Code (IaC).
2022-03-28
Working with Ansible Playbooks – Tips & Tricks with Examples Playbooks are one of the basic components of Ansible. Learn how to use them and see detailed examples with best practices.