#spacelift (2022-02)
2022-02-01
@Erik Osterman (Cloud Posse) has joined the channel
@omry has joined the channel
@kevcube has joined the channel
@Paweł Hytry - Spacelift has joined the channel
@marcinw has joined the channel
@Michael Perez has joined the channel
@Taylor has joined the channel
@Andriy Knysh (Cloud Posse) has joined the channel
@Imran Hussain has joined the channel
@RB has joined the channel
@Andy Miguel has joined the channel
@Steven Hopkins has joined the channel
@Dan Meyers has joined the channel
@johncblandii has joined the channel
@Yonatan Koren has joined the channel
@Hugo Samayoa has joined the channel
@Leo Przybylski has joined the channel
@Lucky has joined the channel
@Maxim Mironenko (Cloud Posse) has joined the channel
@Ben Smith (Cloud Posse) has joined the channel
@Jeremy G (Cloud Posse) has joined the channel
@Max Lobur (Cloud Posse) has joined the channel
Spacelift has joined this channel by invitation from SweetOps.
2022-02-02
Receive webhooks from Spacelift.io and turn them into Slack messages - GitHub - alexjurkiewicz/spacelift-webhook-receiver: Receive webhooks from Spacelift.io and turn them into Slack messages
this repo needs more stars
I’m curious what is meant by this text at the bottom of the Pulumi vendors page [1]:
Spacelift module CI/CD isn’t currently available for Pulumi.
Is this referring to the terraform module dependency tracking/testing feature(s)?
Hey! Yes. For Terraform, Spacelift features a module registry for your private modules, as well as automated testing of module usage scenarios, version usage tracking, and pretty overviews (inputs, outputs, providers used).
We don’t have something like that for Pulumi.
From this article you can learn how Pulumi is integrated into Spacelift
Hey folks — For the spacelift-automation module — Is there a way to set global spacelift settings for a Stack? As in, if I wanted to add labels to all components in a stack is there a way to do something along the lines of the following?
settings:
  spacelift:
    labels:
      - example
components:
  terraform:
    example1:
      vars: {}
    example2:
      vars: {}
Where both of the above components will get the example label?
The settings: section is the same first-class section as vars, env, etc.
Ah so the above will work then? I’ll check that out. I hadn’t seen an example of it so I didn’t think it was a thing
it can be set at any level: globals, stage globals, terraform globals, base components, the current component
it gets deep-merged in the same order, so the component settings override base component(s), which override terraform settings, which override any global settings
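To make that concrete, here’s a minimal sketch of what that layering might look like in stack YAML (the file names, the example2 override, and the autodeploy key are hypothetical, just to illustrate the deep-merge); a settings.spacelift block defined in globals is merged into every component, and a component-level settings block is merged on top of it:
# globals.yaml (hypothetical)
settings:
  spacelift:
    labels:
      - example

# stack file importing the globals (hypothetical)
import:
  - globals
components:
  terraform:
    example1:
      vars: {}
    example2:
      settings:
        spacelift:
          autodeploy: true
      vars: {}
Here both example1 and example2 would inherit the example label from globals, and example2 would additionally get autodeploy: true from its own settings.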
(and for others follow along, @Matt Gowie’s question relates to our implementation for spacelift here: https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation)
Terraform module to provision Spacelift resources for cloud infrastructure automation - GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacel…
@Jim Park has joined the channel
I wrote this nice pattern in our terraform-spacelift-stack module to allow creating Spacelift hooks in source repos. Maybe others will find it useful:
locals {
  base_hook_string = "if test -d .spacelift/hooks/%s ; then echo ; find .spacelift/hooks/%s -type f -executable -print | while read -r hook ; do echo Running $hook ; $hook || exit $? ; done ; fi"
}
resource "spacelift_stack" "this" {
# ...
before_init = [format(local.base_hook_string, "before_init", "before_init")]
after_init = [format(local.base_hook_string, "after_init", "after_init")]
# etc
}
Then you can add hooks as executable files in your source repo under $project_root/.spacelift/hooks/after_apply/ (or any of the other phase directories).
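For example, a hypothetical hook script at .spacelift/hooks/before_init/00-show-versions.sh (name and contents made up for illustration) only needs to be committed with the executable bit set to be picked up by the wrapper above:
#!/usr/bin/env bash
# Hypothetical example hook: print the Terraform version before init.
# Any non-zero exit status aborts the phase via the wrapper command.
set -euo pipefail
terraform version
Remember to chmod +x the file, since the find invocation only matches executable files.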
Best Infrastructure as Code Tools (IaC) for DevOps More and more Engineering organizations are moving towards Infrastructure as Code (IaC). See our article and find out the best IaC tools available for your DevOps team.
fyi, this particular article is no longer visible
2022-02-08
I had a terraform destroy spacelift task fail because it was trying to destroy a backup vault that still had recovery points in it. I manually deleted the recovery points and then attempted to run destroy again. Unfortunately, I’m getting the error:
│ Error: Can't ask approval for state migration when interactive input is disabled.
│
│ Please remove the "-input=false" option and try again.
Any ideas how to fix this?
Hello @Isaac! Looks like it has something to do with needing interactive input for the task. Did you run terraform destroy or terraform destroy -auto-approve?
I ran terraform destroy -auto-approve.
Hmm, generally we will invoke terraform with -input=false because it is executed in the worker and there’s no way to pass interactive input there atm
So sounds like there’s no way to resolve this from within Spacelift? Is there a way for me to download the statefile and work on this from my machine then?
So, just a small correction: I double-checked, and what I said does not apply to tasks, which are passed directly to the worker. So for some reason running a terraform destroy -auto-approve task results in this error. Could you tell me what kind of resource it is? Maybe it is something returned from the tf provider?
Sure, the two resources left to destroy are aws_backup_vault and aws_kms_key.
2022-02-10
Sharing to the community:
spacectl can now be installed via Homebrew: brew install spacelift-io/spacelift/spacectl, and a new repo has been created: https://github.com/spacelift-io/homebrew-spacelift
Kudos to @ & @
2022-02-14
Hi. I’m fairly new to Spacelift, trying it out as a replacement for TF Cloud for our business. I was wondering if anybody here has used stacks with AWS integration using public workers. The thing I’m trying to wrap my head around is how to go about doing a PR that requires a terraform import as part of the release. Usually with TF Cloud, I had a backend configuration in my .tf files, and I would do a terraform import locally (with the remote state), then merge the PR. With Spacelift, how do you deal with PRs that would require an import as part of the release? There seems to be no documentation I can find around this process.
I use spacelift a lot. Each of our modules has a backend. It’s not often I have to import something, but when I do, I can simply run the import command locally in my module.
@Phil Hadviger is the import part of your normal terraform apply process? do you have to import every time? or is it a rare occurrence?
It’s a rare occurrence. Basically I invited an account to my AWS org, and now want a PR to reflect the aws_organizations_account resource, but I need the new account imported into the state so it doesn’t error thinking it needs to create something.
But I can’t even find any docs about how I would locally connect to Spacelift-stored state using the backend option in TF.
or if i have to somehow do it in the tasks in the spacelift console
Hi Phil! I’m Chris, a Solutions Engineer at Spacelift! Regarding terraform import, I believe you’re on the right track here - a task command from the Spacelift UI would be a solution.
We use an s3 backend so we can do an import locally. You are right Phil, I forgot you can do an import via a task run too. We’ve done this as well. The good thing about doing it as a task run vs locally is having an auditable log of the action in Spacelift :)
@ I’ll give the task a try. I’m not 100% sure if TF won’t complain, since technically the resource is not yet in the codebase, but only in the PR. While I definitely love the AWS role based integration so far, it does have me a bit confused in terms of how to translate TF Cloud workflows to Spacelift. Have to try in a few hours when I get back home.
@RB Yeah, if we store the state in S3 vs Spacelift, it would be easier to manage this locally. I’m giving the Spacelift state management a try, since I would love to not have to manage the S3 states separately, but I’m finding this setup so far a bit too abstracted.
For future reference if anyone stumbles upon this thread, we updated the Spacelift docs here and added a section on using terraform import as a Spacelift task:
https://docs.spacelift.io/vendors/terraform/state-management#importing-resources-into-your-terraform-state
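For anyone following along, the task body is just the regular import command; a hypothetical example (resource address and account ID made up) for the aws_organizations_account case discussed above would be:
terraform import aws_organizations_account.new_member 123456789012
Run as a Spacelift task, this executes against the stack’s managed state, so nothing needs to be pulled locally.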
@Phil Hadviger, regarding your statement, “since technically the resource is not yet in the codebase,” for the import to be successful, you have to have the code in place, and it has to match the state of the resource, particularly WRT unspecified parameters with default values that differ from the actual values.
That’s why verifying that your code matches what you want to import requires you to run plans locally until the code matches the actual config.
I’m not sure how to access Spacelift-managed state from local, but with remote states (we use S3) you can do those plans — and as was pointed out — do the import locally as well.
@Eric Berg Yeah, with S3 as a backend I don’t have the same issue and do things like you describe. Since the Spacelift remote backend is abstracted away, I have to check in the code first, see errors there, and then use tasks to import. It’s not impossible, just different. And in reality I always still have the option to just use S3. The Spacelift backend doesn’t really offer any additional features over S3 from what I can tell, outside of me not having to worry about setting up the S3 bucket and managing it.
I guess the arguable benefit of doing this entirely using commits and tasks is that there is a record of all this work in Spacelift, and potentially less need to have local permissions to the state at all. But it just doesn’t seem as easy to debug when you are doing things that require a lot of imports as part of a refactor.
2022-02-15
Terraform Locals – How to Use Them (Examples) Terraform local variables. Learn what they are and how to use Terraform locals. See examples.
2022-02-16
Terraform Count and For_Each Meta-Argument Overview See how and when to use Terraform count and for_each. Learn things to keep in mind when working with these Terraform meta-arguments.
2022-02-22
Ansible Tutorial for Beginners: Installation, Playbook, Examples What is Ansible? How does it work? See the Ansible tutorial for beginners with playbook and commands explained with examples.
Can someone point me to the docs on importing existing Spacelift stacks into Terraform?
We have a snippet about doing that with Spacelift Tasks on this page: https://docs.spacelift.io/vendors/terraform/state-management
On a side note, I am working on updating the Terraform provider documentation to provide an example of the import command for each resource type.
I need the syntax to import into the Spacelift Terraform provider. I don’t see that there.
I probably asked you and forgot already, @. Thanks…that’s what I need.
It would be terraform import spacelift_stack.my_stack my-stack
Hey, @, for importing spacelift_policy_attachment resources, I tried this, with proper values:
terraform import spacelift_policy_attachment.this policy-id/stack-name
but I got this error:
Error: expecting attachment ID as $policyId/$projectId
What’s projectId here?
And BTW, this is another example of where having easy access to the proper resource IDs in the UI would be helpful. In this example, I would probably be on the Policies tab of the stack in question, where the IDs of both the stack (which I assume is what is meant by projectId) and the policy could be shown.
Hey @ . Let me look into this.
projectId is either the ID of the stack or module. I will document this when I document the import feature.
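So, with a valid policy ID and stack ID (both placeholders here), the command should look something like:
terraform import spacelift_policy_attachment.this my-policy-id/my-stack-id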
Your example should work. Are the policy and stack IDs valid?
There is a widget to copy the ID at the top of the stack and policy pages.
Right. Did you count the clicks to get to a policy ID from the stacks screen, or vice versa? Having IDs available for all resources displayed on each page would be ideal.
@Eric Berg I talked to our frontend team. They added the feature request to their backlog. They are planning an overhaul of the UI so it will likely be part of it but I have no ETA to share at this time.
UI overhaul is something we’re definitely looking forward to
2022-02-23
Is there a way to see the config for each run? Specifically, I’m thinking about the environment, but full context for the run would be very nice.
if you use atmos 1.x in spacelift, then it will show all the yaml inputs directly in the Spacelift preparing/init stage iirc
2022-02-24
Is there a way to use Spacelift-managed state from outside Spacelift? I think not…
Hm, great question - not sure on this one to be honest, I’ll ask our engineering team internally.
The closest thing I could think of (which likely isn’t what you’re looking for) would be using the local-preview feature, which allows you to trigger plans in Spacelift from your local changes using the Spacelift CLI… Figured I’d mention this anyway just in case it’s of any use.
Does local-preview run local code or does it go off the branch?
we use private spacelift worker pools and manage state in our s3 bucket
the private spacelift worker pools have access cross-account to read/write to the s3 bucket
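For anyone curious what that looks like in code, here’s a minimal sketch of the kind of S3 backend block that setup implies (bucket, key, table, and role names are all hypothetical):
terraform {
  backend "s3" {
    bucket         = "example-org-tfstate"
    key            = "stacks/my-stack/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-org-tf-locks"
    # role assumed by the private workers for cross-account read/write access
    role_arn       = "arn:aws:iam::111111111111:role/tfstate-access"
  }
}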
spacectl’s local-preview runs against your local, not-yet-pushed changes. To do so, it creates an archive of the local folder content and uploads it to a worker.
Unless you explicitly export it using a Spacelift task, the state can’t be accessed outside authorized runs and tasks.
Good idea. That’s what I got from reading that doc.
So, the short answer to your question, Eric, is no. But using the local-preview spacectl command works around that limitation.
oh woops, sorry didn’t realize that the question was in reference to managed state ^
At least that gives me the option of migrating to managing state in S3, if and when.
My main concern is development. I don’t want to have to push to SL for every init, refresh, plan, and apply…or imports. Thus my question about using local source somehow.
That is exactly what spacectl’s local-preview does.
It is basically like running Terraform (or any other supported IaC tool) locally except that it runs in Spacelift with the added benefits of the tool.
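If it helps, a typical invocation (the stack ID is a placeholder, and exact flags may differ between spacectl versions) looks like:
# run from the root of the working copy whose uncommitted changes you want to preview
spacectl stack local-preview --id my-stack-id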
cool. excellent. thanks. I’ll check that out.
you can eject from spacelift-managed state by running terraform state pull as a task. But at that point you’ll need to destroy the stack, since it will have permanently logged sensitive data in the task output
Is there a difference between the TRIGGER button on the stack pages and the RETRY button on the run pages?
Say, for example, I changed the base_dir between runs. Is there a difference in what is triggered by those two buttons? Would RETRY possibly still use the base_dir from the run it’s retrying, instead of the newly-updated value? Or any other settings…
They are the same.
I added a test pre_init hook to a stack, clicked the Retry button, and the new hook was triggered.
Thanks for the confirmation!