#spacelift (2024-07)

2024-07-02

RickA avatar

I’ve heard a rumor that there’s an alleged way of manipulating online workers to allow more room for “burst” runs without paying billing overages.

I can’t figure out how that math might work. Anyone have any insight on how you manage to turn workers on/off to benefit Spacelift billing?

Our basic use case is that there’s light traffic except for release periods 1-2 times a week for 2-3 hours. So we need maximum throughput for releases, but can run minimal numbers, or none, the vast majority of the time.

RickA avatar

Our contract is for P95 so there’s not a lot of burst room in my math.

loren avatar

i use this module, and enable the lambda autoscaler, https://github.com/spacelift-io/terraform-aws-spacelift-workerpool-on-ec2/

spacelift-io/terraform-aws-spacelift-workerpool-on-ec2

Terraform module deploying a Spacelift worker pool on AWS EC2 using an autoscaling group

loren avatar

can scale to 0 to minimize ec2 cost

loren avatar

we set min to 1 to minimize wait time for most prs

RickA avatar

But you don’t do that because of any Spacelift billing benefits, correct? I’m researching an alleged method of managing workers to affect Spacelift billing.

loren avatar

max is like 70 workers, and we still stay under P95

loren avatar

so, i’d say there are billing benefits

loren avatar

5 workers * 24 hrs/day * 30 days/month = 3600 worker-hours. P95 is 3420 worker-hours
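
As a sanity check of loren’s arithmetic, a minimal Python sketch (the figures are loren’s from above):

    # 5 workers running flat out for a 30-day month, and the P95
    # allowance expressed in worker-hours.
    workers = 5
    hours_per_month = 24 * 30                 # 720 hours

    worker_hours = workers * hours_per_month  # 3600 worker-hours
    p95_allowance = 0.95 * worker_hours       # 3420 worker-hours
    print(worker_hours, p95_allowance)        # 3600 3420.0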

RickA avatar

Our contract isn’t for worker hours. It’s for workers. Meaning 5 workers in a 30-day month gives us 720 billable hours and a P95 of 684 - or a 36-hour buffer where we’re allowed to run over 5 workers.

Further, they capture metrics per minute. So any hour in which I run more than 5 workers for 3+ minutes counts as an overage hour - one of my 36 available.
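
A minimal sketch of that rule as RickA describes it (the per-minute sampling and the 3-minute threshold are his reading of the contract, not a documented Spacelift formula):

    # An hour counts as an overage hour if the worker count exceeds the
    # contracted limit for 3 or more of its 60 per-minute samples.
    CONTRACTED = 5

    def overage_hours(per_minute_counts, limit=CONTRACTED, threshold=3):
        hours = 0
        for start in range(0, len(per_minute_counts), 60):
            hour = per_minute_counts[start:start + 60]
            if sum(1 for c in hour if c > limit) >= threshold:
                hours += 1
        return hours

    # One hour at 6 workers for 5 minutes, then back down to 4:
    print(overage_hours([6] * 5 + [4] * 55))  # 1 -> one of the 36 buffer hours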

RickA avatar

Does your contract allow worker hours and we need to negotiate better?

loren avatar

no, it’s the same as yours

RickA avatar

Is my math wrong then or do your workloads just finish bursting inside of 36 hours regularly?

loren avatar

they aren’t clear on their formula, in my opinion, so i can’t say for certain

loren avatar

but the way they calculate it, i think we’re under “2 workers” in their P95 calculation, even though we burst from 1 to 30+ several times a week

loren avatar

we don’t have 5 workers running constantly, so i think we get to recoup a lot of that time, however they do their calculation
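
A toy illustration of why short bursts barely move a P95, assuming hourly samples, a baseline of 1 worker, and two 3-hour bursts to 30 workers per week (all numbers assumed for illustration):

    baseline, burst, burst_hours = 1, 30, 3

    counts = [baseline] * (24 * 30)            # a month of hourly samples
    for b in range(8):                         # ~2 bursts/week for 4 weeks
        start = b * 90                         # spread the bursts out
        counts[start:start + burst_hours] = [burst] * burst_hours

    counts.sort()
    p95 = counts[int(0.95 * len(counts)) - 1]  # nearest-rank P95
    print(p95)  # 1 -- only 24 of 720 samples are bursty, well under 5%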

RickA avatar

We moved to Kubernetes worker management, so our big issue is that we manually scale up/down, and if you forget, you eat up 36 hours in…well, 36 hours or less.

But also our workloads mean 100-200 stacks at a time for 2-3 hours, 1-2 times a week. On the high side that’s 24 hours just on releases when we scale up to +10 over our default 4.

Which…is why you hit 70. If we scaled up obscenely then our throughput would be faster and we could be done within, let’s say, 1 hour each time. Meaning 8 hours of burst instead of 24, because we’re not abusing their leeway enough.
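
The arithmetic behind those two figures, spelled out (release cadence taken from RickA’s description above):

    # High side: 3-hour releases, twice a week, ~4 weeks a month.
    print(3 * 2 * 4)  # 24 hours of burst

    # With enough extra workers to finish each release in ~1 hour:
    print(1 * 2 * 4)  # 8 hours of burst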

RickA avatar

Even if your P95 is at 2 you’re still paying for your 5 workers though. So there’s no benefit to running less than 5 from a Spacelift perspective (there is from the AWS perspective…).

loren avatar

there is, because it keeps our avg down to the point where we can burst to several dozen

loren avatar

if we ran 5 all the time, we’d have no burst

loren avatar

and 5 is the minimum they contract for. can’t reduce it and pay any less anyway

loren avatar

i haven’t checked to see if the lambda autoscaler works with kubernetes. i suppose it could

RickA avatar

If we write the numbers 1-20 down in a row, say into a spreadsheet, and you want to calculate the P95 value, then you’re going to pull the 19th value in the list. Sorted lowest to highest, that 19th value is 19.

If you replace 1-18 with 0, sort the list the same way, and pull the P95 value, you’re still going to get 19.

Am I calculating P95 incorrectly?
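
RickA’s spreadsheet example in Python, using the nearest-rank method (sort ascending, take the ceil(0.95 * n)-th value):

    import math

    def p95(values):
        ordered = sorted(values)
        rank = math.ceil(0.95 * len(ordered))  # 19 for n = 20
        return ordered[rank - 1]

    print(p95(list(range(1, 21))))   # 19
    print(p95([0] * 18 + [19, 20]))  # 19 -- the zeros don't change it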

loren avatar

according to your definition of what you are taking the P95 of, or theirs?

loren avatar

i don’t think they define theirs very well, so i’d be rather surprised if any of us could perform a calculation that came out correct

RickA avatar

I’ve taken the usage data they allow you to download and have calculated it in the past successfully. Will see if I can show an example with a little Excel time.
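
A hypothetical sketch of that Excel exercise in Python; the file name and “workers” column are assumptions, not Spacelift’s actual export schema:

    import math
    import pandas as pd

    usage = pd.read_csv("spacelift-usage.csv")  # the downloaded usage data
    counts = sorted(usage["workers"])           # per-sample worker counts
    rank = math.ceil(0.95 * len(counts))        # nearest-rank P95
    print("P95 worker count:", counts[rank - 1])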

2024-07-09

Dan Miller (Cloud Posse) avatar

following up on the discussion on triggering Spacelift admin stacks with GitHub Actions

Dan Miller (Cloud Posse) avatar

Here’s our documentation for triggering spacelift runs from GHA using GitHub Comments: https://docs.cloudposse.com/reference-architecture/fundamentals/spacelift/#triggering-spacelift-runs

Dan Miller (Cloud Posse) avatar

Also, this discussion above is similar and may help to read through: https://sweetops.slack.com/archives/C0313BVSVQD/p1718707848601429

The refarch config for the Spacelift admin stacks in each tenant includes the following config (e.g. for plat):

context_filters:
  tenants: ["plat"]
  administrative: false # We don't want this stack to also find itself

We have a few cases where we might want some child stacks for a tenant’s admin stack to be administrative:
• to create Spacelift terraform resources (e.g. policies or integrations)
• (not yet tried) to create a new admin stack for a child OU of a parent OU (keyed off ‘tenant’)
Is there a context filter pattern for a tenant’s admin stack that allows for administrative child stacks, whilst still not allowing the stack to find itself?

Dan Miller (Cloud Posse) avatar

And finally, regarding how to include admin-stacks in the atmos describe affected GHA. The action you’re using already includes admin-stacks: https://github.com/cloudposse/github-action-atmos-affected-trigger-spacelift/blob/main/action.yml#L90

However, you will need to add the Spacelift GIT_PUSH policy for triggering on PR comments to the stacks in Spacelift, and remove the GIT_PUSH policy that triggers on every commit. The relevant action input:

        atmos-include-spacelift-admin-stacks: "true"

Dan Miller (Cloud Posse) avatar

cc @Andriy Knysh (Cloud Posse) @michaelyork06

Gabriela Campana (Cloud Posse) avatar

@michaelyork06 here

Michael York avatar

@Gabriela Campana (Cloud Posse) Here

Michael York avatar

@Elena Strabykina (SavvyMoney) ^

Gabriela Campana (Cloud Posse) avatar

@Elena Strabykina (SavvyMoney) @michaelyork06 please let me know if you have any questions or need further assistance

Elena Strabykina (SavvyMoney) avatar

Hi @Gabriela Campana (Cloud Posse). We would like to follow up after our call with the Cloud Posse team last week.
• Enable/use the existing atmos feature to detect affected administrative stacks and run terraform plan only for those. There is a flaw in this atmos feature: it marks an administrative stack as affected if there is a change in its child stack, and it also sounded like there is an issue with the detection of deleted child stacks - to tell the truth, the actual logic is not clear to me. To mitigate the flaws, we have two options:
◦ Cloud Posse seemed to be interested in modifying atmos to detect added/removed child stacks and only then mark their parent as affected. If they implement this, the issue with admin stacks should be solved - we would like to follow up and confirm if/when you plan to do this.

Gabriela Campana (Cloud Posse) avatar

@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse)

Gabriela Campana (Cloud Posse) avatar

Hi @Elena Strabykina (SavvyMoney) We are discussing this internally and will get back to you asap

Erik Osterman (Cloud Posse) avatar

I’ll follow up via email, since this is a public forum.

