#office-hours (2024-05)

Public “Office Hours” are held every Wednesday at 11:30 PST via Zoom and are open to everyone. Ask questions related to DevOps & Cloud and get answers!

https://cloudposse.com/office-hours

https://cpco.io/slack-office-hours

Meeting password: sweetops

2024-05-01

Erik Osterman (Cloud Posse)
06:00:33 PM

@here office hours is starting in 30 minutes! Remember to post your questions here.

2024-05-08

Erik Osterman (Cloud Posse)
06:00:21 PM

@here office hours is starting in 30 minutes! Remember to post your questions here.

2024-05-12

Erik Osterman (Cloud Posse)

This lays to rest my questions about Mercedes autopilot and Tesla FSD, as we discussed a couple of weeks ago.

https://youtu.be/h3WiY_4kgkE?si=LtM9DZzvM-ny6FUM

2024-05-13

Erik Osterman (Cloud Posse)
Noovolari has officially come to an end.

We have decided to close down Noovolari. This decision marks the end of an amazing journey.

jose.amengual

ohhhh no, what is going to happen to Leapp?

Erik Osterman (Cloud Posse)

The Leapp open-source project will continue under the stewardship of beSharp, our parent company. We are still assessing the resources and effort that will be allocated to this project, but we are hopeful about its future and will keep you informed of any developments.

managedkaos
What caused the UniSuper Google Cloud outage

Duplication across geographies no defense against the ‘one-of-a-kind’ accidental deletion

managedkaos

Talk about a win for hybrid cloud!
“UniSuper had duplication in two geographies as a protection against outages and loss. However, when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies.”

Fortunately, UniSuper had backups at another cloud provider.

2024-05-15

Erik Osterman (Cloud Posse)
06:01:15 PM

@here office hours is starting in 30 minutes! Remember to post your questions here.

Enrique Lopez

Hey @Erik Osterman (Cloud Posse) what is the zoom link?

Erik Osterman (Cloud Posse)

Sorry, I missed your message.

Erik Osterman (Cloud Posse)

Have you signed up at cloudposse.com/office-hours?

Enrique Lopez

I just did that when I sent the first message

venkata.mutyala

In one of our environments I’ve been noticing really slow ECR pull times, particularly when we replace our node pools. For example, an image that normally takes 15 seconds to pull can take as long as 8-10 minutes. During a node pool replacement we have ~400 pods with a single container that get their image from ECR. In total it’s ~200 distinct images in ~200 private ECR repositories, and because of how we have pod anti-affinity/nodes set up, the 200 images get pulled twice. That being said, I don’t believe we ever exceed 400 pulls per second, and the AWS quota docs make it seem like we could do a multiple of that (e.g., 2k/second). I’m curious whether anyone has run into similar issues and whether AWS just needs to scale something on their backend.

I have plans to open an AWS support ticket, but I figured I would throw the question out here in case someone has a quick fix for this.
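
(As a quick way to sanity-check those numbers against the account’s actual limits, here is a minimal boto3 sketch that dumps the ECR rate quotas from Service Quotas. It assumes default credentials/region are configured and that the relevant limits are the per-action rate quotas; the exact quota names vary by account and region.)

```python
# Sketch: list ECR rate-related Service Quotas to compare against the observed pull volume.
# Assumes default AWS credentials and region are configured.
import boto3

sq = boto3.client("service-quotas")

paginator = sq.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="ecr"):
    for quota in page["Quotas"]:
        # Print anything that looks like a rate limit (e.g. image/layer pull request rates).
        if "rate" in quota["QuotaName"].lower():
            print(f'{quota["QuotaName"]}: {quota["Value"]}')
```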

Enrique Lopez

How secure or insecure is it to give GitHub Actions access to an AWS VPC by implementing GitHub self-hosted runners?

venkata.mutyala

I haven’t done self-hosted runners, but I assume they talk back to GitHub’s servers and await instructions (e.g., a reverse tunnel/proxy). Regardless, you will have to trust Microsoft/GitHub, but aside from that, a couple of easy things you could do to help lock things down are:

• Create separate subnet(s) for these self-hosted runners.

• Create separate security group(s) for these runners. Depending on how your network/security groups are laid out, you may want to just add the GitHub runner SG to the resources it needs to access, or explicitly allow the particular subnet IP range (see the sketch below).

Hope this helps in the short term. I’m curious to see what others say in the next office hours.
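
(To make the security-group bullet concrete, a minimal boto3 sketch of that pattern; the VPC ID, security-group IDs, and port below are hypothetical placeholders:)

```python
# Sketch: dedicated SG for GitHub Actions self-hosted runners, granted access to one
# internal resource by SG reference instead of opening a CIDR range to the whole VPC.
import boto3

ec2 = boto3.client("ec2")

# Security group the runner instances will be launched with.
runner_sg = ec2.create_security_group(
    GroupName="gha-self-hosted-runners",       # hypothetical name
    Description="GitHub Actions self-hosted runners",
    VpcId="vpc-0123456789abcdef0",             # hypothetical VPC ID
)["GroupId"]

# On the resource the runners must reach (e.g. an internal service on 443),
# allow ingress only from the runner SG.
ec2.authorize_security_group_ingress(
    GroupId="sg-0fedcba9876543210",            # hypothetical target resource SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": runner_sg}],
    }],
)
```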
