#aws (2023-03)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2023-03-01

slackbot avatar
slackbot
09:01:07 PM

Nave has removed themselves from this channel.

2023-03-03

el avatar

Anyone have advice on automatically updating the AMI of a launch template for an ASG? Right now we just periodically apply Terraform. Current thought is to ignore the AMI field in Terraform and write a Lambda to update the launch template, and I’m wondering if there’s a better way.

Stephen Tan avatar
Stephen Tan

this is exactly what we do in my company. We have a Lambda which looks up the latest AMI every week and updates a set of launch templates. Not only that, we also get the Lambda to optionally nuke all the instances in the ASG in order to rebuild them with the latest AMI and patches. I work for a bank, so keeping up to date with patches is mandatory. The only machines we don’t rebuild automatically are certain production machines which require us to schedule a formal “downtime” period for rebuilding. It means we don’t have to do any patching for any instances we run. All our instances are in ASGs. For machines which need deployment, we have set up CodeDeploy to trigger a deploy (using Ansible) to each instance.

Stephen Tan avatar
Stephen Tan

using a Lambda is the best way to go IMHO

el avatar

gotcha, thanks!

Stephen Tan avatar
Stephen Tan

it means that in order to “patch” an instance, we simply terminate the instance. Automation then rebuilds the host
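
For reference, the core of such a Lambda is small. A minimal sketch, assuming the latest AMI is published as an SSM public parameter (Amazon Linux 2023 here) and using a hypothetical launch template name:

```python
def latest_ami_param():
    # Public SSM parameter for the latest AL2023 AMI (an assumption; adjust
    # for whatever AMI family you actually track).
    return "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"

def new_version_data(ami_id):
    # Pure helper: the launch template data that changes between versions.
    return {"ImageId": ami_id}

def handler(event, context):
    import boto3  # imported lazily so the module is importable without AWS access
    ssm = boto3.client("ssm")
    ec2 = boto3.client("ec2")
    ami_id = ssm.get_parameter(Name=latest_ami_param())["Parameter"]["Value"]
    # Create a new version on top of the current latest, changing only the AMI.
    version = ec2.create_launch_template_version(
        LaunchTemplateName="app-launch-template",  # hypothetical name
        SourceVersion="$Latest",
        LaunchTemplateData=new_version_data(ami_id),
    )["LaunchTemplateVersion"]["VersionNumber"]
    # Point the template's default version at the one just created.
    ec2.modify_launch_template(
        LaunchTemplateName="app-launch-template",
        DefaultVersion=str(version),
    )
```

Schedule it with an EventBridge rule (e.g. weekly) and the ASG picks up the new version on the next scale-out or rebuild.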

el avatar

nice. any reason you don’t use an instance refresh instead?

Stephen Tan avatar
Stephen Tan

I guess you could do instance refresh tbh - never really thought about instance refresh - it sounds like a slightly more organised way of terminating which is fine

el avatar

gotcha. I think instance refresh spins up new instances first so you’ve got some redundancy while the old instances are taken out of service

Stephen Tan avatar
Stephen Tan

ah - fair enough - that sounds much better tbh. In fact having thought about it, I think we might do an instance refresh.
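
For reference, kicking one off is a single API call; a sketch with a hypothetical ASG name:

```python
def refresh_request(asg_name, min_healthy=90):
    # Pure helper: StartInstanceRefresh parameters. New instances are launched
    # and become healthy before old ones are terminated, keeping at least
    # min_healthy percent of capacity in service throughout.
    return {
        "AutoScalingGroupName": asg_name,
        "Preferences": {"MinHealthyPercentage": min_healthy, "InstanceWarmup": 120},
    }

def start_refresh(asg_name="app-asg"):  # hypothetical name
    import boto3  # lazy import: module stays importable without AWS access
    return boto3.client("autoscaling").start_instance_refresh(**refresh_request(asg_name))
```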

Stephen Tan avatar
Stephen Tan

I’d love to share the Go code we use but I fear that I would get into trouble if I did as it’s work’s IP

el avatar

understandable. should be quick to write some python to do it

el avatar

would be amazing if AWS just offered this as an ASG feature

el avatar

tick a box to keep the AMI up to date and call it a day

Stephen Tan avatar
Stephen Tan

yeah - it’s not complex and yes, AWS would do well to make this “standard”

Warren Parad avatar
Warren Parad

Run your TF on a schedule and as a CLI command during the run pull the latest version of the AMI and pass it as a TF variable
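
A sketch of that approach, with the lookup done via boto3 rather than the raw CLI (the name filter and the Terraform variable name are assumptions):

```python
def tf_apply_cmd(ami_id):
    # Pure helper: the terraform invocation with the AMI injected as a variable.
    return ["terraform", "apply", "-auto-approve", f"-var=ami_id={ami_id}"]

def latest_ami(name_pattern="al2023-ami-*-x86_64"):
    import boto3  # lazy import so the module loads without AWS access
    images = boto3.client("ec2").describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": [name_pattern]}],
    )["Images"]
    # CreationDate is ISO 8601, so max() by that string gives the newest image.
    return max(images, key=lambda i: i["CreationDate"])["ImageId"]

if __name__ == "__main__":
    import subprocess
    subprocess.run(tf_apply_cmd(latest_ami()), check=True)
```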

2023-03-04

2023-03-05

Anand Singh avatar
Anand Singh

Hi there, I need some help with https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application. Where do I upload the application code? Or does this Terraform code only set up the Beanstalk environment?

cloudposse/terraform-aws-elastic-beanstalk-application

Terraform Module to define an ElasticBeanstalk Application

Alex Jurkiewicz avatar
Alex Jurkiewicz

latter

1

2023-03-06

James avatar

Hi All,

I’m working on something which would collect quarterly and annual report data from booking entity authorities (think Uber and taxi companies). Is there any downside to using *Aurora* over classic *RDS*, given that traffic will be low throughout the year and increase for about four months a year when reports are due?

Any other suggestions are welcome as well. Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

No major relevant differences. Aurora has a serverless offering, which can help with saving money for elastic workloads. But if your workload’s elasticity is measured in days+, you can do just as well with manual RDS vertical scaling

1
managedkaos avatar
managedkaos

For operations at high scale, the main benefit I’ve seen from Aurora is the autofailover and the automated snapshotting, backups, and replication. If you are good about managing your data, RDS can work well for less demanding loads. And as mentioned, you can scale up when needed if you already know when to expect higher loads.
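
Scheduled vertical scaling is likewise a single API call; a sketch with hypothetical identifiers:

```python
def scale_params(identifier, instance_class):
    # Pure helper: ModifyDBInstance parameters for a vertical resize.
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": instance_class,
        "ApplyImmediately": True,  # otherwise the change waits for the maintenance window
    }

def scale_up(identifier="reports-db", instance_class="db.r6g.xlarge"):
    import boto3  # lazy import: module stays importable without AWS access
    boto3.client("rds").modify_db_instance(**scale_params(identifier, instance_class))
```

Run it (or the reverse, back to a smaller class) from a scheduled Lambda before and after the reporting window.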

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya makes sense to skip Aurora for these reasons. Plus if it will just be used for reporting, and that output can be cached in another table, you might not care if a query even takes a long time to run.

1

2023-03-07

managedkaos avatar
managedkaos
How to use policies to restrict where EC2 instance credentials can be used from | Amazon Web Services

March 7, 2023: We’ve added language clarifying the requirement around using VPC Endpoints, and we’ve corrected a typo in the S3 bucket policy example. Today AWS launched two new global condition context keys that make it simpler for you to write policies in which Amazon Elastic Compute Cloud (Amazon EC2) instance credentials work only when […]

1
2
1
Nishant Thorat avatar
Nishant Thorat

Essentially you need to use a VPC endpoint policy to get this effect. I wrote a blog explaining PrivateLink and VPC Endpoints: https://www.cloudyali.io/blogs/demystifying-aws-privatelink-and-vpc-endpoint-services-everything-you-need-to-know

Demystifying AWS PrivateLink and VPC Endpoint Services

VPC Interface Endpoints (powered by AWS PrivateLink) are a powerful technology to improve security and performance in an AWS environment.
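
For illustration, the new condition keys from the AWS post can be used in a deny statement roughly like this (bucket and VPC IDs are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ec2InstanceSourceVPC": "vpc-0123456789abcdef0"
        }
      }
    }
  ]
}
```

Note this bare form would also deny principals for which the key is absent (non-EC2 callers); the linked AWS post covers the extra guards needed to scope it to instance credentials only.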

2023-03-10

2023-03-15

OliverS avatar
OliverS

Does anyone have a recommendation for a tool to visualize – or at least tabulate – which AWS resources use a particular security group?

Given how many resource types can point to an SG, I’m surprised there isn’t an established way to do that (other than pretending to delete the SG from the console, which is dangerous and not programmatic). I found a couple of open source projects on GitHub, like a Python project sgdeps and a bash one sg-tool, but with < 40 stars each I figure they are not the go-to solution for this problem.
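
One programmatic option: nearly every resource type that attaches a security group does so through an ENI, so listing network interfaces filtered by group id gives a reasonable usage map. A sketch:

```python
def eni_filter(sg_id):
    # Pure helper: DescribeNetworkInterfaces filter for one security group.
    return [{"Name": "group-id", "Values": [sg_id]}]

def who_uses(sg_id):
    import boto3  # lazy import: module stays importable without AWS access
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_network_interfaces").paginate(
        Filters=eni_filter(sg_id)
    )
    # The ENI description usually names the owning service (ELB, RDS, Lambda, ...),
    # and Attachment.InstanceId is set when it is an EC2 instance.
    return [
        (
            eni["NetworkInterfaceId"],
            eni.get("Description", ""),
            eni.get("Attachment", {}).get("InstanceId"),
        )
        for page in pages
        for eni in page["NetworkInterfaces"]
    ]
```

This misses references that are not ENI-backed (e.g. other SGs citing this one in their rules), which would need a separate describe_security_group_rules pass.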

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I bet it could be accomplished using the new SaaS like steam pipe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Have you looked at https://github.com/wallix/awless? Not sure if it can do it, but it is designed to make it easier to list resources on AWS

wallix/awless

A Mighty CLI for AWS

1
OliverS avatar
OliverS

what does this mean: “SaaS like steam pipe”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Steampipe | select * from cloud;

Steampipe is an open source tool to instantly query your cloud services (e.g. AWS, Azure, GCP and more) with SQL. No DB required.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Infra as SQL | IaSQL

Cloud infrastructure as data in PostgreSQL

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, I didn’t realize steampipe was open source)

OliverS avatar
OliverS

Yeah I remembered seeing something like that but didn’t remember the name

OliverS avatar
OliverS

there was something similar for terraform too, to query the state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Turns out, even steampipe can do that https://hub.steampipe.io/plugins/turbot/terraform

Terraform plugin | Steampipe Hub

Query Terraform files with SQL! Open source CLI. No DB required.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s really freaking rad. @Robert Horrox I see your point more and more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All these platforms are supported: https://hub.steampipe.io/plugins

Catalog of Plugins | Steampipe Hub

Here’s the list of data sources and APIs that Steampipe supports: Cloud Services, APIs, files, databases, etc.

OliverS avatar
OliverS

yeah, pretty cool if you just need to search through a couple of resource types

but I think it uses one table per resource type, so you have to know all the tables to search, the name of the SG column in each table, and query each table individually.

Granted, once you go through this exercise once, you can write a SQL script.

However, to create a map of all SGs to all resources that use them, I suppose you would need a bunch of joins, again in a script.

Is that easier than writing a loop in bash? I’m not sure

Robert Horrox avatar
Robert Horrox

Two things you can leverage steampipe for:

  1. You can pull the raw data from multiple AWS accounts at once with one SQL call and output to a json file for easy parsing with jq or other tools.
  2. It can be run as a service exposing a Postgres interface, which you can query from any programming language with a Postgres driver. My experience with the AWS CLI has been less than great
1
Jim Park avatar
Jim Park

TIL you can configure EC2 to use the resource name as the hostname for an EC2 instance, so that when you log into an instance or query kubectl nodes, you can skip the IP-address-to-resource-ID translation step.

When you launch an EC2 instance with a Hostname type of Resource name, the guest OS hostname is configured to use the EC2 instance ID. • Format for an instance in us-east-1: _ec2-instance-id_.ec2.internal • Example: _i-0123456789abcdef_.ec2.internal • Format for an instance in any other AWS Region: _ec2-instance-id.region_.compute.internal • Example: _i-0123456789abcdef.us-west-2_.compute.internal

2023-03-16

Nishant Thorat avatar
Nishant Thorat

Hello everyone, Amazon Linux 2023 was just released on March 15th! This latest version comes with three significant features:

• Default IMDSv2 support with max two hops, which greatly enhances security posture. To learn more about IMDS, check out my blog: https://lnkd.in/gnTA_brw

• AL2023 utilizes gp3 volumes by default, reducing costs with improved performance.

• Versioned repositories offer more control over packages, allowing for better standardization of workloads.

Check out the official AWS announcement to learn more: https://lnkd.in/g2kPryj8

Understand instance metadata service (IMDS) for secure EC2 instances.

Understand how IMDSv2 improves security. Identify instances with IMDSv1 or IMDSv2. Know how to enforce IMDSv2.

Amazon Linux 2023, a Cloud-Optimized Linux Distribution with Long-Term Support | Amazon Web Services

I am excited to announce the general availability of Amazon Linux 2023 (AL2023). AWS has provided you with a cloud-optimized Linux distribution since 2010. This is the third generation of our Amazon Linux distributions. Every generation of Amazon Linux distribution is secured, optimized for the cloud, and receives long-term AWS support. We built Amazon Linux […]
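
On the IMDSv2 point: the difference from IMDSv1 is the session-token step. A minimal sketch of the two-step flow (it only works from inside an EC2 instance, since 169.254.169.254 is the metadata endpoint):

```python
import urllib.request

TOKEN_URL = "http://169.254.169.254/latest/api/token"

def token_request(ttl=21600):
    # Pure helper: the PUT request that fetches a session token (step 1).
    req = urllib.request.Request(TOKEN_URL, method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl))
    return req

def instance_id():
    # Step 1: obtain a token; step 2: present it on every metadata read.
    with urllib.request.urlopen(token_request(), timeout=2) as r:
        token = r.read().decode()
    req = urllib.request.Request("http://169.254.169.254/latest/meta-data/instance-id")
    req.add_header("X-aws-ec2-metadata-token", token)
    with urllib.request.urlopen(req, timeout=2) as r:
        return r.read().decode()
```

With IMDSv1 disabled (as AL2023 defaults to), a plain GET without the token header is rejected, which is what blocks most SSRF-style credential theft.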

1

2023-03-17

Balazs Varga avatar
Balazs Varga

question: I have a base role without iam:ListRoles. Is there any way to get the assumable roles attached to this role?

2023-03-19

mario.stopfer avatar
mario.stopfer

Hello everyone! We are revealing the pricing for CodeSmash, our new No Code platform! If you want more info, feel free to check it out at https://codesmash.studio

CodeSmash - The ultimate no-code platform

CodeSmash is the scalable no-code platform that offers you full transparency to your code and the freedom to continue building.

Diego Maia avatar
Diego Maia

The idea is very interesting. I currently work with a low-code platform here in Brazil that is constantly expanding with several clients. I am not familiar with Codesmash, but I like the idea. There is a lot of market potential and infinite possibilities with this type of product.

mario.stopfer avatar
mario.stopfer

Yes, I agree. You will be able to build cloud infrastructure using No Code tools but the code will be saved in the background in your Git repos. Each Git repo will have a CI/CD pipeline attached to it so you have automated deploys with Terraform of your architecture as well.

mario.stopfer avatar
mario.stopfer

After that, I will expand into frontend so that you can build fullstack apps as well.

2023-03-20

Shreyank Sharma avatar
Shreyank Sharma

Hello, we are running an Elasticsearch stack (installed using Helm) on a 3-node Kubernetes cluster in AWS, set up with kops. The Elasticsearch cluster has 1 client node, 2 data nodes, and 2 master nodes. Applications (apps in Kubernetes, plus Lambdas) send logs to the cluster via Logstash. We are now planning to move away from Kubernetes and migrate the applications to ECS (AWS Elastic Container Service). Right now we have around 300 indices with a size of 20GB, with 5 shards and 1 replica each. I have analyzed how to move the data if we migrate the Elasticsearch cluster to Docker containers running on EC2, and tested that it works fine. There are many reasons we are moving away from Kubernetes, but one of them is cost, and 75% of the k8s cluster was used by Elasticsearch. We don’t want to go with Elastic Cloud or OpenSearch, as they are costly. *Now my question is: what is the best option for Elasticsearch once we move away from Kubernetes?*

  1. Docker running in EC2
  2. Elastic Container Service (not sure how this will work with EFS storage)
  3. On-Prem (not sure if it is really an option, as all our applications are running in the cloud) Please let me know if there is any better option. Any help is very much appreciated. Many thanks.
1
Diego Maia avatar
Diego Maia

I advise against using EFS storage for your ES cluster due to performance, scalability, and cost reasons. Some alternatives you may consider are using local storage volumes or EBS volumes when considering Docker on EC2 or ECS for your ES cluster.

1
this1
Diego Maia avatar
Diego Maia

Today, I manage an ES cluster with over 10TB of data, but it is running inside Kubernetes. I have had a lot of headaches tuning its performance, but now I don’t have any more issues. Your decision should not be based solely on cost, but rather on how much maintenance you will have, because depending on the criticality of your cluster, it directly affects the business.

Shreyank Sharma avatar
Shreyank Sharma

thanks for the reply @Diego Maia

in my setup: 3 data nodes (4GB memory each), 2 master nodes (2GB each), 1 client (1GB)

we still get OOM kills

Shreyank Sharma avatar
Shreyank Sharma

do you think it’s a good idea to reindex our indices from 5 shards / 1 replica to 2 shards / 1 replica when we migrate?

Shreyank Sharma avatar
Shreyank Sharma

Can I use ECS with EFS as storage?

Diego Maia avatar
Diego Maia

Reindexing indices can be a good idea to improve performance and reduce resource overhead. In my case I cut the index replicas to 0, because the sync between data nodes was not good for us; but remember, that’s my use case.
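
A sketch of that reindex using only the stock REST API over urllib (URL and index names hypothetical; a shard count can only be set at index creation, hence the create-then-reindex order):

```python
import json
import urllib.request

def new_index_settings(shards=2, replicas=1):
    # Pure helper: settings body for the destination index.
    return {"settings": {"number_of_shards": shards, "number_of_replicas": replicas}}

def reindex_body(src, dest):
    # Pure helper: the _reindex request body.
    return {"source": {"index": src}, "dest": {"index": dest}}

def _post(url, body, method):
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"}, method=method)
    return urllib.request.urlopen(req)

def reindex(es_url="http://localhost:9200", src="logs-old", dest="logs-new"):
    # 1. Create the destination index with the new shard layout.
    _post(f"{es_url}/{dest}", new_index_settings(), "PUT")
    # 2. Copy the documents across.
    _post(f"{es_url}/_reindex", reindex_body(src, dest), "POST")
```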

1
Diego Maia avatar
Diego Maia

Yes, it’s possible to use ECS with EFS as storage. It’s important to consider the performance and scalability limitations of EFS, though; I would not recommend it if your problem is performance.

Diego Maia avatar
Diego Maia

for OOM… I recommend setting ES_JAVA_OPTS to include -Xms and -Xmx, which set the initial and maximum heap size of the JVM, and -XX:+UseCGroupMemoryLimitForHeap, which allows the JVM to use OS cgroup memory limits to automatically adjust the heap size.

Diego Maia avatar
Diego Maia

example

ES_JAVA_OPTS="-Xms4g -Xmx4g -XX:+UseCGroupMemoryLimitForHeap"

Shreyank Sharma avatar
Shreyank Sharma

that is set to 2GB for data, 700MB for master, 600MB for clients. thank you

Shreyank Sharma avatar
Shreyank Sharma

We also thought about changing replicas to 0, but planned to keep 1 replica in the PROD env

Diego Maia avatar
Diego Maia

Replicas are costly for data nodes only, but evaluate this depending on the case. You can also tune the number of shards that recover simultaneously, which helps when data nodes restart and you want a fast recovery.

Stef avatar

I would strongly suggest looking at Amazon OpenSearch Service instead. Yes, it’s more expensive to run, but the administrative overhead is drastically reduced.

3

2023-03-21

awl avatar

For anyone in the US who took an AWS exam in person with Pearson: what did you use for your 2nd form of ID? I have a license for #1, but my passport is expired. It seems silly, but can I just show a credit card with my signature on the back? It meets the requirements.

BillM avatar

Phone and confirm with them. I’d assume an expired passport should still be good; you can’t travel on it, but it’s still you.

Bhavik Patel avatar
Bhavik Patel

I have a CloudFront distribution set up to serve a static website hosted on an S3 bucket. The website is built with React and React Router, which expects the base path to be in the root directory.

I also have a custom domain configured to point to the CloudFront distribution. However, when I navigate to /apply on the custom domain, I get a 404 error. After investigating, I found that CloudFront is routing the request to the S3 origin bucket, but it’s not serving the index.html file in the root directory as expected.

I tried to fix this by updating the Lambda@Edge function to properly direct the /apply path to the index.html file in the root directory of the S3 origin bucket. Here’s the updated function:

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const url = request.uri;
    const onlyApply = /^\/apply$/;
    if (onlyApply.test(url)) {
        const newOrigin = {
            custom: {
                domainName: 's3-website-us-east-1.amazonaws.com',
                port: 80,
                protocol: 'http',
                path: '',
                sslProtocols: ['TLSv1', 'TLSv1.1', 'TLSv1.2']
            }
        };
        request.origin = newOrigin;
        request.uri = '/index.html'; // append index.html to the URI
    }
    const response = event.Records[0].cf.response || {};
    if (onlyApply.test(url)) {
        response.status = '301';
        response.statusDescription = 'Moved Permanently';
        response.headers = response.headers || {};
        response.headers['location'] = [{ key: 'Location', value: 'https://' + request.headers.host[0].value + '/index.html' }]; // append index.html to the Location header
    }
    callback(null, response);
};

Although this works, now my users are getting redirected to the static website URL instead of the custom domain.

Fizz avatar
Prevent Cloudfront from forwarding part of path to origin server

Background: I have an S3 Bucket (Origin 1) that serves a static website under the domain example.com using Cloudfront.

Goal:

Additionally I want example.com/subfolder to serve content from seco…

Bhavik Patel avatar
Bhavik Patel

@Fizz I tried this recommendation but it doesn’t quite work in my case, or I might not be comprehending something correctly.

My behavior patterns are:

  1. /apply* -> where the origin is an S3 bucket that does not expect /apply
  2. Default -> where the origin is a website like www.google.com

My goal here is:

  1. <subdomain>/apply goes to S3 without /apply
  2. Any other request goes to the default path (www.google.com)

With the changes from the serverless forum, what would end up happening is that another request would be made without the /apply, and it would go to the default path and return a different origin
Fizz avatar

That should work. What behaviour did you observe when you only stripped the path, did not do a redirect, and did not change the host? To be clear: you should not be doing a 301 redirect, and you should not be changing the host to the S3 bucket.
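
The rewrite Fizz describes is just a URI transform on the request, with no redirect and no origin or host change. The logic, sketched here in Python for clarity (a real CloudFront Function or Lambda@Edge handler would express the same check in JavaScript):

```python
def rewrite_uri(uri):
    # SPA routing: any path whose last segment has no file extension is a
    # client-side route and should be served index.html; real assets
    # (/assets/app.js, /favicon.ico, ...) pass through untouched.
    last_segment = uri.rsplit("/", 1)[-1]
    if "." not in last_segment:
        return "/index.html"
    return uri
```

So /apply (and /) become /index.html on the way to the origin, the browser's address bar is never changed, and React Router sees the original path.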

Bhavik Patel avatar
Bhavik Patel

Definitely clear that I should not be doing a redirect; it’s just been my last-ditch effort after trying a million things to get this working.

The article that you linked lined up with my first iteration of trial and error. I recreated the setup, but it still fails to serve index.html correctly from the root directory of the S3 bucket

Bhavik Patel avatar
Bhavik Patel

Here is the CF function i’m using

Bhavik Patel avatar
Bhavik Patel

How path behaviors are setup

Bhavik Patel avatar
Bhavik Patel

Results when I try to navigate to my distribution’s URL + Path

Fizz avatar

What status code are you getting back? 404? 200? 401?

Fizz avatar

And it looks like you got a response as you got the assets folder back in the response

Bhavik Patel avatar
Bhavik Patel

It’s a 404 - failed to load resource. Makes sense, since it’s trying to load the apply file instead of index.html

Fizz avatar

Hmm. Assuming you have checked that the lambda is firing, entering the if block, and setting the uri, then after that I would turn on access logging for the bucket if access is via http, or turn on CloudTrail for S3 object access, to see what API calls are being made against the bucket

2023-03-22

Nishant Thorat avatar
Nishant Thorat

Hey there everyone!

Hope you’re all doing well. I’m looking for some insights on maintaining resource tag hygiene in AWS environments. I’d love to hear your thoughts on how you standardize and enforce resource tags across your teams, projects, and deployments.

Additionally, I’m currently working on a tool to help with tag hygiene, and I would be thrilled to receive any feedback or comments from you all. If anyone is interested in working more closely with me on this, feel free to DM me or drop a note in the comments.

Thank you all in advance for your help and support!

Alex Jurkiewicz avatar
Alex Jurkiewicz

aws tag editor?

Stef avatar

Start with an SCP policy to enforce tagging
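
For illustration, a minimal SCP of that shape (the tag key and the action scope are placeholders; real policies usually cover several creation actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutCostCenterTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": {
          "aws:RequestTag/CostCenter": "true"
        }
      }
    }
  ]
}
```

The Null condition denies the launch whenever the CostCenter tag is absent from the request, so untagged instances never get created in the first place.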

Bhavik Patel avatar
Bhavik Patel

I’ve managed ours through Terraform. There are some pretty verbose modules that have been created to assist with tagging. Haven’t had the need to use them though

Nishant Thorat avatar
Nishant Thorat

@Alex Jurkiewicz @Stef @Bhavik Patel tag policies for identifying the gaps/tag compliance, and SCPs for enforcing, seem the way forward.

One question though: how do you ensure that once you identify gaps, the developers/engineers tag those resources before you start enforcing the SCP? What process do you follow, especially across teams/projects?

Bhavik Patel avatar
Bhavik Patel

For our company, we have two modes to ensure that we’re tagging resources.

  1. We are connected to a compliance application called Vanta, which ensures that certain resources that carry PII information are tagged properly
  2. It’s part of our PR process: if someone is provisioning resources, they need to be tagged
m avatar

I’ve been working with a tool called Resoto in my personal environments. I would investigate something like the steampipe aws compliance mod if I needed to conform to a standard … or else. I’m also a big fan of Service Catalog in general to deploy Infrastructure as Products, but I don’t like the way it does tags. But check it out, might be for you. Whatever you do, put it in a pipeline!

Resource Tagging | Resoto by Some Engineering Inc.

Resoto is able to create, update, and delete tags for resources that support tags.

AWS Compliance mod | Steampipe Hub

Run individual configuration, compliance and security controls or full compliance benchmarks for CIS, FFIEC, PCI, NIST, HIPAA, RBI CSF, GDPR, SOC 2, Audit Manager Control Tower, FedRAMP, GxP and AWS Foundational Security Best Practices controls across all your AWS accounts using Steampipe.

m avatar

Where is your tool hosted?

2023-03-23

Renesh reddy avatar
Renesh reddy

Hi @cloudposse-team

I have created a VPN and associated it with 2 private subnets which are routed through a NAT. I am able to connect to the VPN. When trying to connect to the RDS DB I get a “nodename or service not known” error. I have allowed the IPs and ports in the VPN and RDS security groups.

Not sure what the issue could be?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Lots of possible problems there, none of them specific to Cloud Posse tooling. You should ask AWS support (or use other public AWS support resources).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Trying to connect to the RDS DB, getting a “nodename or service not known” error.
Also, could you try using the IP directly to rule out DNS issues?
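
That error text is a DNS resolution failure from getaddrinfo, so a quick way to narrow it down from the VPN client is to check whether the RDS endpoint resolves at all (endpoint name hypothetical):

```python
import socket

def resolves(hostname, port=5432):
    # Returns the resolved addresses, or None if the DNS lookup fails entirely.
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(hostname, port)})
    except socket.gaierror:
        return None

# e.g. resolves("mydb.cluster-abc123.eu-west-1.rds.amazonaws.com")
```

If this returns None for the RDS endpoint while the console shows the instance as available, the problem is DNS over the VPN (e.g. the client VPN not using the VPC resolver), not the security groups.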

2023-03-24

2023-03-28

Nitin avatar

Hello Team,

Today we are facing a sudden issue with the rds-cluster module. It is trying to replace our existing infrastructure, because it is now using cluster_identifier_prefix instead of cluster_identifier. Any idea how we can resolve this issue?

Nitin avatar

resource "aws_rds_cluster" "primary" {
  ~ allocated_storage               = 1 -> (known after apply)
  ~ arn                             = "arn:aws:rds:[MASKED]:[MASKED]:cluster:renamed" -> (known after apply)
  ~ availability_zones              = [
      - "[MASKED]a",
      - "[MASKED]b",
      - "[MASKED]c",
    ] -> (known after apply)
  + cluster_identifier_prefix       = (known after apply)
  ~ cluster_members                 = [
      - "renamed-1",
    ] -> (known after apply)
  ~ cluster_resource_id             = "cluster-renamed" -> (known after apply)
  ~ database_name                   = "renamed" -> (known after apply)
  - enabled_cloudwatch_logs_exports = [] -> null
  ~ endpoint                        = "renamed.cluster-asdfsafdasdfasdf.[MASKED].rds.amazonaws.com" -> (known after apply)
  ~ engine_version_actual           = "13.9" -> (known after apply)
  ~ hosted_zone_id                  = "Z2VFMSZA74J7XZ" -> (known after apply)
  ~ iam_roles                       = [] -> (known after apply)
  ~ id                              = "renamed" -> (known after apply)
  - iops                            = 0 -> null
  ~ master_username                 = "renamed" -> (known after apply)
  ~ port                            = 5432 -> (known after apply)
  ~ reader_endpoint                 = "renamed.cluster-ro-asdfsafdasdfasdf.[MASKED].rds.amazonaws.com" -> (known after apply)
  - storage_type                    = "aurora" -> null # forces replacement
    tags                            = {}
    # (24 unchanged attributes hidden)
    # (1 unchanged block hidden)
}

Nitin avatar

this is a sudden issue; until last week it was working

Paula avatar

I had similar issues, but not with this particular module. Check if storage_type is filled with “aurora”; sometimes there are default attributes which are automatically filled, and when you re-apply and Terraform finds it null, it is taken as a change
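
In other words, pinning the attribute explicitly in the cluster resource (for an Aurora cluster, matching the value the plan shows being removed) stops the spurious replacement; something like:

```hcl
# In the aws_rds_cluster resource:
storage_type = "aurora"
```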

Nitin avatar

thanks for the prompt reply @Paula. The issue has been fixed

Alex Jurkiewicz avatar
Alex Jurkiewicz

it looks like you are using an older Terraform version too. Newer versions will tell you specifically why they want to replace an instance

Fizz avatar

Are you pinning versions of your module and provider?

2023-03-29

Hamdi Hassan avatar
Hamdi Hassan

Hey Everyone

Is anyone here good at regular expressions?

venkata.mutyala avatar
venkata.mutyala

Try ChatGPT. I had it write one successfully within 10 mins.

RB avatar

Have you tried regex.ai ?

1

2023-03-30

Christof Bruyland avatar
Christof Bruyland

Hi all, just a quick question: what are you using for web based management of the EKS cluster on AWS?

Eduardo Wohlers avatar
Eduardo Wohlers

Not sure if I understand. I usually use Fargate and the default web-UI.

Alex avatar

I guess he means something like kubernetes dashboard or so?

2023-03-31

Ryan Raub avatar
Ryan Raub

How does everyone manage the AWS service notification emails? A shared Google group is what we’ve been using, and I feel like this has grown to the point that it’s out of hand. I want to set up a better process so multiple people can triage these without letting any slip through the cracks. I really don’t want to point these emails directly at Jira, but it’s the current front runner of ideas.

Darren Cunningham avatar
Darren Cunningham

we use a Jira workflow with some automation smarts that assigns it to a member of a group, so the assignee is rotated bi-weekly. That way there is a person who is responsible for taking action on them. It doesn’t mean they need to be the person to do the work, just the person who’s charged with making sure the right people are pulled in to take the necessary action.

Ryan Raub avatar
Ryan Raub

thank you for sharing! having a rotation for triage is a good idea
