#aws (2023-03)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2023-03-01

Nave has removed themselves from this channel.
2023-03-03

Anyone have advice on automatically updating the AMI of a launch template for an ASG? Right now we just periodically apply Terraform. Current thought is to ignore the AMI field in Terraform and write a Lambda to update the launch template, and I’m wondering if there’s a better way.

this is exactly what we do in my company. We have a Lambda which looks up the latest AMI every week and updates a set of launch templates. Not only that, we also get the Lambda to optionally nuke all the instances in the ASG in order to rebuild them with the latest AMI and patches. I work for a bank, so keeping up to date with patches is mandatory. The only machines we don’t rebuild automatically are certain production machines which require us to schedule a formal “downtime” period for rebuilding. It means we don’t have to do any patching for any instances we run. All our instances are in ASGs. For machines which need deployment, we have set up CodeDeploy to trigger a deploy (using Ansible) to each instance.

using a Lambda is the best way to go IMHO

gotcha, thanks!

it means that in order to “patch” an instance, we simply terminate the instance. Automation then rebuilds the host

nice any reason you don’t use an instance refresh instead?

I guess you could do instance refresh tbh - never really thought about instance refresh - it sounds like a slightly more organised way of terminating which is fine

gotcha. I think instance refresh spins up new instances first so you’ve got some redundancy while the old instances are taken out of service

ah - fair enough - that sounds much better tbh. In fact having thought about it, I think we might do an instance refresh.

I’d love to share the Go code we use but I fear that I would get into trouble if I did as it’s work’s IP

understandable should be quick to write some python to do it
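To make this concrete: a minimal boto3 sketch of such a Lambda, assuming a launch template and ASG named web-asg (hypothetical placeholders) and the public AL2023 SSM parameter for the AMI lookup. The optional instance refresh at the end replaces instances gradually, as discussed above.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")
ssm = boto3.client("ssm")

# Hypothetical names -- substitute your own.
LAUNCH_TEMPLATE = "web-asg"
ASG_NAME = "web-asg"
AMI_PARAM = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"

def handler(event, context):
    # Look up the latest AMI ID from the public SSM parameter.
    ami_id = ssm.get_parameter(Name=AMI_PARAM)["Parameter"]["Value"]

    # Create a new launch template version based on the current latest,
    # changing only the AMI.
    version = ec2.create_launch_template_version(
        LaunchTemplateName=LAUNCH_TEMPLATE,
        SourceVersion="$Latest",
        LaunchTemplateData={"ImageId": ami_id},
    )["LaunchTemplateVersion"]["VersionNumber"]

    # Make the new version the default so the ASG picks it up.
    ec2.modify_launch_template(
        LaunchTemplateName=LAUNCH_TEMPLATE,
        DefaultVersion=str(version),
    )

    # Optionally roll the ASG onto the new AMI with an instance refresh,
    # which brings up replacements before taking old instances out of service.
    autoscaling.start_instance_refresh(
        AutoScalingGroupName=ASG_NAME,
        Preferences={"MinHealthyPercentage": 90},
    )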

would be amazing if AWS just offered this as an ASG feature

tick a box to keep the AMI up to date and call it a day

yeah - it’s not complex and yes, AWS would do well to make this “standard”

Run your TF on a schedule, and during the run use a CLI command to pull the latest version of the AMI and pass it as a TF variable
2023-03-05

Hi there, need some small help here: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application. Where do I upload the application code? Or does this Terraform code only set up the Beanstalk environment?
Terraform Module to define an ElasticBeanstalk Application

latter
2023-03-06

Hi All,
I’m working on something which would collect quarterly and annual report data from booking entity authorities (think Uber and taxi companies). Is there any downside to using *Aurora* over classic *RDS*, given that the traffic will be low throughout the year and increase for about four months a year when reports are due?
Any other suggestions are welcome as well. Thanks!

No major relevant differences. Aurora has a serverless offering, which can help with saving money for elastic workloads. But if your workload’s elasticity is measured in days+, you can do just as well with manual RDS vertical scaling
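For the manual vertical-scaling route, a sketch of what a scheduled scale-up could look like with boto3 (the instance identifier and class are hypothetical, and ApplyImmediately causes a brief restart):

import boto3

rds = boto3.client("rds")

# Scale up ahead of the reporting season, then back down afterwards.
# Identifier and instance class are hypothetical examples.
rds.modify_db_instance(
    DBInstanceIdentifier="reports-db",
    DBInstanceClass="db.r6g.2xlarge",
    ApplyImmediately=True,  # applies now, with a short outage
)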

For operations at high scale, the main benefit I’ve seen from Aurora is the autofailover and the automated snapshotting, backups, and replication. If you are good about managing your data, RDS can work well for less demanding loads. And as mentioned, you can scale up when needed if you already know when to expect higher loads.

Ya makes sense to skip Aurora for these reasons. Plus if it will just be used for reporting, and that output can be cached in another table, you might not care if a query even takes a long time to run.
2023-03-07


March 7, 2023: We’ve added language clarifying the requirement around using VPC Endpoints, and we’ve corrected a typo in the S3 bucket policy example. Today AWS launched two new global condition context keys that make it simpler for you to write policies in which Amazon Elastic Compute Cloud (Amazon EC2) instance credentials work only when […]

Essentially you need to use VPC endpoint policy for this effect. I wrote a blog explaining PrivateLink and VPC Endpoints https://www.cloudyali.io/blogs/demystifying-aws-privatelink-and-vpc-endpoint-services-everything-you-need-to-know
VPC Interface Endpoints (powered by AWS PrivateLink) are a powerful technology to improve security and performance in an AWS environment.
2023-03-15

Does anyone have a recommendation for a tool to visualize – or at least tabulate – which AWS resources use a particular security group?
Given how many resource types can point to an SG, I’m surprised there isn’t an established way to do that (other than pretending to delete the SG from the console, but this is dangerous and not programmatic). I found a couple of open source projects on GitHub, like a Python project sgdeps and a bash one sg-tool, but with < 40 stars, so I figure they are not the go-to solution for this problem.

I bet it could be accomplished using the new SaaS like steam pipe

Have you looked at https://github.com/wallix/awless? Not sure if it can do it, but it is designed to make it easier to list resources on AWS
A Mighty CLI for AWS

what does this mean: “SaaS like steam pipe”

Steampipe is an open source tool to instantly query your cloud services (e.g. AWS, Azure, GCP and more) with SQL. No DB required.

Cloud infrastructure as data in PostgreSQL

(also, I didn’t realize steampipe was open source)

Yeah I remembered seeing something like that but didn’t remember the name

there was something similar for terraform too, to query the state

Turns out, even steampipe can do that https://hub.steampipe.io/plugins/turbot/terraform

It’s really freaking rad. @Robert Horrox I see your point more and more.

All these platforms are supported: https://hub.steampipe.io/plugins

Here’s the list of data sources and APIs that Steampipe supports: Cloud Services, APIs, files, databases, etc.

yeah pretty cool if you just need to search through a couple of resource types
but I think it uses one table per resource type, so you have to know all the tables to search, the name of the SG column for each table, and query each table individually.
Granted, once you go through this exercise once, you can write a SQL script.
However, to create a map of all SGs to all the resources that use them, I suppose you would need a bunch of joins, again in a script.
Is that easier than writing a loop in bash? I’m not sure

Two things you can leverage steampipe for:
- You can pull the raw data from multiple AWS accounts at once with one SQL call and output to a json file for easy parsing with jq or other tools.
- It can be run as a service exposing a Postgres interface, which you can use from any programming language with a Postgres driver. My experience with the AWS CLI has been less than great
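To illustrate the second point, a sketch of querying the Steampipe service from Python with psycopg2. It assumes a local `steampipe service start` with default settings (port 9193, database steampipe, connection named aws; the real password is printed when the service starts) and uses information_schema to discover SG-bearing tables instead of memorizing them:

import psycopg2

# Assumed defaults for a local `steampipe service start`; the actual
# password is printed when the service starts.
conn = psycopg2.connect(
    host="localhost", port=9193, dbname="steampipe",
    user="steampipe", password="YOUR_SERVICE_PASSWORD",
)

with conn.cursor() as cur:
    # Find every AWS table with a security-group column, rather than
    # knowing the schema of each table up front.
    cur.execute("""
        SELECT table_name, column_name
        FROM information_schema.columns
        WHERE table_schema = 'aws'
          AND column_name LIKE '%security_group%'
        ORDER BY table_name
    """)
    for table, column in cur.fetchall():
        print(f"{table}.{column}")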

TIL you can configure EC2 to use the resource name as the hostname for an EC2 instance, so that when you log into an instance or query nodes with kubectl, you can skip the IP-address-to-resource-ID translation step.
When you launch an EC2 instance with a Hostname type of Resource name, the guest OS hostname is configured to use the EC2 instance ID.
• Format for an instance in us-east-1: ec2-instance-id.ec2.internal
• Example: i-0123456789abcdef.ec2.internal
• Format for an instance in any other AWS Region: ec2-instance-id.region.compute.internal
• Example: i-0123456789abcdef.us-west-2.compute.internal
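To opt in at launch time, a minimal boto3 sketch (the AMI and subnet IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose guest OS hostname is its instance ID.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder
    InstanceType="t3.micro",
    SubnetId="subnet-0123456789abcdef0",  # placeholder
    MinCount=1,
    MaxCount=1,
    PrivateDnsNameOptions={
        "HostnameType": "resource-name",       # hostname = instance ID
        "EnableResourceNameDnsARecord": True,  # resolvable in VPC DNS
    },
)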
2023-03-16

Hello everyone, Amazon Linux 2023 was just released on March 15th! This latest version comes with three significant features:
• Default IMDSv2 support with max two hops, which greatly enhances security posture. To learn more about IMDS, check out my blog: https://lnkd.in/gnTA_brw
• AL2023 utilizes gp3 volumes by default, reducing costs with improved performance.
• Versioned repositories offer more control over packages, allowing for better standardization of workloads.
Check out the official AWS announcement to learn more: https://lnkd.in/g2kPryj8
Understand how IMDSv2 improves security. Identify instances with IMDSv1 or IMDSv2. Know how to enforce IMDSv2.

I am excited to announce the general availability of Amazon Linux 2023 (AL2023). AWS has provided you with a cloud-optimized Linux distribution since 2010. This is the third generation of our Amazon Linux distributions. Every generation of Amazon Linux distribution is secured, optimized for the cloud, and receives long-term AWS support. We built Amazon Linux […]
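Relatedly, for instances that predate these defaults, a sketch of enforcing the same IMDSv2 settings on an existing instance with boto3 (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Require IMDSv2 tokens and allow two hops, matching the AL2023 default,
# so containerized workloads on the instance can still reach IMDS.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder
    HttpTokens="required",             # IMDSv1 calls are rejected
    HttpPutResponseHopLimit=2,
)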
2023-03-17

Question: I have a base role without iam:ListRoles. Is there any way to get the assumable roles attached to this role?
2023-03-19

Hello everyone! We are revealing the pricing for CodeSmash, our new No Code platform! If you want more info, feel free to check it out at https://codesmash.studio
CodeSmash is the scalable no-code platform that offers you full transparency to your code and the freedom to continue building.

The idea is very interesting. I currently work with a low-code platform here in Brazil that is constantly expanding with several clients. I am not familiar with Codesmash, but I like the idea. There is a lot of market potential and infinite possibilities with this type of product.

Yes, I agree. You will be able to build cloud infrastructure using No Code tools but the code will be saved in the background in your Git repos. Each Git repo will have a CI/CD pipeline attached to it so you have automated deploys with Terraform of your architecture as well.

After that, I will expand into frontend so that you can build fullstack apps as well.
2023-03-20

Hello, we are running an Elasticsearch stack (installed using Helm) on a 3-node Kubernetes cluster in AWS, installed using kops. The Elasticsearch cluster runs 1 client node, 2 data nodes, and 2 master nodes. Applications (apps in Kubernetes and Lambdas) send logs to the cluster via Logstash. We are now planning to move away from Kubernetes and migrate the applications to ECS (AWS Elastic Container Service). Right now we have around 300 indices, 20 GB in size, with 5 shards and 1 replica each. I have analyzed how to move the data if we migrate the Elasticsearch cluster to Docker containers running on EC2, and tested that it works fine. There are many reasons we are moving away from Kubernetes, but one of them is cost, and 75% of the k8s cluster was used by Elasticsearch. We don’t want to go with Elastic Cloud or OpenSearch as they are costly. *Now my question is: what is the best option for Elasticsearch once we move away from Kubernetes?*
- Docker running on EC2
- Elastic Container Service (not sure how this will work with EFS storage)
- On-prem (not sure if this is really an option, as all our applications run in the cloud). Please let me know if there is any better option. Any help is very much appreciated. Many thanks.

I advise against using EFS storage for your ES cluster for performance, scalability, and cost reasons. Alternatives to consider are instance-local volumes or EBS volumes, whether you run Docker on EC2 or ECS.

Today, I manage an ES with over 10TB of data, but it is running inside Kubernetes. I have had a lot of headaches refining its performance, but now I don’t have any more issues. The secret to your decision should not be based solely on cost, but rather on how much maintenance you will have, because depending on the criticality of your cluster, it directly affects the business.

thanks for the reply @Diego Maia
in my setup: 3 data nodes (4 GB memory each), 2 master nodes (2 GB each), 1 client (1 GB)
we still get OOM kills

do you think it’s a good idea to reindex our indices from 5 shards / 1 replica to 2 shards / 1 replica when we migrate?

Can I have ECS with EFS as storage?

Reindexing indices can be a good idea to improve performance and reduce resource overhead… In my case I cut the index replicas to 0, because the sync between data nodes was not good for us. But remember, that’s my use case.

Yes, it’s possible to use ECS with EFS as storage. It’s important to consider the performance and scalability limitations of EFS, but I don’t recommend it if your problem is performance.

for OOM… I recommend setting ES_JAVA_OPTS to include -Xms and -Xmx, which set the initial and maximum heap size of the JVM, and -XX:+UseCGroupMemoryLimitForHeap, which allows the JVM to use the OS cgroup memory limits to automatically adjust the heap size.

example
ES_JAVA_OPTS="-Xms4g -Xmx4g -XX:+UseCGroupMemoryLimitForHeap"

ours is set to 2 GB for data nodes, 700 MB for masters, and 600 MB for clients. thank you

We also thought about changing replicas to 0, but planned to keep 1 replica in the PROD env

Replicas are costly for data nodes only, but evaluate this depending on the case. You can also tune the number of shards that recover simultaneously, which helps when data nodes restart and you want a fast recovery.

I would strongly suggest looking at Amazon OpenSearch Service instead. Yes, it’s more expensive to run, but the administrative overhead is drastically reduced.
2023-03-21

For anyone in the US who took an AWS exam in person with Pearson: what did you use for your 2nd form of ID? I have a license for #1, but my passport is expired. It seems silly, but can I just show a credit card with my signature on the back? It meets the requirements.

Phone and confirm with them, I’d assume an expired passport should still be good. You can’t travel on it, but it’s still you.

I have a CloudFront distribution set up to serve a static website hosted on an S3 bucket. The website is built with React and React Router, which expects the base path to be in the root directory.
I also have a custom domain configured to point to the CloudFront distribution. However, when I navigate to /apply on the custom domain, I get a 404 error. After investigating, I found that CloudFront is routing the request to the S3 origin bucket, but it’s not serving the index.html file in the root directory as expected.
I tried to fix this by updating the Lambda@Edge function to properly direct the /apply path to the index.html file in the root directory of the S3 origin bucket. Here’s the updated function:
'use strict';
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const url = request.uri;
  const onlyApply = /^\/apply$/;
  if (onlyApply.test(url)) {
    const newOrigin = {
      custom: {
        domainName: 's3-website-us-east-1.amazonaws.com',
        port: 80,
        protocol: 'http',
        path: '',
        sslProtocols: ['TLSv1', 'TLSv1.1', 'TLSv1.2']
      }
    };
    request.origin = newOrigin;
    request.uri = '/index.html'; // append index.html to the URI
  }
  const response = event.Records[0].cf.response || {};
  if (onlyApply.test(url)) {
    response.status = '301';
    response.statusDescription = 'Moved Permanently';
    response.headers = response.headers || {};
    response.headers['location'] = [{ key: 'Location', value: 'https://' + request.headers.host[0].value + '/index.html' }]; // append index.html to the Location header
  }
  callback(null, response);
};
Although this works, now my users are getting redirected to the static website URL instead of the custom domain.

Don’t change the origin, and don’t do a 301 redirect. https://serverfault.com/questions/1001886/prevent-cloudfront-from-forwarding-part-of-path-to-origin-server
Background: I have an S3 bucket (Origin 1) that serves a static website under the domain example.com using CloudFront.
Goal:
Additionally I want example.com/subfolder to serve content from seco…

@Fizz I tried this recommendation but it doesn’t quite work with my case or I might not be comprehending something correctly.
My behavior pattern is:
- /apply* -> where the origin is an S3 bucket that does not expect /apply
- Default (*) -> where the origin is a website like www.google.com
My goal here is for <subdomain>/apply to go to S3 without /apply, and any other requests to go to the default origin (www.google.com).
With the changes from the serverfault post, what would end up happening is that another request would be made without the /apply, and it would go to the default behavior and return a different origin.

That should work. What behaviour did you observe when you only stripped the path, did not do a redirect, and did not change the host? To be clear, you should not be doing a 301 redirect and you should not be changing the host to the S3 bucket.
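A minimal sketch of that advice as a Lambda@Edge origin-request handler, written here in Python (Lambda@Edge also accepts Python runtimes). It only rewrites the URI and returns the request, so CloudFront carries on to the behavior’s configured S3 origin with no redirect and no origin swap:

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Serve the SPA entry point for /apply by rewriting the URI only.
    if request["uri"] == "/apply":
        request["uri"] = "/index.html"

    # Returning the request (not a response) lets CloudFront continue
    # to the behavior's configured origin.
    return request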

Definitely clear that I should not be doing a redirect; it’s just been my last-ditch effort after trying a million things to get this working.
The article that you linked lines up with my first iteration of trial and error. I recreated the setup, but it still fails to serve the index.html in the root directory of the S3 bucket correctly

Here is the CF function I’m using

How path behaviors are setup

Results when I try to navigate to my distribution’s URL + Path

What status code are you getting back? 404? 200? 401?

And it looks like you got a response as you got the assets folder back in the response

It’s a 404 - failed to load resource. Makes sense, since it’s trying to load the apply file instead of index.html

Hmm. Assuming you have checked the lambda is firing, and entering the if block and setting the uri, then after that I would turn on access logging for the bucket if access is via http, or turn on cloudtrail for s3 object access to see what API calls are being made against the bucket
2023-03-22

Hey there everyone!
Hope you’re all doing well. I’m looking for some insights on maintaining resource tag hygiene in AWS environments. I’d love to hear your thoughts on how you standardize and enforce resource tags across your teams, projects, and deployments.
Additionally, I’m currently working on a tool to help with tag hygiene, and I would be thrilled to receive any feedback or comments from you all. If anyone is interested in working more closely with me on this, feel free to DM me or drop a note in the comments.
Thank you all in advance for your help and support!

aws tag editor?

Start with an SCP to enforce tagging

I’ve managed ours through Terraform. There are some pretty verbose modules that have been created to assist with tagging. Haven’t had the need to use them though

@Alex Jurkiewicz @Stef @Bhavik Patel tag policies for identifying the gaps/tag compliance and SCPs for enforcement seem the way forward.
One question though: how do you ensure that once you identify gaps, the developers/engineers tag those resources before you start enforcing the SCP? What process do you follow, especially across teams/projects?
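For the gap-identification step, a minimal boto3 sketch that reports resources missing a required tag via the Resource Groups Tagging API. The tag key is a hypothetical example, and note the API only returns resource types it supports and resources that have (or once had) at least one tag:

import boto3

client = boto3.client("resourcegroupstaggingapi")
REQUIRED_TAG = "owner"  # hypothetical required tag key

# Page through the resources the API knows about and report any
# that lack the required key.
paginator = client.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"] for t in resource.get("Tags", [])}
        if REQUIRED_TAG not in tags:
            print(resource["ResourceARN"])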

For our company, we have two ways to ensure that we’re tagging resources:
- We are connected to a compliance application called Vanta, which ensures that certain resources carrying PII information are tagged properly
- It’s part of our PR process. If someone is provisioning resources, they need to be tagged

I’ve been working with a tool called Resoto in my personal environments. I would investigate something like the steampipe aws compliance mod if I needed to conform to a standard … or else. I’m also a big fan of Service Catalog in general to deploy Infrastructure as Products, but I don’t like the way it does tags. But check it out, might be for you. Whatever you do, put it in a pipeline!
Resoto is able to create, update, and delete tags for resources that support tags.

Run individual configuration, compliance and security controls or full compliance benchmarks for CIS, FFIEC, PCI, NIST, HIPAA, RBI CSF, GDPR, SOC 2, Audit Manager Control Tower, FedRAMP, GxP and AWS Foundational Security Best Practices controls across all your AWS accounts using Steampipe.

Where is your tool hosted?
2023-03-23

I have created a VPN and associated it with 2 private subnets which are routed through a NAT. I am able to connect to the VPN, but when trying to connect to the RDS DB I get the error "nodename nor servname provided, or not known". I have allowed the IPs and ports in the VPN and RDS security groups.
Not sure what the issue could be?

Lots of possible problems there, none of them specific to Cloud Posse tooling. You should ask AWS support (or use other public AWS support resources).

Trying to connect to the RDS DB I get the error "nodename nor servname provided, or not known".
Also, could you try using the IP directly to rule out DNS issues?
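That error comes from getaddrinfo, so a quick way to separate DNS from connectivity is to test name resolution over the VPN first (the endpoint below is a placeholder):

import socket

# Placeholder; use the RDS endpoint from the console.
endpoint = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"

try:
    # Resolve the name the same way the failing client does.
    addrs = socket.getaddrinfo(endpoint, 5432)
    print(sorted({a[4][0] for a in addrs}))
except socket.gaierror as exc:
    # "nodename nor servname provided, or not known" lands here:
    # the VPN's DNS is not resolving the (private) RDS name.
    print(f"DNS failure: {exc}")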
2023-03-28

Hello Team,
We are facing a sudden issue with the rds-cluster module today. It is trying to replace our existing infrastructure because it is now using cluster_identifier_prefix instead of cluster_identifier. Any idea how we can resolve this issue?

resource "aws_rds_cluster" "primary" {
~ allocated_storage = 1 -> (known after apply)
~ arn = "arn:aws:rds:[MASKED]:[MASKED]:cluster:renamed" -> (known after apply)
~ availability_zones = [
- "[MASKED]a",
- "[MASKED]b",
- "[MASKED]c",
] -> (known after apply)
+ cluster_identifier_prefix = (known after apply)
~ cluster_members = [
- "renamed-1",
] -> (known after apply)
~ cluster_resource_id = "cluster-renamed" -> (known after apply)
~ database_name = "renamed" -> (known after apply)
- enabled_cloudwatch_logs_exports = [] -> null
~ endpoint = "renamed.cluster-asdfsafdasdfasdf.[MASKED].[rds.amazonaws.com](http://rds.amazonaws.com)" -> (known after apply)
~ engine_version_actual = "13.9" -> (known after apply)
~ hosted_zone_id = "Z2VFMSZA74J7XZ" -> (known after apply)
~ iam_roles = [] -> (known after apply)
~ id = "renamed" -> (known after apply)
- iops = 0 -> null
~ master_username = "renamed" -> (known after apply)
~ port = 5432 -> (known after apply)
~ reader_endpoint = "renamed.cluster-ro-asdfsafdasdfasdf.[MASKED].[rds.amazonaws.com](http://rds.amazonaws.com)" -> (known after apply)
- storage_type = "aurora" -> null # forces replacement
tags = {
}
# (24 unchanged attributes hidden)
# (1 unchanged block hidden)
}

this is a sudden issue. it was working until last week

i had similar issues, but not with this particular module. Check if storage_type is filled with "aurora"; sometimes there are default attributes which are automatically filled, and when you re-apply and Terraform finds you have it as null, it is taken as a change

thanks for the prompt reply @Paula. The issue has been fixed

it looks like you are using an older Terraform version too. Newer versions will tell you specifically why they want to replace an instance

Are you pinning versions of your module and provider?
2023-03-29

Hey Everyone
Anyone here good with regular expressions?

Try ChatGPT. I had it write one successfully within 10 minutes.

2023-03-30

Hi all, just a quick question: what are you using for web based management of the EKS cluster on AWS?

Not sure if I understand. I usually use Fargate and the default web-UI.

I guess he means something like the Kubernetes dashboard?
2023-03-31

How does everyone manage the AWS service notification emails? A shared Google group is what we’ve been using, and I feel like this has grown to the point that it’s out of hand. I want to set up a better process so multiple people can triage these without letting any slip through the cracks. I really don’t want to point these emails directly at Jira, but it’s the current front-runner of ideas.

we use a jira workflow with some automation smarts that assigns each notification to a member of a group, with the assignee rotated bi-weekly. that way there is a person who is responsible for taking action on them. it doesn’t mean they need to be the person to do the work, just the person charged with making sure the right people are pulled in to take the necessary action.

thank you for sharing! having a rotation for triage is a good idea