#aws (2024-06)

Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2024-06-03

Enrique Lopez avatar
Enrique Lopez

Hi guys, I’m preparing a slide for a training session with some mentees. I want to present this info to explain the AWS Developer Tools in a single chart, along with this summary:

• CodeStar <- Just an interface to manage several pipelines

• Cloud9 <- Just an IDE, like VSCode but in the cloud

• CodeBuild <- Similar to Github Actions

• CodePipeline <- A group of codebuilds/codedeploys

• CodeDeploy <- To deploy your code, usually to move your code from S3 to EC2

• CodeCommit <- Like GitHub

My question is: does this make sense to you? Can I make it clearer? What would you change?

loren avatar

i feel like CodeBuild is actually pretty strong, though certainly feels different than github actions. CodeCommit is pretty awful/weak though

loren avatar

i have loads of use cases where i prefer codebuild over github actions, particularly anything that requires credentials or network-level access to vpcs

1
Darren Cunningham avatar
Darren Cunningham

yeah I think CodeBuild is a lot more powerful than GitHub Actions…it’s just that you have to build out everything. So in turn, GitHub Actions is easier to get started with and covers most scenarios for most teams, and is therefore “better” in most cases.

1
loren avatar

there’s even a slick new feature that blurs the lines, where your github action runs the codebuild job directly

1
Enrique Lopez avatar
Enrique Lopez

Ok, we can probably say that CodeBuild is harder, but not weaker

loren avatar

i mean, is it hard to run a shell command? that’s basically what a buildspec is doing. just a bunch of shell commands

Enrique Lopez avatar
Enrique Lopez

Ok, so we can probably remove that comment, to avoid bias

loren avatar

i suppose if you think of github actions as a library of published “modules” that are on their own running some defined set of shell commands, then maybe so. but then of course you run into the situation where the action doesn’t support your use case. and now you have to fix the upstream, or fork it and run your own, or fall back to shell commands anyway

Enrique Lopez avatar
Enrique Lopez

yeah that could be a relevant thing

Adnan avatar

just wanted to say thanks for sharing your thoughts. i never used any aws ci/cd tools, i always thought “why would i besides github actions”, but your comments made me very curious.

2
Chris Wahl avatar
Chris Wahl

The secret is that most of these tools have parity in one way or another, they’re just aligned and opinionated to a sub-set of use cases. Except Jenkins, that’s fairly universally disliked.

1
1
Adnan avatar

For me the biggest advantage of GH Actions is the reusability of actions and workflows (let’s not talk about security right now) and also that someone else is managing the infrastructure.

Darren Cunningham avatar
Darren Cunningham

hah, you just hit on the two biggest reasons that teams choose not to use GitHub Actions.

GHA marketplace is a significant supply chain attack risk. best mitigation is to enforce version pinning (to the hash, not just version tag), but brand-jacking and typo-squatting are possible too. People rarely actually check the code of the action they’re using. People assume nobody would put malicious code into OSS, <insert obligatory xz reference>.

Managed infrastructure is great…until you realize how much cheaper it can be at scale to run “your own”. I had one pipeline that was going to be like $4k/month on GHA (needed the largest workers they offered), set up self-hosted runners with EC2 Spot fleet and it was like $200/month. That and BYO can immensely speed up pipeline run times on occasion.

3
Adnan avatar

That’s why I was jokingly saying let’s not talk about security. But I didn’t necessarily mean marketplace actions. I just meant the ability to easily reuse actions and workflows. The cost is different for different orgs. In my case it’s much cheaper than your anecdote.

Darren Cunningham avatar
Darren Cunningham

I get it, I was just expanding for the lurkers

2024-06-04

andrei n avatar
andrei n

Hello! How can I add custom Kafka server configs to the msk-apache-kafka-cluster Terraform module, e.g.: kafka_configuration_properties = { "auto.create.topics.enable": true }

andrei n avatar
andrei n

Error: Unsupported argument

  on main.tf line 91, in module "kafka":
  91:   kafka_configuration_properties = {

An argument named "kafka_configuration_properties" is not expected here.

Hao Wang avatar
Hao Wang

which version of the module is used?

andrei n avatar
andrei n

source  = "cloudposse/msk-apache-kafka-cluster/aws"
version = "2.4.0"

Hao Wang avatar
Hao Wang

yeah, kafka_configuration_properties is not a variable for this module, are you following a tutorial?

andrei n avatar
andrei n

I am not using any tutorial. In order to make the cluster publicly accessible, the following setting is required:

allow.everyone.if.no.acl.found = false

andrei n avatar
andrei n

how to achieve this?

Hao Wang avatar
Hao Wang

variable "properties" {
Hao Wang avatar
Hao Wang

yeah, confirmed it is

andrei n avatar
andrei n

lovely, thanks a lot

Hao Wang avatar
Hao Wang

np
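
For anyone following along in the archive, a minimal sketch of the fix, assuming the module's properties input (the variable shown above) takes a map of Kafka server.properties entries; the values here are illustrative:

module "kafka" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "2.4.0"

  # ... other required inputs (vpc_id, subnet_ids, kafka_version, etc.) ...

  # Assumption: custom server.properties entries go through `properties`,
  # not `kafka_configuration_properties`.
  properties = {
    "auto.create.topics.enable"      = "true"
    "allow.everyone.if.no.acl.found" = "false"
  }
}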

2024-06-05

maarten avatar
maarten

Has anyone ever been confronted with "Parameter: SpotFleetRequestConfig.IamFleetRole is invalid." when doing a spot request? The role, trust policy, and policy all look fine to me. A Redditor had the same unsolved question. It works in one region but not in the other, so it does not look like a policy issue to me.

Hao Wang avatar
Hao Wang
Running into error when launching spot instance request with IAM account

I’m trying to create spot EC2 instance with IAM user account.

I got this error message and I can’t go further

Parameter: SpotFleetRequestConfig.IamFleetRole is invalid. It seems like administrator
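
Not a diagnosis of the error above, but for reference, a minimal Terraform sketch of the fleet role Spot Fleet expects: a trust policy for the spotfleet.amazonaws.com service principal plus the AWS-managed AmazonEC2SpotFleetTaggingRole policy (the role name is illustrative). A role missing either piece in the failing region can trigger this kind of rejection.

data "aws_iam_policy_document" "spot_fleet_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["spotfleet.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "spot_fleet" {
  name               = "spot-fleet-request-role" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.spot_fleet_assume.json
}

# AWS-managed policy that lets the fleet launch, terminate, and tag instances.
resource "aws_iam_role_policy_attachment" "spot_fleet" {
  role       = aws_iam_role.spot_fleet.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2SpotFleetTaggingRole"
}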

2024-06-06

2024-06-07

Matt Gowie avatar
Matt Gowie

Would appreciate any help on this insane AWS Amplify Hosting issue: https://github.com/aws-amplify/amplify-hosting/issues/2563

#2563 miss cloud front on browser

Before opening, please confirm:

☑︎ I have checked to see if my question is addressed in the FAQ.
☑︎ I have searched for duplicate or closed issues.
☑︎ I have read the guide for submitting bug reports.
☑︎ I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.

App Id

d2joh8jz57nkvq

Region

ap-northeast-2

Amplify Console feature

Performance

Describe the bug

Hello .

I installed Next.js following the AWS guide and hosted it on amplify.

However, every time I made a request to my service, the response was very slow. Looking at the response headers, the x-cache: Miss from cloudfront header is always present in the browser. So I followed the instructions and enabled performance mode on my branch in Amplify, but I’m still having the same problem.

The curious thing is that if you look at the x-cache header with the curl command, it was hit.

curl -X HEAD -i https://v.place.hitit.xyz/store/80e0a902-490f-4d96-b18d-988c852b2975 -s | grep -Fi x-cache
x-cache: Hit from cloudfront

I suspect this is region related. Could you please check this as well?

lambdaEdge : us-east-1
lambda : us-east-1
s3 : us-east-1
amplify : ap-northeast-2

Expected behavior

I don’t have any customHttp.yml. Is that the problem?

Reproduction steps

just enter my website.

https://v.place.hitit.xyz/
https://v.place.hitit.xyz/store/80e0a902-490f-4d96-b18d-988c852b2975

Build Settings

No response

Additional information

No response

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @mike186

mike186 avatar
mike186

Thank you!

2024-06-10

omkar avatar

Issue: Application Performance

Explanation: We have deployed all our microservices on AWS EKS. Some are backend services that communicate internally (around 50 services), and our main API service, “loco,” handles logging and other functions. The main API service is accessed through the following flow: AWS API Gateway -> Nginx Ingress Controller -> Service. In the ingress, we use path-based routing, and we have added six services to the ingress, each with a corresponding resource in a single API Gateway. Our Angular static application is deployed on S3 and accessed through CloudFront. The complete flow is as follows: CloudFront -> Static S3 (frontend) -> AWS API Gateway -> VPC Link -> Ingress (Nginx Ingress Controller with path-based routing) -> Services -> Container.

Problem: Occasionally, the login process takes around 6-10 seconds, while at other times it only takes 1 second. The resource usage of my API services is within the limit. Below are the screenshots from Datadog traces of my API service:

Screenshot of the API service when it took only 1 second

Screenshot of the API service when it took 6-10 seconds

Request for Help: How should I troubleshoot this issue to identify where the slowness is occurring?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Dale avatar

I take it you have reviewed the flame graph of the login service and done a profile of it to rule out the login service itself being the bottleneck?

Dale avatar

I only ask because on your second image there are 4 times as many spans being indexed, so it is making me wonder whether between the two screenshots something has invalidated a cache your app relies on and it is having to rebuild that? Maybe a new pod of that service has been spun up from a scaling event and your containers don’t come with the cache prewarmed?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse) bumping this up

sharma.mohit332 avatar
sharma.mohit332

Also, as per the screenshot, live-locobuzz-api-sql-server is taking almost 5x the response time. Did you have a chance to check which query is expensive in the latter one?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

Yeah, to sort of condense what’s mentioned above, APM allows you to instrument code, effectively setting timers at different points of execution which resolve when calls return.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

You’ll want to study spans captured to see which ones have the majority of time, and then further dig into (instrument) those spans

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

If you cannot instrument any deeper into the call (i.e. the span leaves to another service), then you’ll need to see if you can instrument that resource/dependency

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

if you post more about the calling functions and their dependencies, we can advise how to proceed

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, I’m trying to understand why the ECS cluster module is trying to recreate the policy attachments every time I add more than one module instance via a for_each. The plan shows the ARN will change, but it’s an AWS managed policy, so it won’t change:

~ policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" -> (known after apply) # forces replacement

the resource address is:

module.ecs_clusters["xxx"].module.ecs_cluster.aws_iam_role_policy_attachment.default["AmazonSSMManagedInstanceCore"]

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

probably best to use #terraform for terraform questions

Juan Pablo Lorier avatar
Juan Pablo Lorier

@Erik Osterman (Cloud Posse) sorry, I thought this was the terraform aws channel. Will post in terraform then

np1
ecatevatis avatar
ecatevatis

I’m getting an error when creating an ec2-instance while trying to reference the private subnet from the dynamic_subnets module I created. Any ideas how to reference the private_subnet_id in the ec2-instance?

subnet = module.dynamic_subnets.private_subnet_id

ecatevatis avatar
ecatevatis

That was the wrong screenshot, please see below.

ecatevatis avatar
ecatevatis

well, this was one short-term fix: subnet = element(module.dynamic_subnets.private_subnet_ids, 0)

ecatevatis avatar
ecatevatis

I guess it randomly chooses which AZ it goes into?
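
A small sketch of that short-term fix for reference, assuming the Cloud Posse ec2-instance module (or any module exposing the same subnet input used above); element(list, 0) is deterministic, so the AZ is not random:

module "ec2" {
  source = "cloudposse/ec2-instance/aws" # assumption: any module with the `subnet` input used above works the same way

  # ... other required inputs (ami, instance_type, etc.) ...

  # private_subnet_ids is a list output, so pick one element explicitly.
  # element(..., 0) always returns the first subnet in the list, so the
  # instance lands in whichever AZ that first private subnet belongs to.
  subnet = element(module.dynamic_subnets.private_subnet_ids, 0)
}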

2024-06-11

Mehak avatar

This is the policy we have to enforce Multi-AZ on ElastiCache clusters. Do we have a similar policy to enforce Multi-AZ on RDS Aurora and Elasticsearch?

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "elasticache:CreateCacheCluster",
                "elasticache:CreateReplicationGroup"
            ],
            "Resource": [
                "arn:aws:elasticache:us-east-1:4852:replicationgroup*",
                "arn:aws:elasticache:us-east-1:4852:cluster*"
            ],
            "Condition": {
                "StringNotEqualsIgnoreCase": {
                    "elasticache:MultiAZEnabled": true
                }
            }
        }
    ]
}
Mehak avatar

We are creating AWS resources using Terraform, which uses a Terraform role. We do not want to create datastores if Multi-AZ is not enabled on them. So if we had such a policy for RDS Aurora and Elasticsearch, that would be great!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Actions, resources, and condition keys for Amazon RDS - Service Authorization Reference

Lists all of the available service-specific resources, actions, and condition keys that can be used in IAM policies to control access to Amazon RDS.
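
For the non-Aurora RDS side, that reference includes an rds:MultiAz condition key, so a deny along the lines of the ElastiCache policy above is roughly sketched below (Terraform policy-document form; the account ID and region are placeholders, and the key name and type should be verified against the reference before enforcing). As the rest of the thread notes, Aurora clusters do not expose an equivalent key.

data "aws_iam_policy_document" "require_rds_multi_az" {
  statement {
    effect    = "Deny"
    actions   = ["rds:CreateDBInstance"]
    resources = ["arn:aws:rds:us-east-1:111111111111:db:*"] # placeholder account ID

    # Deny creating instances that are not Multi-AZ. Aurora instances are
    # managed at the cluster level, so this sketch does not cover Aurora.
    condition {
      test     = "Bool"
      variable = "rds:MultiAz"
      values   = ["false"]
    }
  }
}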

Mehak avatar

@Andriy Knysh (Cloud Posse) But this doesn’t work in RDS Aurora

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i just searched for that. You can search for IAM policy keys in RDS Aurora

Mehak avatar

I couldn’t find any such parameters there. Do we have some other way to enforce such rule?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like there are no such policy conditions for Aurora (see https://stackoverflow.com/questions/73164178/iam-policy-to-force-enable-aurora-read-replica), although the comment was from 2 years ago

IAM policy to force enable Aurora Read Replica

I’d like to enforce that when IAM users create an Aurora Postgres cluster, they have to tick "Create an Aurora Replica or Reader node in a different AZ" in the Multi-AZ deployment option. So I create…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Why does AWS RDS Aurora have the option of "Multi-AZ Deployment" when it does replication across different zones already by default?

When launching an Aurora instance I have the option of “Multi-AZ Deployment”, which it describes as “Specifies if the DB Instance should have a standby deployed in another Availability Zone.”

Howe…

Mehak avatar

I am thinking to go for sentinel policies

Mehak avatar

or, if you have any idea about Open Policy Agent, which one would be better?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

where are you thinking of running the OPA agent?

Mehak avatar

in tf cloud

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Mehak do you need further assistance here?

Mehak avatar

@Gabriela Campana (Cloud Posse) yes

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i’m not familiar with TF cloud sentinel policies. There are many docs about that, e.g. https://developer.hashicorp.com/terraform/cloud-docs/policy-enforcement/sentinel. Maybe other people can help here

Defining Policies - Sentinel - HCP Terraform | Terraform | HashiCorp Developer

Learn how to use Sentinel policy language to create policies, including imports to define rules, useful functions, and more.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

I could not find anything related to TF cloud sentinel policies in Cloud Posse projects history. @Erik Osterman (Cloud Posse) to confirm if we have any SME on TF cloud sentinel policies

2024-06-12

Mehak avatar

Can someone help me with a Sentinel policy to enforce Multi-AZ on RDS Aurora and Elasticsearch clusters? I will create the policy in TF Cloud.

2024-06-13

2024-06-17

Alex Atkinson avatar
Alex Atkinson

I don’t think the updated cert chain will be added to this npm module before August 22. https://github.com/mysqljs/mysql/blob/master/lib/protocol/constants/ssl_profiles.js

Alex Atkinson avatar
Alex Atkinson

It’ll be the kick that some need to get on the mysql2 module.

2024-06-19

2024-06-20

2024-06-21

Sudheer avatar
Sudheer

Hi folks, have you ever wanted a generative AI assistant that could go through S3, Redis, RDS, Confluence, or an internet web crawler and answer questions about your product? If you’re building this from scratch, think again. Check out Amazon Q, and see how First Orion optimized their workflow with Amazon Q. Check the link above for a detailed post describing the architecture and other aspects. Feel free to comment and share your views.

1

2024-06-30

ecatevatis avatar
ecatevatis

Terrascan isn’t properly identifying any of the cloudposse modules for compliance. Is there a scanner that works with cloudposse modules?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is that compliance is based on the parameters you pass, and standards can be contradictory, for example requiring different retention periods.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We ensure our modules are sufficiently parameterized, but the end user needs to pass the parameters

2
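
To make that concrete, a hedged illustration of "passing the parameters": the module source below is real, but the input names are recalled from memory and should be checked against the module's documentation before relying on them.

module "logs_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "~> 4.0" # assumption: pin to whichever version you actually use

  name = "example-logs"

  # A scanner rule like "S3 versioning enabled" or "deny non-TLS access" only
  # passes if you opt in here; the module exposes the knobs, you set the values.
  versioning_enabled      = true # assumed input name
  allow_ssl_requests_only = true # assumed input name
}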