#aws (2022-12)

aws Discussion related to Amazon Web Services (AWS)


Archive: https://archive.sweetops.com/aws/

2022-12-01

slackbot avatar
slackbot
01:11:38 PM

Nave has joined this channel by invitation from SweetOps.


2022-12-06

Arash Bahrami avatar
Arash Bahrami

Hello everyone, I need a little bit of help with AWS Cognito user pool groups and IAM roles. I have a Cognito user pool that only uses Google sign-in and has a group for each tenant. Right now I’m using the federated identity to assign a role to my signed-in users, but this role applies to all tenants. I want to use the Cognito group roles instead, but I can’t get credentials for the group role. Can someone help me find what I’m doing wrong?

Arash Bahrami avatar
Arash Bahrami
Comment on #146 User doesn't have permissions even though they are assigned to a group with permissions

@Prefinem @mlabieniec Gents, I think this may be relevant to this discussion.

Check that the role assigned to the user group has a trust relationship with the federated identity provider. It needs this so authenticated users can assume the role.

You can build an appropriate role for a user pool group by doing this:

• Open the AWS console
• Go to the IAM section
• Pick Roles
• Pick web identity
• Choose Amazon Cognito
• Paste in your identity pool ID (the federated one)
• Click Next
• Add/create the policies you need for the user group, like S3 access
• Give the role a name and save it
• Go to your user pool group, edit it, and assign the role you just created
• Open Federated Identities -> Authentication providers -> Authenticated role selection
• Set the Authenticated role selection dropdown to “Choose role from token”
• Optionally set Role resolution to DENY
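
The trust policy the web identity wizard produces for such a group role looks roughly like this; a minimal sketch in Python that builds the policy document (the identity pool ID is a placeholder, not a real pool):

```python
import json

# Hypothetical identity pool ID -- replace with your own federated identity pool.
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

def cognito_group_trust_policy(identity_pool_id: str) -> dict:
    """Trust policy that lets authenticated users of the given Cognito
    identity pool assume this role (the role attached to a user pool group)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Federated": "cognito-identity.amazonaws.com"},
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    # Only tokens issued for this identity pool...
                    "StringEquals": {
                        "cognito-identity.amazonaws.com:aud": identity_pool_id
                    },
                    # ...and only authenticated (not guest) identities.
                    "ForAnyValue:StringLike": {
                        "cognito-identity.amazonaws.com:amr": "authenticated"
                    },
                },
            }
        ],
    }

print(json.dumps(cognito_group_trust_policy(IDENTITY_POOL_ID), indent=2))
```

Without this trust relationship on the group’s role, “Choose role from token” fails because STS refuses the AssumeRoleWithWebIdentity call.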

References:

Fine grained auth

Role based access control

2022-12-08

Balazs Varga avatar
Balazs Varga

hello, anybody seeing issues in us-east-2c?

Balazs Varga avatar
Balazs Varga

it was a huge spot instance termination

z0rc3r avatar

• I wouldn’t consider a “huge spot instance termination” an “issue”; this is expected no matter the region and/or AZ
• your us-east-2c is different from someone else’s us-east-2c: https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html
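
The second point is that AZ *names* like us-east-2c are shuffled per account, while AZ *IDs* are stable. A small sketch of extracting the mapping from an EC2 DescribeAvailabilityZones-shaped response (the sample data below is illustrative, not from a real account):

```python
def zone_name_to_id(describe_azs_response: dict) -> dict:
    """Build a {zone name: zone ID} map from a DescribeAvailabilityZones response."""
    return {
        az["ZoneName"]: az["ZoneId"]
        for az in describe_azs_response["AvailabilityZones"]
    }

# Illustrative response -- in this account "us-east-2c" happens to be the
# physical AZ "use2-az1"; in another account the same name can map to a
# different physical AZ, so two people's "us-east-2c" may differ.
sample = {
    "AvailabilityZones": [
        {"ZoneName": "us-east-2a", "ZoneId": "use2-az2"},
        {"ZoneName": "us-east-2b", "ZoneId": "use2-az3"},
        {"ZoneName": "us-east-2c", "ZoneId": "use2-az1"},
    ]
}
print(zone_name_to_id(sample)["us-east-2c"])  # use2-az1 in this sample account
```

In practice you would feed this the output of `aws ec2 describe-availability-zones` (or the boto3 equivalent) to compare AZ IDs across accounts.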

Balazs Varga avatar
Balazs Varga

yeah, it is not an issue. I just commented on what I found in our logs

vicentemanzano6 avatar
vicentemanzano6

Hello! I have one application load balancer with listeners pointing to two different target groups, and I want to do blue/green deployments into ECS with CodeDeploy. I have one listener on port 80 that redirects any HTTP traffic to HTTPS on port 443, but the port 443 listener only points to one of the target groups. What listeners/config should I have so that traffic can be directed to the second target group while the HTTPS redirection stays in place?

Warren Parad avatar
Warren Parad

what do the listeners have to do with target groups or blue/green deployments?

vicentemanzano6 avatar
vicentemanzano6

@Warren Parad because I need to have two listeners in the blue/green deployment config, one for each target group. I’m listening on ports 80 and 8080 (HTTP) and redirecting each to HTTPS; port 443 points to one of the target groups, but I don’t know what to do with the second listener, since port 443 seems to be the only one handling the HTTPS redirection.

Warren Parad avatar
Warren Parad


because I need to have 2 listeners in the blue/green deployment config
why

Warren Parad avatar
Warren Parad

that’s not how blue green deployments work. you don’t change your listeners

Warren Parad avatar
Warren Parad

Blue/green deployment happens in ECS, not in the load balancer. However, if the load balancer you are using allows weight-based routing, then you can update the target groups, but not the listeners.
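
For load balancers that support it, the weight-based routing mentioned here is expressed as a forward action over multiple target groups in the ELBv2 API. A minimal sketch, with placeholder target group ARNs:

```python
# Placeholder ARNs -- substitute your real blue/green target group ARNs.
BLUE_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blue/1111111111111111"
GREEN_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/2222222222222222"

def weighted_forward_action(blue_weight: int, green_weight: int) -> dict:
    """Listener default action (ModifyListener shape) that splits traffic
    between the blue and green target groups by weight."""
    return {
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": BLUE_TG, "Weight": blue_weight},
                {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
            ]
        },
    }

# All traffic on blue; a cutover would shift weights, e.g. (90, 10) then (0, 100).
action = weighted_forward_action(100, 0)
```

The listener itself stays in place throughout; only the weights on the target groups change during the cutover.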

vicentemanzano6 avatar
vicentemanzano6

@Warren Parad I mean currently I am using CodeDeploy to make a blue/green deployment into ECS and CodeDeploy asks me to add 2 listeners - but I get what you mean I will look more into it thanks

Warren Parad avatar
Warren Parad

why would it ask you to add two listeners? That sounds incorrect. Can you screenshot what you are looking at that asks you to add two listeners and that suggests those listeners have anything to do with blue/green deployment? Having two listeners, one for 80 and one for 443, is correct, but it has nothing to do with deployment.

vicentemanzano6 avatar
vicentemanzano6

@Warren Parad Oh I see, so then I should just have those two listeners and make sure the HTTPS 443 listener points to both the blue and green target groups?

Warren Parad avatar
Warren Parad

yes

vicentemanzano6 avatar
vicentemanzano6

It worked thank you so much!


2022-12-09

2022-12-14

2022-12-19

bricezakra avatar
bricezakra

Hello everyone, how can I move my AWS CodePipeline from one environment to another?

2022-12-20

Renesh reddy avatar
Renesh reddy

Are there any ETL load, transform, and validation services in AWS?

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Glue and EventBridge Pipes

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Lambda and step functions too
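
As a sketch of the Lambda-plus-Step-Functions option: a minimal Amazon States Language definition for an extract, transform, and validate flow. The Lambda function ARNs are hypothetical placeholders:

```python
import json

# Minimal Step Functions state machine (Amazon States Language) sketching an
# ETL flow: each stage is a Lambda task; ARNs are placeholders.
etl_state_machine = {
    "Comment": "Sketch of an extract -> transform -> validate pipeline",
    "StartAt": "Extract",
    "States": {
        "Extract": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "Transform",
        },
        "Transform": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transform",
            "Next": "Validate",
        },
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "End": True,
        },
    },
}

# This JSON string is what you would pass as the state machine definition
# when creating it (e.g. via the console, CLI, or Terraform).
definition = json.dumps(etl_state_machine)
```

Retry/Catch blocks and a failure state would normally be added per task; they are omitted here to keep the sketch short.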

Renesh reddy avatar
Renesh reddy

Our legacy stack was lifted and shifted to AWS onto a Redshift/Postgres database. There is no real-time system/application around the database, just a series of ETL jobs to move data in and out of it.

Renesh reddy avatar
Renesh reddy

Is Glue alone fine to use, or do we need to include other services? @RB (Ronak) (Cloud Posse)

Renesh reddy avatar
Renesh reddy

just want to design first, then implement later

2022-12-21

Michael Dizon avatar
Michael Dizon

hey, question for anyone working with AWS Config. I’m using the S3_BUCKET_REPLICATION_ENABLED rule, which checks whether replication is turned on for S3 buckets in the account. The issue I’m having is that it flags the bucket I’m using as the replication destination as non-compliant. Is there a way to exclude this bucket from being evaluated under that rule?

2022-12-23

Renesh reddy avatar
Renesh reddy

Hello team, is anyone familiar with the error below?

JobName:merman-poc and JobRunId:jr_422ff47af063e3dceeea799d162050718848b4bfa3a2e64d459d508237024522_attempt_2 failed to execute with exception Could not find connection for the given criteria Failed to get catalog connections given names: my_rds_connection (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: InvalidInputException; Request ID: 0074eca7-5207-4f7b-addb-b6dfdccf9d8c; Proxy: null)
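
This error usually means the Glue job does not have the named catalog connection attached to it: referencing `my_rds_connection` from the script isn’t enough. A sketch of the job definition that attaches the connection (the role ARN and script path are placeholders; boto3’s `glue.create_job`/`update_job` accept this `Connections` shape):

```python
# Sketch of Glue job arguments that attach a catalog connection to the job.
# Role ARN and script location below are hypothetical placeholders.
job_config = {
    "Name": "merman-poc",
    "Role": "arn:aws:iam::123456789012:role/GlueJobRole",
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/job.py",
    },
    # Any connection the script uses must be listed here, otherwise Glue
    # fails with "Could not find connection for the given criteria".
    "Connections": {"Connections": ["my_rds_connection"]},
}

# e.g. boto3.client("glue").create_job(**job_config)
```

Also confirm the connection actually exists in the same region/account as the job, since the lookup is done against the local Data Catalog.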

2022-12-28

2022-12-29

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Does anyone have a recommended approach for shipping application logs (json) from S3 to datadog?

Darren Cunningham avatar
Darren Cunningham

I typically go with option 1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-datadog-lambda-forwarder

Terraform module to provision all the necessary infrastructure to deploy Datadog Lambda forwarders

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

that’ll push any type of logs from an S3 bucket to datadog?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it uses the Lambda provided by DD

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is an example of a component (TF root module) that uses the DD module https://github.com/cloudposse/terraform-aws-components/tree/master/modules/datadog-lambda-forwarder

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

        s3_buckets: [cp-gbl-log-datadog-logs-archive-cloudtrail
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

our logs are custom though

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

eks pod logs -> fluentbit -> s3

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it should send everything from the bucket (or a folder in the bucket) to DD (i’m not sure in what format it will be in DD UI, it all needs to be tested)

Alan Kis avatar
Alan Kis

@Steve Wade (swade1987) If you want near-real-time logs from EKS pods, then you need to forward them to Datadog earlier in the pipeline.

For that purpose, the Datadog Agent would be a much better fit. And if you are not shipping JSON or logs in a known format (as you said, you are using custom logs), you can always transform them later in Datadog.

https://docs.datadoghq.com/containers/kubernetes/log/?tab=daemonset

The forwarder is mostly used for Lambda or AWS service logs coming from CloudWatch. There is also a significant delay, depending on the service, which is not useful if you need to alert or trigger automation.

P.S. Happy and prosperous New 2023 Year! Guten Rutsch!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Yeah, the agent is good, but if the agent goes down or we lose connectivity to the DD SaaS, we miss logs.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Also happy new year to you too

Alan Kis avatar
Alan Kis

I see what your concern is, but in that case it’s the agent’s responsibility to read the log files again upon restart and re-ingest them into DD.

Agreed, I can’t 100% guarantee that for Datadog, but at least with all agents from Elastic that’s the case.

Make sure that, depending on log volume, you have proper retention policies on those files to prevent any surprises.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

100%. I think we can go from Fluent Bit to DD with their plugin


2022-12-30

2022-12-31

Vinko Vrsalovic avatar
Vinko Vrsalovic

What determines if an ECS deployment is in_progress or completed? From the screenshot below I would assume it would be completed already…

If I ask the API I get the helpful “ECS deployment ecs-svc/5274582771589456544 in progress.” as rolloutStateReason
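
One way to check this programmatically: the rollout state lives on the service’s deployments in the DescribeServices response. A sketch that reads it (the sample response fragment below is illustrative); a deployment only reports COMPLETED once all new tasks are running and passing their health checks, which is why it can sit IN_PROGRESS long after tasks look healthy:

```python
def primary_rollout_state(describe_services_response: dict) -> str:
    """Return the rolloutState of the PRIMARY (most recent) deployment
    from an ECS DescribeServices-shaped response."""
    service = describe_services_response["services"][0]
    for deployment in service["deployments"]:
        if deployment["status"] == "PRIMARY":
            # One of: IN_PROGRESS | COMPLETED | FAILED
            return deployment["rolloutState"]
    raise ValueError("no PRIMARY deployment found")

# Illustrative response fragment: new deployment still rolling out,
# old deployment still serving as ACTIVE.
sample = {
    "services": [
        {
            "deployments": [
                {"status": "PRIMARY", "rolloutState": "IN_PROGRESS"},
                {"status": "ACTIVE", "rolloutState": "COMPLETED"},
            ]
        }
    ]
}
print(primary_rollout_state(sample))  # IN_PROGRESS
```

In a real script the response would come from `boto3.client("ecs").describe_services(cluster=..., services=[...])`.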

Darren Cunningham avatar
Darren Cunningham

it probably needs to pass health checks; it could be failing and continually retrying

Vinko Vrsalovic avatar
Vinko Vrsalovic

No way to tell that from any place then? Other than manually performing the health checks?

Darren Cunningham avatar
Darren Cunningham

go to the service view -> deployments & events

Darren Cunningham avatar
Darren Cunningham

should be right below where you’re looking, I think

Vinko Vrsalovic avatar
Vinko Vrsalovic

yea, nothing obvious was there, actually took like 30 minutes to complete

Vinko Vrsalovic avatar
Vinko Vrsalovic

for no discernible reason to my inexperienced eyes

Vinko Vrsalovic avatar
Vinko Vrsalovic

so the log that’s below the screenshot is the best place to search for anything amiss

Darren Cunningham avatar
Darren Cunningham

are you using EC2 or Spot? might have been unable to obtain a worker instance of the size requested

Vinko Vrsalovic avatar
Vinko Vrsalovic

Fargate

Darren Cunningham avatar
Darren Cunningham

Configuration & Tasks -> click on the Task ID and that will show the containers launched and their health status

Darren Cunningham avatar
Darren Cunningham

might not be useful now that it’s green, but next time

Alan Kis avatar
Alan Kis

Everything should be in the service events, explaining why the deployment took 30 minutes to complete
