#aws (2022-12)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-12-01
Nave has joined this channel by invitation from SweetOps.
2022-12-06
Hello everyone, I need a little bit of help with AWS Cognito user pool groups and IAM roles. I have a Cognito user pool that only uses Google sign-in and has a group for each tenant. Right now I'm using the federated identity to give a role to my signed-in users, but this role is the same for all tenants. I want to use the Cognito group roles instead, but I can't get credentials for the group role. Can someone help me find what I'm doing wrong?
I fixed it. Here is the solution: https://github.com/aws-amplify/amplify-js/issues/146#issuecomment-392675737
@Prefinem @mlabieniec Gents, I think this may be relevant to this discussion.
Check that the role assigned to the user group has a trust relationship with the federated identity provider; it needs this so the role can be assumed through that provider.
You can build an appropriate role for the User pool groups role by doing this:
• Open AWS console
• Get to IAM section
• Pick roles
• Pick web identity
• Choose Amazon Cognito
• Paste in your Identity pool id (the federated one)
• Click next
• Now add/create policies you need for the user group, like S3 access, or whatever.
• Give the role a name and save it.
• Go to your User Pool group, edit it and assign the role just created.
• Open the Federated Identity -> Authentication providers section -> Authenticated role selection
• Set the Authenticated role selection dropdown to Choose role from token
• Optionally set Role resolution to DENY
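The console steps above correspond to the Cognito Identity API's role-mapping configuration. A minimal sketch in Python of the "Choose role from token" + DENY setting (the pool IDs, client ID, and role ARN below are hypothetical placeholders):

```python
# Sketch: building the RoleMappings payload that makes an identity pool
# take the role from the ID token's cognito:roles / cognito:preferred_role
# claims (i.e. the user's group role), and refuse credentials when no
# group role can be resolved. All identifiers here are hypothetical.

def build_role_mappings(user_pool_id, client_id, region="us-east-1"):
    provider = f"cognito-idp.{region}.amazonaws.com/{user_pool_id}:{client_id}"
    return {
        provider: {
            "Type": "Token",                      # "Choose role from token"
            "AmbiguousRoleResolution": "Deny",    # "Role resolution: DENY"
        }
    }

# The payload would then be passed to boto3, e.g.:
# import boto3
# cognito = boto3.client("cognito-identity", region_name="us-east-1")
# cognito.set_identity_pool_roles(
#     IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",  # hypothetical
#     Roles={"authenticated": "arn:aws:iam::123456789012:role/DefaultAuthRole"},
#     RoleMappings=build_role_mappings("us-east-1_ExamplePool", "exampleclientid"),
# )
```

This is the API-level equivalent of the console walkthrough, not the exact calls Amplify makes under the hood.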
2022-12-08
hello, anybody seeing issues in us-east-2c ?
it was a huge spot instance termination
• I wouldn’t consider “huge spot instance termination” as an “issue”, this is expected no matter the region and/or az
• your us-east-2c is different from someone else's us-east-2c
https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html
Learn how AWS uses availability zones and AZ IDs.
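To illustrate the point in the linked doc: AZ *names* (us-east-2a/b/c) are shuffled per AWS account, while AZ *IDs* (use2-az1, use2-az2, ...) identify the same physical zone everywhere. A small sketch with two hypothetical account mappings:

```python
# Sketch: why "your us-east-2c is different from someone else's us-east-2c".
# These name -> AZ ID mappings are hypothetical examples of two accounts;
# the real mapping comes from `aws ec2 describe-availability-zones`
# (the ZoneId field).

ACCOUNT_A = {"us-east-2a": "use2-az1", "us-east-2b": "use2-az2", "us-east-2c": "use2-az3"}
ACCOUNT_B = {"us-east-2a": "use2-az3", "us-east-2b": "use2-az1", "us-east-2c": "use2-az2"}

def same_physical_az(name_in_a, name_in_b):
    """True if the named AZs in the two accounts are the same physical AZ."""
    return ACCOUNT_A[name_in_a] == ACCOUNT_B[name_in_b]
```

So two people comparing notes on "us-east-2c" may be talking about entirely different physical zones.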
yeah, it is not an issue. I just commented on what I found in our logs
Hello! I have 1 application load balancer with listeners pointing to 2 different target groups. I want to make blue/green deployments into ECS with CodeDeploy. I have one listener on port 80 to redirect any HTTP traffic to HTTPS port 443, but the listener for port 443 only points to one of the target groups. What listeners/config should I have to make sure traffic can be directed to the second target group while the HTTPS redirection stays in place?
what do the listeners have to do with target groups or blue/green deployments?
@Warren Parad because I need to have 2 listeners in the blue/green deployment config, one for each target group. I'm listening on ports 80 and 8080 (HTTP) and redirecting each to an HTTPS listener; the one on port 443 points to one of the target groups, but I don't know what to do with the second listener, since port 443 seems to be the only one used for the HTTPS redirection
because I need to have 2 listeners in the blue/green deployment config
why
that’s not how blue green deployments work. you don’t change your listeners
Blue/green deployment happens in ECS, not in the load balancer. However, if the load balancer you are using allows weight-based routing, then you can update the target groups. But not the listeners.
@Warren Parad I mean currently I am using CodeDeploy to make a blue/green deployment into ECS and CodeDeploy asks me to add 2 listeners - but I get what you mean I will look more into it thanks
why would it ask you to add two listeners? That sounds incorrect. Can you screenshot what you are looking at that asks you to add two listeners AND shows those listeners have anything to do with blue/green deployment? Having two listeners, one for 80 and one for 443, is correct, but it has nothing to do with deployment.
@Warren Parad Oh I see, so then I should just have those 2 listeners and make sure the HTTPS 443 listener points to both the blue and green target groups?
yes
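The setup discussed above can be sketched as a tiny model: the :80 listener only redirects to HTTPS, and during a deployment CodeDeploy repoints the :443 listener's forward action between the two target groups; the listeners themselves never change. Target-group names below are hypothetical:

```python
# Sketch of the ALB listener layout for ECS blue/green with CodeDeploy.
# This models the behavior for illustration; it is not the ELBv2 API.

def make_listeners(active_tg="tg-blue"):
    return {
        80: {"action": "redirect", "to": "HTTPS:443"},   # HTTP -> HTTPS redirect stays fixed
        443: {"action": "forward", "target_group": active_tg},
    }

def shift_traffic(listeners, new_tg):
    """Model of what CodeDeploy does during the cutover: repoint :443
    from the blue target group to the green one."""
    listeners[443]["target_group"] = new_tg
    return listeners
```

The key point from the thread: traffic shifting operates on the forward action's target group, so the redirect on :80 is untouched by deployments.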
2022-12-09
2022-12-14
2022-12-19
Hello everyone, how do I move my AWS CodePipeline from one environment to another?
2022-12-20
Are there any ETL load, transform, and validation services in AWS?
Glue and eventbridge pipes
Lambda and step functions too
the legacy stack was lifted and shifted to AWS on a Redshift/Postgres database. There is no real-time system/application around the database, just a series of ETL jobs to move data in and out of the database.
Is Glue alone fine to use, or do any other services need to be included? @RB
just want to design first, then implement later
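For the design stage, the transform/validate split the services above map to can be sketched as plain functions: in AWS terms the "transform" step is what a Glue job would do, validation could be a Lambda or Glue step, and Step Functions would chain them. The field names and rules here are hypothetical:

```python
# Sketch: a transform + validate stage of an ETL pipeline, modeled as
# plain Python for design purposes. Records and required fields are
# hypothetical; in AWS this logic would live in a Glue job or Lambda.

def transform(record):
    """Normalize column names (stand-in for a Glue transform)."""
    return {k.lower(): v for k, v in record.items()}

def validate(record, required=("id", "amount")):
    """Reject records missing required fields (stand-in for a validation step)."""
    return all(field in record and record[field] is not None for field in required)

def run_pipeline(records):
    transformed = [transform(r) for r in records]
    good = [r for r in transformed if validate(r)]
    bad = [r for r in transformed if not validate(r)]
    return good, bad
```

Orchestration (retries, routing the bad records to a dead-letter location) is where Step Functions or EventBridge Pipes would come in.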
2022-12-21
hey, question for anyone working with AWS Config. I'm using the S3_BUCKET_REPLICATION_ENABLED rule, which checks whether replication is turned on for S3 buckets in the account. The issue I'm having is that it flags the bucket I'm using as the replication destination as non-compliant. Is there a way to exclude this bucket from being evaluated under that rule?
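One workaround, if the managed rule offers no exclusion that fits, is a custom Lambda-backed Config rule that returns NOT_APPLICABLE for the destination bucket. A sketch of that evaluation logic (bucket names are hypothetical; the real handler would also read the configuration item and report results back to Config):

```python
# Sketch: custom-rule evaluation that skips the replication destination
# bucket. In a real Lambda-backed AWS Config rule this would run inside
# the handler; NOT_APPLICABLE evaluations drop out of the compliance view.

EXCLUDED_BUCKETS = {"my-replication-destination"}  # hypothetical bucket name

def evaluate_bucket(bucket_name, replication_enabled):
    if bucket_name in EXCLUDED_BUCKETS:
        return "NOT_APPLICABLE"
    return "COMPLIANT" if replication_enabled else "NON_COMPLIANT"
```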
2022-12-23
Hello team, is anyone familiar with the error below:
JobName:merman-poc and JobRunId:jr_422ff47af063e3dceeea799d162050718848b4bfa3a2e64d459d508237024522_attempt_2 failed to execute with exception Could not find connection for the given criteria Failed to get catalog connections given names: my_rds_connection (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: InvalidInputException; Request ID: 0074eca7-5207-4f7b-addb-b6dfdccf9d8c; Proxy: null)
2022-12-28
2022-12-29
Does anyone have a recommended approach for shipping application logs (json) from S3 to datadog?
I typically go with option 1
Terraform module to provision all the necessary infrastructure to deploy Datadog Lambda forwarders
that’ll push any type of logs from an S3 bucket to datadog?
yes
it uses the Lambda provided by DD
this is an example of a component (TF root module) that uses the DD module https://github.com/cloudposse/terraform-aws-components/tree/master/modules/datadog-lambda-forwarder
variable "s3_buckets" {
s3_buckets: [cp-gbl-log-datadog-logs-archive-cloudtrail
our logs are custom though
eks pod logs -> fluentbit -> s3
it should send everything from the bucket (or a folder in the bucket) to DD (i’m not sure in what format it will be in DD UI, it all needs to be tested)
@Steve Wade (swade1987) If you want near-real-time logs from EKS pods, then you need to forward them to Datadog earlier.
For that purpose the Datadog Agent would be much better. And if you are not shipping JSON or logs in a known format (as you said, you are using custom logs), you can always transform them later in Datadog.
https://docs.datadoghq.com/containers/kubernetes/log/?tab=daemonset
The Forwarder is mostly used for Lambda or AWS service logs from CloudWatch. Also, there is a significant delay, depending on the service, which is not useful if you need to alert or perform some automation.
P.S. Happy and prosperous New Year 2023! Guten Rutsch! (German: have a good start into the new year)
Yeah, the agent is good, but if the agent goes down or we lose connectivity to the DD SaaS, we miss logs.
I see what your concern is, but in that case it’s the responsibility of the agent to read the log files again upon restart and re-ingest them into DD
Agree, I can’t 100% guarantee for DataDog but at least with all agents from Elastic that’s the case.
Make sure that, depending on log volume, you have proper retention policies on those to prevent any surprises.
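The S3 -> forwarder path discussed above can be sketched simply: the Lambda reads the object an S3 event points at and wraps each JSON line as a log entry before shipping it to Datadog. The real Datadog Lambda forwarder adds batching, retries, and tagging; the `ddsource`/`service` values here are hypothetical:

```python
# Sketch: turning newline-delimited JSON from an S3 object (e.g. fluentbit
# output from EKS pods) into Datadog-style log entries. Illustrative only;
# not the actual forwarder implementation.
import json

def to_datadog_logs(raw_object_body, source="fluentbit", service="eks-pods"):
    logs = []
    for line in raw_object_body.splitlines():
        if not line.strip():
            continue  # skip blank lines between records
        logs.append({
            "message": json.loads(line),
            "ddsource": source,   # hypothetical tag values
            "service": service,
        })
    return logs
```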
2022-12-30
2022-12-31
What determines if an ECS deployment is in_progress or completed? From the screenshot below I would assume it would be completed already…
If I ask the API I get the helpful “ECS deployment ecs-svc/5274582771589456544 in progress.” as rolloutStateReason
probably needs to pass health checks, could be failing and continually retrying
No way to tell that from any place then? Other than manually performing the health checks?
go to the service view -> deployments & events
should be right below where you’re looking, I think
yeah, nothing obvious was there; it actually took like 30 minutes to complete
for no discernible reason to my inexperienced eyes
so the log that’s below the screenshot is the best place to search for anything amiss
are you using EC2 or Spot? might have been unable to obtain a worker instance of the size requested
Fargate
Configuration & Tasks -> click on the Task ID and that will show the containers launched and their health status
might not be useful now that it’s green, but next time
Everything should be in the service events, explaining why the deployment took 30 minutes to complete.
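Roughly, the rollout state boils down to this: the deployment reports completed once the desired number of tasks from the new task definition are running and passing health checks, and the old tasks have drained. A simplified model of that decision (not the actual ECS logic):

```python
# Sketch: a simplified model of what drives an ECS deployment's
# rolloutState between IN_PROGRESS and COMPLETED. Illustrative only.

def rollout_state(desired, running_healthy_new, old_tasks_remaining):
    """desired: tasks wanted on the new task definition
    running_healthy_new: new-revision tasks RUNNING and passing health checks
    old_tasks_remaining: previous-revision tasks not yet drained"""
    if running_healthy_new >= desired and old_tasks_remaining == 0:
        return "COMPLETED"
    return "IN_PROGRESS"
```

A slow 30-minute rollout usually means tasks were cycling on health checks or draining slowly, which is what the service events view surfaces.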