#aws (2021-06)

aws Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2021-06-13

David avatar
David

Say I have an API Gateway endpoint that triggers an async lambda (meaning the http req will return immediately and the lambda will run async). Say each lambda invocation takes 1 second to run, and my reserved concurrency is 100 lambdas for that function, but my reqs/second to the endpoint is 200.

In this case, the queue of lambda functions would continuously grow it seems like, as there wouldn’t be enough lambdas to keep up with the queued events.

I have three questions:

  1. How do I monitor if this type of queueing is happening? I use datadog, but any pointers on any system are helpful
  2. Can I tell Lambda to just drop messages in the queue older than X hours if the messages are not that important to process?
  3. Can I tell Lambda to only process X% of the traffic, discarding the rest?
managedkaos avatar
managedkaos

I can’t speak to the monitoring portion yet, but just wanted to ask: if you have a requirement for processing all requests (vs just dropping them), have you considered an implementation that uses a true queue like SQS or AMQ? I imagine this as one API and two lambdas. The API triggers a lambda that simply puts the request details into the queue, and the second (async) lambda is triggered by the queue to process each new item.

1
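
A minimal CLI sketch of the two-lambda shape described above (the queue name, function name, ARN, and retention period are placeholders). As a bonus, the queue's retention period covers question 2, since SQS silently drops anything older than it, and the queue depth (ApproximateNumberOfMessagesVisible) becomes an ordinary CloudWatch/Datadog metric for question 1:

# buffer queue; messages older than 4 hours (14400s) are dropped automatically
aws sqs create-queue \
  --queue-name ingest-buffer \
  --attributes MessageRetentionPeriod=14400

# drive the worker lambda from the queue; its reserved concurrency (100) then acts
# as the processing rate limit while the queue absorbs the burst
aws lambda create-event-source-mapping \
  --function-name ingest-worker \
  --event-source-arn arn:aws:sqs:us-east-1:111111111111:ingest-buffer \
  --batch-size 10
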
Alex Jurkiewicz avatar
Alex Jurkiewicz

Yep, we use the above model. The Lambda async queue is not configurable or observable, so you don’t want invocations to ever go there. https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html

Asynchronous invocation - AWS Lambda

Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. When you invoke a function asynchronously, you don’t wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource to chain together components of your application.
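
For question 2, one lever that does exist on the async path is the function's event invoke config, though the maximum event age is capped at 6 hours (21600 seconds) rather than arbitrary. A sketch with a placeholder function name:

# expire queued events after 6 hours and don't retry failed invocations
aws lambda put-function-event-invoke-config \
  --function-name my-async-fn \
  --maximum-event-age-in-seconds 21600 \
  --maximum-retry-attempts 0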

2021-06-11

jose.amengual avatar
jose.amengual

at this point I think I get paid to deal with AWS apiisms like this all the time, so I do not even get mad anymore

Issif avatar
Issif

You have my total sympathy. I nearly went crazy when I developed https://github.com/claranet/aws-inventory-graph

claranet/aws-inventory-graph attachment image

Explore your AWS platform with Dgraph, a graph database. - claranet/aws-inventory-graph

Issif avatar
Issif

no consistency: VPCId, VpcID, VpcId, etc

Issif avatar
Issif

dates have at least 3 different formats

Zach avatar
Zach

drives me nuts with the random use of local vs UTC time

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

And now they’ll never fix it because they’re so backwards compatible.

2021-06-10

Frank avatar
Frank

I just had a discussion with one of our developers regarding a customer of ours who wants to be able to access an RDS instance to be able to SELECT some data.

That raises some challenges. Our RDS is in a private subnet and can only be reached externally through a tunnel to our bastion host. From a security perspective (but also maintenance) I’m not a fan, as it would mean granting them AWS access, creating an IAM role allowing them to connect to the bastion node, creating a user on the RDS instance, granting appropriate permissions, etc. This doesn’t feel like the most practical/logical approach.

How do you guys deal with granting access to externals to “internal” systems?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can use AWS Systems Manager and RDS IAM authentication to improve the experience of the flow you describe (SSH tunnel)
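
A sketch of the RDS IAM authentication half of that: a short-lived token replaces a stored DB password. This assumes a MySQL user created with the AWSAuthenticationPlugin; the endpoint and username here are placeholders:

# mint a 15-minute auth token tied to the caller's IAM identity
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.abc123.eu-west-1.rds.amazonaws.com \
  --port 3306 \
  --username external_reader)

# connect through the local tunnel, using the token as the password
mysql --host=127.0.0.1 --port=3306 \
  --user=external_reader --password="$TOKEN" \
  --enable-cleartext-plugin --ssl-ca=rds-combined-ca-bundle.pem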

Alex Jurkiewicz avatar
Alex Jurkiewicz

In general though, I think if an external system needs this access, you need to bridge the network. In other words a VPN

Alex Jurkiewicz avatar
Alex Jurkiewicz

Another workaround is to place the database on a public subnet and rely on security groups to only open access for certain public IPs
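
A sketch of that lockdown (the group ID, port, and customer IP are placeholders):

# open the DB port only to the customer's fixed egress IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 203.0.113.10/32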

Darren Cunningham avatar
Darren Cunningham

silly question, do they need access to the actual DB? If they’re running selects, what about replicating data to another RDS instance in a public subnet with only the data they need or exporting the data to S3 and providing them access to that via Athena?
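
A sketch of the S3 route Darren mentions (the identifiers, bucket, role, and KMS key are placeholders); the export lands as Parquet, which Athena can query directly:

aws rds start-export-task \
  --export-task-identifier customer-extract \
  --source-arn arn:aws:rds:eu-west-1:111111111111:snapshot:mydb-snapshot \
  --s3-bucket-name customer-extracts \
  --iam-role-arn arn:aws:iam::111111111111:role/rds-s3-export \
  --kms-key-id alias/rds-export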

Frank avatar
Frank

@ Good suggestions. A public RDS is probably the “easiest”, although that would still require a new instance to be created, since IIRC you cannot change the subnet group for an existing RDS instance. But after that, all it needs is a security group + a user with SELECT privs

Frank avatar
Frank

@Darren Cunningham Exporting the data to S3 might be a good alternative too, but I have a feeling that the customer expects an actual DB they can access. We had a similar request a while ago, and we could squash that by providing them a temporary public RDS instance based off a snapshot. But since the DB is 50GB, it’s not practical to do that on a frequent basis

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

@ I’ve come across such requests in the past. It’s much better to find a way to replicate data for the customer into their own RDS (they can manage an RDS and you insert data into it) than to break your security model. The risk in opening your private DB to someone you don’t control is a significant one (as far as I’m concerned).

Alex Jurkiewicz avatar
Alex Jurkiewicz

Tbh, a 500GB snapshot should spin up quickly; a sandpit sounds ideal.

It is possible to change the subnets of an existing instance from private to public, but you must change one AZ at a time. It’s slow and fiddly

jose.amengual avatar
jose.amengual

a 500GB cluster clone takes about 7 to 8 minutes to spin up, and you could try cloning it to a special VPC that has the proper access

jose.amengual avatar
jose.amengual

and destroy it after it’s used
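
A sketch of the clone jose describes, assuming an Aurora cluster (fast copy-on-write clones are Aurora-only); the identifiers, security group, and subnet group are placeholders:

aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier prod-cluster \
  --db-cluster-identifier customer-sandbox \
  --restore-type copy-on-write \
  --use-latest-restorable-time \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --db-subnet-group-name customer-facing-subnets

# the clone is a bare cluster; add an instance to it before handing out access
aws rds create-db-instance \
  --db-cluster-identifier customer-sandbox \
  --db-instance-identifier customer-sandbox-1 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql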

Saichovsky avatar
Saichovsky

What’s the best way to enable EBS encryption by default across accounts in an AWS organization? Should I deploy a lambda to all member accounts to enable the setting or is there a better way?

Saichovsky avatar
Saichovsky

This is meant to fix a Config rule compliance issue

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

You could enable that globally

Saichovsky avatar
Saichovsky

How so? Been sifting through docs

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

I’d personally test it in some test account with existing infrastructure to make sure it worked as expected

Saichovsky avatar
Saichovsky

What I have been able to do is set up the config rule globally

Saichovsky avatar
Saichovsky

This looks like a setting for a single account

Saichovsky avatar
Saichovsky

I am trying to do this for multiple accounts using Landing Zone and CloudFormation

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

you probably have to do it for each account, I’d imagine

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Would be nice if it was a toggle directly in landing zone

Saichovsky avatar
Saichovsky

Used the CLI to do it in each account

1
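
A sketch of what that per-account loop can look like; the account IDs and the cross-account role name (Control Tower's AWSControlTowerExecution here) are placeholders for whatever your Landing Zone provides:

for account in 111111111111 222222222222; do
  (
    # assume the management role in the member account
    read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
      --role-arn "arn:aws:iam::${account}:role/AWSControlTowerExecution" \
      --role-session-name ebs-default-encryption \
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
      --output text)"
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

    # the setting is per account *and* per region
    for region in us-east-1 eu-west-1; do
      aws ec2 enable-ebs-encryption-by-default --region "$region"
      aws ec2 get-ebs-encryption-by-default --region "$region"   # verify
    done
  )
done
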
Alex Jurkiewicz avatar
Alex Jurkiewicz

are the aws docs down for anyone else? https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html

edit: seem to be back

Alex Jurkiewicz avatar
Alex Jurkiewicz

nice. You can create a Lambda Layer with a long name that has an ARN longer than what the Lambda API will accept when creating/updating a function. So you can create a layer but never attach it to anything party_poop

1

2021-06-09

2021-06-08

Maciek Strömich avatar
Maciek Strömich
Status overview attachment image

Amazon Web Services outage map with current reported problems and downtime.

venkata.mutyala avatar
venkata.mutyala

Likely just the console and not a full outage. https://downdetector.com/status/aws-amazon-web-services/

AWS live status. Problems and outages for Amazon Web Services attachment image

Real-time AWS (Amazon Web Services) status. Is AWS down or suffering an outages? Here you see what is going on.

venkata.mutyala avatar
venkata.mutyala

Also looks like it’s trending down

beaur97 avatar
beaur97

Does anyone have any good documentation/ideas on deploying to AWS Elastic Beanstalk through a Jenkins pipeline project? We currently just build a grails application into a .war file and deploy it with the EBS plugin through a freestyle project, but we’re looking to expand the functionality of the build a lot and need to move it to pipeline projects. I can’t really find any documentation on this.

Maybe we need a pipeline to run all the pre-deploy steps and build the project, then use what we already have to deploy what we build? Not sure how people handle this specifically

managedkaos avatar
managedkaos

@ do you have access to LinkedIn Learning? I have a course there that goes over exactly what you’re describing. That is, it shows you how to run Jenkins on AWS (really, you can run Jenkins anywhere) along with a basic configuration to deploy to Elastic Beanstalk. The course does not use a pipeline, it uses a standard job, but you could easily expand it to use a pipeline for a more complex project/deployment. I don’t have the bandwidth to give 1:1 support on this, but if you have a question I might be able to answer it here asynchronously.

https://www.linkedin.com/learning/running-jenkins-on-aws-8591136/running-jenkins-on-aws

(also, if you don’t have access to LinkedIn Learning, check your local library! Many libraries offer free access to Lynda.com and/or LinkedIn Learning. Lynda is very slowly being phased out but the content is the same!)

Running Jenkins on AWS - Jenkins Video Tutorial | LinkedIn Learning, formerly Lynda.com

Join Michael Jenkins for an in-depth discussion in this video, Running Jenkins on AWS, part of Running Jenkins on AWS.

beaur97 avatar
beaur97

@ sadly that’s the part I don’t have a problem with. We already have it set up deploying to AWS through a standard job and I’m trying to get it to deploy through a pipeline instead and just not super sure on the needed syntax for the actual deployment part

managedkaos avatar
managedkaos

if you have scripts/commands for the deploy, you can probably just wrap your scripts up in a pipeline doc and use that. I will try to follow up later with an example.

beaur97 avatar
beaur97

Cool, we don’t have scripts, we just set it up with the plug-in to deploy, so I guess that’s the issue: starting from nothing. When I switch to a pipeline I’ll need to grab credentials, send the file to S3, and point the EBS instance to that file in S3(?). So nothing super complicated to start, just can’t find good documentation on it

Jon Butterworth avatar
Jon Butterworth

What are you looking to do specifically? Just handle the build & deployment of the grails app into EB from Jenkins, assuming the EB env already exists? That should be straightforward by getting Jenkins to build the app and then using the AWS CLI in the pipeline to handle the deployment to the pre-existing EB environment. But if you’re looking to get Jenkins to build out the EB env also, you’ll be best off using Terraform to build out the EB environment. If it’s the actual pipeline you’re looking for help with…

        pipeline {
          agent any
          stages {
            stage('Checkout App Src') {
              steps {
                git changelog: false,
                  credentialsId: 'repo-creds',
                  poll: false,
                  url: '[email protected]'
              }
            }
            stage('Build App') {
              steps {
                // build the WAR ('create-app' would scaffold a new project)
                sh 'grails war'
              }
            }
            stage('Push App') {
              steps {
                sh 'zip -r app.zip app.war'
                sh 'aws s3 cp app.zip s3://blah/app.zip'
              }
            }
            stage('Create EB Version') {
              steps {
                sh 'aws elasticbeanstalk create-application-version --application-name grails-app --version-label blah --source-bundle S3Bucket=blah,S3Key=app.zip'
              }
            }
            stage('Update EB Environment') {
              steps {
                sh 'aws elasticbeanstalk update-environment --environment-name grails-app --version-label blah'
              }
            }
          }
        }
Jon Butterworth avatar
Jon Butterworth

There’s likely typos and syntax errors… written freehand, but it’ll give you an idea, if that’s what you’re looking for. If not, add some more specific detail to explain exactly what you need.

1
beaur97 avatar
beaur97

@ Thanks for this, it’s exactly what I need. Only question I have is: does any auth need to happen in the pipeline to use the AWS CLI? Our Jenkins server is in AWS and currently deploys through a credentials file that’s already set up. I’m assuming this is an IAM role set up for Jenkins already that I can use. Just 100% new to this for AWS specifically, so trying to make sure I have everything covered before I start

Jon Butterworth avatar
Jon Butterworth

@ it depends how you have the auth set up for AWS. If it’s just a regular credentials file in ~/.aws/credentials then you shouldn’t need to do anything. Just run the pipeline and it should work.

beaur97 avatar
beaur97

Okay, I ssh’d into the box and had access to the CLI, and the Jenkins project has the AWS credentials in its global config, so I think it should work. Can’t get to migrating it til next week probably, but looking good. Thanks for the help!

1
1

2021-06-07

Mazin Ahmed avatar
Mazin Ahmed

Can anyone help me with this one? I can’t seem to find the reason why it’s failing

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::test-mazin-12", "arn:aws:s3:::test-mazin-12/*"],
      "Condition": {
        "StringNotLike": {
          "s3:prefix": "allow-me.txt"
        }
      },
      "Principal": "*"
    }
  ]
}

I always get “Conditions do not apply to combination of actions and resources in statement”

1
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)
Cannot match Condition statement with actions and resources in s3 bucket policy

My goal is to deny write access to most of a bucket to all users except one, who has full access to the bucket defined in a separate policy attached to the user. The top-level directories of the b…

Mazin Ahmed avatar
Mazin Ahmed

Thanks! - I overlooked this one

1
Mazin Ahmed avatar
Mazin Ahmed

If anyone is curious about it: the s3:prefix condition key only applies to list operations, not to s3:GetObject, so the condition can never match. Scoping the deny with NotResource works instead:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "NotResource": ["arn:aws:s3:::test-mazin-12/allow-me.txt"]
        }
    ]
}
3

2021-06-04

Steve Wade avatar
Steve Wade

does anyone know how to change the EnabledCloudwatchLogsExports of an RDS database instance via the CLI?

Steve Wade avatar
Steve Wade

I am trying to do it via terraform but it’s not liking it

1
Steve Wade avatar
Steve Wade
"EnabledCloudwatchLogsExports": [
  "audit",
  "general"
],
Steve Wade avatar
Steve Wade

i want to remove audit

Brian Ojeda avatar
Brian Ojeda
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration EnableLogType=general 
Steve Wade avatar
Steve Wade
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration '{"EnableLogTypes":["general"]}'
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
Steve Wade avatar
Steve Wade
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration EnableLogTypes=general

An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
Brian Ojeda avatar
Brian Ojeda
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration DisableLogTypes=audit 
Brian Ojeda avatar
Brian Ojeda

slight modification

Brian Ojeda avatar
Brian Ojeda

@

Steve Wade avatar
Steve Wade
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration DisableLogTypes=audit
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: You cannot use the log types 'audit' with engine version mysql 8.0.21. For supported log types, see the documentation.
Brian Ojeda avatar
Brian Ojeda

So it is not supported?

Steve Wade avatar
Steve Wade

we upgraded from 5.7 to 8.0.21

Brian Ojeda avatar
Brian Ojeda

Looks like with MySQL you need to use a custom option group or custom parameter group.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.html

Publishing MySQL logs to Amazon CloudWatch Logs - Amazon Relational Database Service

You can configure your MySQL DB instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage.

Steve Wade avatar
Steve Wade

the issue is we left that log group there

Steve Wade avatar
Steve Wade

but now i can’t remove it and neither can terraform

Brian Ojeda avatar
Brian Ojeda

If you have a custom option group, you can modify it. If not, you will need to create one and set the respective option then assign it to your target.

Brian Ojeda avatar
Brian Ojeda

That CLI arg --cloudwatch-logs-export-configuration doesn’t work for MySQL. It does for Postgres, though.
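
If the instance still carries a custom option group with the audit plugin enabled, a sketch of the option-group route Brian describes (the option group name is a placeholder; on RDS MySQL the audit log comes from the MARIADB_AUDIT_PLUGIN option rather than the log-export setting):

# inspect the group first, then strip the audit plugin option
aws rds describe-option-groups --option-group-name my-mysql80-options
aws rds modify-option-group \
  --option-group-name my-mysql80-options \
  --options-to-remove MARIADB_AUDIT_PLUGIN \
  --apply-immediately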

2021-06-02

Alex Jurkiewicz avatar
Alex Jurkiewicz

I want to tag my resources with a few levels of categorisation to help with cost allocation. The top level of categorisation is always “product” sold to customers. But then there are one or two layers of more specific categorisation for each infra piece.

Any suggestions on generic names for these tags? I would like to have something that makes sense globally since we want these tags everywhere for cost purposes

Michael Warkentin avatar
Michael Warkentin

Can you build out a quick spreadsheet of some examples of various resources and the values that you’d be tagging with? Might help evaluate the key names (“this makes sense for x and y, but not really for z…”)

Generally some useful ones outside of product could be environment, team.. sounds like those may be too specific?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah. I’m looking for names like “product”, “subproduct” and “subsubproduct”. Just ones that sound less dumb

Michael Warkentin avatar
Michael Warkentin

CostCategory1, CostCategory2.

Michael Warkentin avatar
Michael Warkentin

Service, component?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yeah… product, service, component might be it

Alex Jurkiewicz avatar
Alex Jurkiewicz

(I did consider foo1, foo2, foo3, but it seems a little ambiguous if the items are three tags at the same level or a tree)

Darren Cunningham avatar
Darren Cunningham

Product, Service, Component has served me well

1
Maciek Strömich avatar
Maciek Strömich

we’re using a cost_center tag. Finance maintains the “hierarchy” on their side and our resources basically have a single tag value

1
Sai Krishna avatar
Sai Krishna

Hi, I am trying to set up the AWS AppFlow Salesforce integration. As part of the setup, the AWS docs require me to set up AWS callback URLs on the Salesforce side: “In the *Callback URL* text field, enter the URLs for your console for the stages and Regions in which you will use the connected app”. Can someone tell me whether it’s just the landing-page URL of the AWS console?

Salesforce - Amazon AppFlow

The following are the requirements and connection instructions for using Salesforce with Amazon AppFlow.

2021-06-01

Fabian avatar
Fabian

Hi. I’m looking at downgrading an AWS RDS database one level from db.m4.xlarge to db.m6g.large. Sometimes I get CPU loads of 50% for 1 hour a day, but only a few days a month. Does anyone have an idea if the cheaper database will be able to handle the load?

Darren Cunningham avatar
Darren Cunningham

you mean downgrade from db.m6g.2xlarge (vCPU: 8 Memory: 32 GiB) to db.m4.xlarge (vCPU: 4 Memory: 16 GiB)?

Fabian avatar
Fabian

sorry I meant m4.xlarge to db.m6g.large

Fabian avatar
Fabian

@Darren Cunningham any view on this?

Darren Cunningham avatar
Darren Cunningham

either way you’re cutting your resources in half, so performance of queries is going to be impacted. During those usage spikes you could bring client applications to a standstill, so you need to decide whether performance degradation is acceptable in order to save money

Fabian avatar
Fabian

performance degradation is ok, but tipping over not so much

Fabian avatar
Fabian

any better idea than profiling all queries during those times?

Darren Cunningham avatar
Darren Cunningham

I mean, that’s a good idea regardless. You should be aware of the queries that are being run, especially during spikes: (1) security, (2) performance tuning

Darren Cunningham avatar
Darren Cunningham

there might be DB configuration options to help the overall DB performance too, but I leave that to actual DBAs

Alex Jurkiewicz avatar
Alex Jurkiewicz

is the time this spike happens predictable? Can you schedule a periodic scale up/down?

Are you using read replicas? Can you add more replicas to help with load?

Michael Warkentin avatar
Michael Warkentin

I would think the m6g CPU should be quite a bit faster than the m4, so that might help wash out the difference as well?

Michael Warkentin avatar
Michael Warkentin

Maybe you could go to the m6g.xlarge first and see what CPU looks like during the spike, and then do another resize down?

Fabian avatar
Fabian

Not using replicas. Is it possible to scale RDS instances up/down? I wasn’t aware. @Michael Warkentin interesting point about speed. Problem is that I don’t want to risk downgrading and the DB tipping over. I had that a very long time ago and it took >1 day to recover, so I can’t just downgrade and see if it works.

Michael Warkentin avatar
Michael Warkentin

Yeah that’s why I suggested moving to the new architecture at the same size first, and then reevaluate again

RDS scales up/down by replacing the instance
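
A sketch of such a resize (the identifier and class are placeholders); each modification replaces the underlying instance, so expect a brief interruption, or a failover if Multi-AZ:

aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --db-instance-class db.m6g.xlarge \
  --apply-immediately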
