#aws (2021-06)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2021-06-01
Hi. I’m looking at downgrading an AWS RDS database one level from db.m4.xlarge to db.m6g.large. Sometimes I get CPU loads of 50% for 1 hour a day, but only a few days a month. Does anyone have an idea if the cheaper database will be able to handle the load?
you mean downgrade from db.m6g.2xlarge (vCPU: 8, Memory: 32 GiB) to db.m4.xlarge (vCPU: 4, Memory: 16 GiB)?
sorry I meant m4.xlarge to db.m6g.large
@Darren Cunningham any view on this?
either way you’re cutting your resources in half, so performance of queries is going to be impacted. during those usage spikes you could bring client applications to standstill – so you need to make the decision if performance degradation is acceptable in order to save money
performance degradation is ok, but tipping over not so much
any better idea than profiling all queries during those times?
I mean, that’s a good idea regardless. You should be aware of the queries that are being run, especially during spikes. (1) security (2) performance tuning
there might be DB configuration options to help the overall DB performance too, but I leave that to actual DBAs
is the time this spike happens predictable? Can you schedule a periodic scale up/down?
Are you using read replicas? Can you add more replicas to help with load?
I would think the m6g cpu should be quite a bit faster than the m4, so that might help wash the difference as well?
Maybe you could go to the m6g.2xlarge first and see what cpu looks like during the spike and then do another resize down?
Not using replicas. Is it possible to scale up/down RDS instances? I wasn’t aware. @Michael Warkentin interesting point about speed. Problem is that I don’t want to risk downgrading and then having the DB tip over. I had that happen a very long time ago and it took >1 day to recover, so I can’t just downgrade and see if it works.
Yeah that’s why I suggested moving to the new architecture at the same size first, and then reevaluate again
RDS scales up/down by replacing the instance
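For reference, that kind of scheduled resize is just a modify-db-instance call that can be wrapped in cron or an EventBridge rule; a rough sketch with a placeholder identifier and class:
# hypothetical example: bump the instance class ahead of a known spike, then run the reverse afterwards
aws rds modify-db-instance \
  --db-instance-identifier my-db \
  --db-instance-class db.m6g.xlarge \
  --apply-immediately
The replacement means a short outage unless the instance is Multi-AZ, in which case it fails over.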
2021-06-02
I want to tag my resources with a few levels of categorisation to help with cost allocation. The top level of categorisation is always “product” sold to customers. But then there are one or two layers of more specific categorisation for each infra piece.
Any suggestions on generic names for these tags? I would like to have something that makes sense globally since we want these tags everywhere for cost purposes
Can you build out a quick spreadsheet of some examples of various resources and the values that you’d be tagging with? Might help evaluate the key names (“this makes sense for x and y, but not really for z…”)
Generally some useful ones outside of product could be environment, team.. sounds like those may be too specific?
Yeah. I’m looking for names like “product”, “subproduct” and “subsubproduct”. Just ones that sound less dumb
CostCategory1, CostCategory2.
Service, component?
yeah… product, service, component might be it
(I did consider foo1, foo2, foo3, but it seems a little ambiguous if the items are three tags at the same level or a tree)
we’re using cost_center tag. finances is maintaining the “hierarchy” on their side and our resources have basically a single tag value
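If the product/service/component scheme sticks, the Resource Groups Tagging API can backfill existing resources in bulk; a sketch with a placeholder ARN and made-up tag values:
# hypothetical example: apply the agreed cost-allocation tags to an existing resource
aws resourcegroupstaggingapi tag-resources \
  --resource-arn-list arn:aws:s3:::example-reports-bucket \
  --tags product=checkout,service=payments,component=reporting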
Hi, I am trying to set up the AWS AppFlow Salesforce integration. As part of the setup, the AWS docs require me to set up AWS callback URLs on the Salesforce side: “In the *Callback URL* text field, enter the URLs for your console for the stages and Regions in which you will use the connected app”. Can someone tell me if it’s just the landing page URL of the AWS console?
The following are the requirements and connection instructions for using Salesforce with Amazon AppFlow.
2021-06-04
does anyone know how to change the EnabledCloudwatchLogsExports of an RDS database instance via the CLI?
"EnabledCloudwatchLogsExports": [
"audit",
"general"
],
i want to remove audit
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration EnableLogType=general
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration '{"EnableLogTypes":["general"]}'
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration EnableLogTypes=general
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: No modifications were requested
aws rds modify-db-instance --db-instance-identifier abcedf1234 --cloudwatch-logs-export-configuration DisableLogTypes=audit
slight modification
@Steve Wade (swade1987)
❯ aws rds modify-db-instance --db-instance-identifier re-dev-perf-01 --region eu-west-1 --cloudwatch-logs-export-configuration DisableLogTypes=audit
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: You cannot use the log types 'audit' with engine version mysql 8.0.21. For supported log types, see the documentation.
So it is not supported?
we upgraded from 5.7 to 8.0.21
Looks like with MySQL you need to use a custom option group or custom parameter groups.
You can configure your MySQL DB instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage.
the issue is we left that log group there
but now i can’t remove it and neither can terraform
If you have a custom option group, you can modify it. If not, you will need to create one and set the respective option then assign it to your target.
That cli arg --cloudwatch-logs-export-configuration doesn’t work for mysql. It does for postgres though.
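For the audit log specifically, RDS MySQL exposes it through the MARIADB_AUDIT_PLUGIN option in a custom option group rather than through the export flag, so removing the option there is the usual fix; a sketch with a placeholder option group name, not verified against 8.0.21:
aws rds modify-option-group \
  --option-group-name my-mysql80-options \
  --options-to-remove MARIADB_AUDIT_PLUGIN \
  --apply-immediately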
2021-06-07
Can anyone help me with this one? I can’t seem to find the reason why it’s failing
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject"
],
"Effect": "Deny",
"Resource": ["arn:aws:s3:::test-mazin-12", "arn:aws:s3:::test-mazin-12/*"],
"Condition": {
"StringNotLike": {
"s3:prefix": "allow-me.txt"
}
},
"Principal": "*"
}
]
}
I always get Conditions do not apply to combination of actions and resources in statement
See this answer https://stackoverflow.com/a/44848685/2965993
My goal is to deny write access to most of a bucket to all users except one, who has full access the the bucket defined in a separate policy attached to the user. The top-level directories of the b…
If anyone curious about it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"NotResource": ["arn:aws:s3:::test-mazin-12/allow-me.txt"]
}
]
}
2021-06-08
Amazon Web Services outage map with current reported problems and downtime.
Likely just the console and not a full outage. https://downdetector.com/status/aws-amazon-web-services/
Real-time AWS (Amazon Web Services) status. Is AWS down or suffering an outages? Here you see what is going on.
Also looks like it’s trending down
Does anyone have any good documentation/ideas on deploying to AWS Elastic Beanstalk through a Jenkins pipeline project? We currently just build a grails application into a .war file and deploy it with the EBS plugin through a freestyle project, but we’re looking to expand the functionality of the build a lot and need to move it to pipeline projects. I can’t really find any documentation on this.
Maybe we need a pipeline to run all the pre-deploy steps and build the project, then use what we already have to deploy what we build? Not sure how people handle this specifically
@beaur97 do you have access to Linked In Learning? I have a course there that goes over exactly what you’re describing. That is, it shows you how to run Jenkins on AWS (really, you can run Jenkins anywhere) along with a basic configuration to deploy to Elastic Beanstalk. The course does not use a pipeline, it uses a standard job, but you could easily expand it to use a pipeline for a more complex project/deployment. I don’t have the bandwidth to give 1:1 support on this but if you have a question I might be able to answer it here asynchronously.
https://www.linkedin.com/learning/running-jenkins-on-aws-8591136/running-jenkins-on-aws
(also, if you don’t have access to Linked In Learning, check your local library! many libraries offer free access to Lynda.com and/or Linked In Learning. Lynda is very slowly being phased out but the content is the same!)
Join Michael Jenkins for an in-depth discussion in this video, Running Jenkins on AWS, part of Running Jenkins on AWS.
@managedkaos sadly that’s the part I don’t have a problem with. We already have it set up deploying to AWS through a standard job and I’m trying to get it to deploy through a pipeline instead and just not super sure on the needed syntax for the actual deployment part
if you have scripts/commands for the deploy, you can probably just wrap your scripts up in a pipeline doc and use that. i will try to follow up later with an example.
Cool, we don’t have scripts, just set it up with the plug-in to deploy so I guess that’s the issue, starting from nothing when I switch to a pipeline I’ll need to grab credentials, send file to S3, point the EBS instance to that file in S3(?). So nothing super complicated to start, just can’t find good documentation on it
What are you looking to do specifically? Just handle the build & deployment of the grails app into EB from Jenkins? Assuming the EB env already exists? That should be straightforward by getting Jenkins to build the app, and then using the AWS CLI in the pipeline to handle the deployment to the pre-existing EB environment. But if you’re looking to get Jenkins to build out the EB env also, you’ll be best off using Terraform to build out the EB environment. If it’s the actual pipeline you’re looking for help with…
pipeline {
agent any
stages {
stage('Checkout App Src'){
steps {
git changelog: false,
credentialsId: 'repo-creds',
poll: false,
url: '[email protected]'
}
}
stage('Build App') {
steps {
sh 'grails create-app blah'
}
}
stage('Push App') {
steps {
sh 'zip -r app.zip app.war'
sh 'aws s3 cp app.zip s3://blah'
}
}
stage('Create EB Version') {
steps{
sh 'aws elasticbeanstalk create-application-version --application-name grails-app --version-label blah --source-bundle S3Bucket=blah,S3Key=app.zip'
}
}
stage('Update EB Environment') {
steps{
sh 'aws elasticbeanstalk update-environment --environment-name grails-app --version-label blah'
}
    }
  }
}
There’s likely typos and syntax errors… written freehand, but it’ll give you an idea, if that’s what you’re looking for. If not, add some more specific detail to explain exactly what you need.
@Jon Butterworth Thanks for this, it’s exactly what I need. Only question I have is that does any auth need to happen in the pipeline to use AWS CLI? Our Jenkins server is in AWS and deploys to it currently through some set up credentials file. I’m assuming this is an IAM role set up for Jenkins already that I can use. Just 100% new to this for AWS specifically so trying to make sure I have everything covered before I start
@beaur97 it depends how you have the auth set up for AWS. If it’s just a regular credentials file in ~/.aws/credentials then you shouldn’t need to do anything. Just run the pipeline and it should work.
Okay, I ssh’d into the box and had access to the cli, and the jenkins project has the aws credentials in its global config so I think it should work. Can’t get to migrating it til next week probably, but looking good. Thanks for the help!
2021-06-09
2021-06-10
I just had a discussion with one of our developers regarding a customer of ours who wants to be able to access an RDS instance to be able to SELECT some data.
That raises some challenges. Our RDS is in a private subnet and can only be reached externally using a tunnel to our Bastion host. From a security-perspective (but also maintenance) I’m not a fan as it would mean granting AWS access, creating an IAM role allowing them to connect to the Bastion node, creating a user on the RDS instance and granting appropriate permissions etc.. This doesn’t feel like the most practical/logical approach.
How do you guys deal with granting access to externals to “internal” systems?
You can use AWS systems manager and RDS IAM authentication to improve the experience of the flow you describe (SSH tunnel)
In general though, I think if an external system needs this access, you need to bridge the network. In other words a VPN
Another workaround is to place the database on a public subnet and rely on security groups to only open access for certain public IPs
silly question, do they need access to the actual DB? If they’re running selects, what about replicating data to another RDS instance in a public subnet with only the data they need or exporting the data to S3 and providing them access to that via Athena?
@Alex Jurkiewicz Good suggestions. A public RDS is probably the “easiest” although that would still require a new instance to be created since IIRC you cannot change the subnet groups for an existing RDS instance. But after that all it needs is a security group + a user with SELECT privs
@Darren Cunningham Exporting the data to S3 might be a good alternative too. But I have a feeling that the customer actually expects an actual DB to access. We’ve had a similar request a while ago, but we could squash that by providing them a temporary public RDS instance based off a snapshot. But since the DB is 50GB it’s not practical to do that on a frequent basis
@Frank I came across such requests in the past. It’s much better to find a way to replicate data for the customer into their own RDS (they can manage an RDS and you insert data into it) than break your security model. The risk in opening your private DB to someone you don’t control is a great one (as far as I’m concerned).
Tbh, a 500gb snapshot should spin up quickly, sandpit sounds ideal.
It is possible to change subnets of an existing instance from private to public, but you must change one AZ at a time. It’s slow and fiddly
a 500gb cluster clone takes about 7 to 8 min to spin up, and you could try cloning it to a special VPC that has the proper access
and destroy it after it is used
8 minutes? Wow. I believe restoring our 50GB DB from the snapshot took ~20-30 minutes
But the separate VPC / sandpit sounds a lot better than having to poke holes for the existing instance. It should be doable to automate that I guess. It would only work for cases where the data doesn’t have to be up-to-date though, might still cover a lot of the cases
read: clones, not restores
in my previous company we used clones to point to a temporary cluster while we updated the schema in the main cluster or imported billions of rows ( eventually consistent data)
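A rough CLI sketch of that clone approach, assuming an Aurora cluster (identifiers, instance class and engine below are placeholders):
# create a copy-on-write clone of the production cluster
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier prod-cluster \
  --db-cluster-identifier customer-sandbox \
  --restore-type copy-on-write \
  --use-latest-restorable-time
# the clone needs at least one instance before it can be queried
aws rds create-db-instance \
  --db-cluster-identifier customer-sandbox \
  --db-instance-identifier customer-sandbox-1 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql
Delete the instance and the cloned cluster once the customer is done.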
What’s the best way to enable EBS encryption by default across accounts in an AWS organization? Should I deploy a lambda to all member accounts to enable the setting or is there a better way?
This is meant to fix a Config rule compliance issue
You could enable that globally
How so? Been sifting through docs
I’d personally test it in some test account with existing infrastructure to make sure it worked as expected
What I have been able to do is set up the config rule globally
This looks like a setting for a single account
I am trying to do this for multiple accounts using Landing Zone and CloudFormation
you probably have to do it for each account, I’d imagine
Would be nice if it was a toggle directly in landing zone
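The setting itself is per account and per region, so whichever mechanism does the rollout (Lambda, StackSets, a script looping over assumed roles) ends up running something like this in each account/region; the key alias is a placeholder:
# enable EBS encryption by default for the current account/region
aws ec2 enable-ebs-encryption-by-default --region us-east-1
# optionally use a customer-managed key instead of the AWS-managed one
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/ebs-default --region us-east-1
# verify
aws ec2 get-ebs-encryption-by-default --region us-east-1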
are the aws docs down for anyone else? https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
edit: seem to be back
nice. You can create a Lambda Layer with a long name that has an ARN longer than what the Lambda API will accept when creating/updating a function. So you can create a layer but never attach it to anything
2021-06-11
at this point I think I get paid to deal with AWS apiisms like this all the time so I do not even get mad anymore
You have my total support; I went crazy when I developed https://github.com/claranet/aws-inventory-graph
Explore your AWS platform with, Dgraph, a graph database. - claranet/aws-inventory-graph
no consistency: VPCId, VpcID, VpcId, etc
dates have at least 3 different formats
drives me nuts with the random use of local vs utc time
And now they’ll never fix it because they’re so backwards compatible.
2021-06-13
Say I have an API Gateway endpoint that triggers an async lambda (meaning the http req will return immediately and the lambda will run async). Say each lambda invocation takes 1 second to run, and my reserved concurrency is 100 lambdas for that function, but my reqs/second to the endpoint is 200.
In this case, the queue of lambda functions would continuously grow it seems like, as there wouldn’t be enough lambdas to keep up with the queued events.
I have three questions:
- How do I monitor if this type of queueing is happening? I use datadog, but any pointers on any system are helpful
- Can I tell Lambda to just drop messages in the queue older than X hours if the messages are not that important to process?
- Can I tell Lambda to only process X% of the traffic, discarding the rest?
I can’t speak to the monitoring portion yet, but just wanted to ask: if you have a requirement for processing all requests (vs just dropping them), have you considered an implementation that uses a true queue like SQS or AMQ? I imagine this as one API and two lambdas. The API triggers a lambda that simply puts the requests details into the queue and the second (async) lambda is triggered by the queue to process a new item.
Yep, we use the above model. The lambda Async queue is not configurable or observable, so you don’t want invocations to ever go there. https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. When you invoke a function asynchronously, you don’t wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource to chain together components of your application.
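On the second and third questions, the async queue can at least be bounded with an event invoke config; a sketch with a placeholder function name and illustrative values:
# discard queued async events older than 1 hour and don't retry failed invocations
aws lambda put-function-event-invoke-config \
  --function-name my-async-fn \
  --maximum-event-age-in-seconds 3600 \
  --maximum-retry-attempts 0
Pairing this with an on-failure destination (SQS or SNS) at least makes the dropped events visible.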
2021-06-14
Greetings, team! Question about using aws ssm start-session for SSH connections.
I’m able to connect to a server using the following SSH configuration:
Host host_name
ProxyCommand aws ssm start-session --target=i-1234567890 --document-name AWS-StartSSHSession --parameters 'portNumber=%p'
However, this still requires a user and key to be in place on the target server for the user that’s connecting.
Using aws ssm start-session --target=i-1234567890 directly, instead of as a ProxyCommand, also works and drops me into the server as ssm-user. However, there are too many servers to know them by their ID; the name just works so much better with the human brain. :D
Is there a way to get the functionality of starting a session as ssm-user without using a key? Essentially, I’d like to not have to provision a user on the server but instead gate access with IAM roles that have permission to start an SSM session.
You could write a small wrapper script that retrieves the instance ID using the hostname (assuming it’s tagged) and feed the results into the aws-cli.
Something like this might do the trick:
#!/bin/bash
SRV_HOSTNAME="${1}"
REGION="${2:-eu-west-1}"
INSTANCE_ID=$(aws ec2 describe-instances --filters Name=instance-state-name,Values=running Name=tag-value,Values="${SRV_HOSTNAME}" --query 'Reservations[*].Instances[*].{Instance:InstanceId}' --output text --region "${REGION}")
aws ssm start-session --target="${INSTANCE_ID}"
The ProxyCommand isn’t working here. After adding user ssm-user it still tries to authenticate using my key
https://github.com/qoomon/aws-ssm-ec2-proxy-command Looks like someone had the same problem and has fixed that particular key issue by adding it to authorized_keys for 60 seconds. enough to allow you to connect
AWS SSM EC2 SSH Proxy Command. Contribute to qoomon/aws-ssm-ec2-proxy-command development by creating an account on GitHub.
yes, like your wrapper I have a script that reads all instances and picks up the name tag and instance ID to generate an SSH config like the snippet in the original post.
thanks for the suggestion. I’ll look into it!
@Frank thanks again for this tip. it works nicely indeed! One caveat: connections failed initially because the password was expired for the ssm-user on the target server. So i had to connect with a different account and reset the password. after that the script worked for SSH and SCP. Cheers!
Odd that the ssm-user account has a password (that expires). But glad to hear you got it to work!
yeah! its crazy odd. so i can get this method working but to get it really working, i have to loop through all my instances and update the ssm-user password
Is it possible to move everything in a VPC from one set of IPs to another? Not sure exactly how that’s worded but my boss who doesn’t have a ton of networking knowledge is the only one allowed in AWS. He set up a VPC for me to use for “dev tools” starting with Prometheus/Grafana server. Using these will require a VPC peering connection to our other VPCs, but I’m 90% sure he used the same CIDR block for both VPCs, which will block the Peering connection.
Is there some way to migrate the instance or 2 I have in the dev tools VPC to a new block, or will I have to tear it down and set up a new one?
I’m assuming I’ll have to tear down the instance for a new one, but I think I can back it up through like EBS and create another image from it, right?
you can attach another CIDR to an existing VPC
create new subnets and such
and you could migrate over
but at that point it’s way better to create a new VPC and migrate within the same account if needed
@jose.amengual so you can set everything up then move an existing instance? Sorry if these questions are super basic, like I said only my boss can do anything in AWS itself so I can have simple knowledge gaps on stuff like that
you can create another VPC in the same account with a non-colliding CIDR and peer the existing vpc with the new one so you can migrate the services in stages, and once you are done with the move delete the peering and peer with the other vpcs
Okay, I’ll look into this tonight. Thanks!
you will have to redeploy mostly
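For completeness, attaching a secondary CIDR plus new subnets to the existing VPC looks roughly like this (IDs and ranges are placeholders), though as noted above a fresh VPC with peering is usually the cleaner path:
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0123456789abcdef0 --cidr-block 10.20.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.20.1.0/24 --availability-zone us-east-1a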
2021-06-15
Hi,
I have a question with the AWS EFS module. I’m trying to build an EFS filesystem and access it from a swarm cluster built with this terraform module:
https://gitlab.com/live9/terraform/modules/-/tree/master/swarm_cluster
and I’m using the EFS module like this:
data "aws_vpc" "selected" {
# default = true
id = file("../swarm_cluster/output/vpc_id.txt")
}
data "aws_subnet_ids" "subnet" {
vpc_id = data.aws_vpc.selected.id
# filter {
# name = "availabilityZone"
# values = ["us-east-2a", "us-east-2b", "us-east-2c"] # insert values here
# }
}
module "efs" {
source = "git::https://github.com/cloudposse/terraform-aws-efs.git?ref=master"
namespace = var.project
stage = var.environment
name = var.name
region = var.aws_region
vpc_id = data.aws_vpc.selected.id
subnets = tolist(data.aws_subnet_ids.subnet.ids)
security_groups = [file("../swarm_cluster/output/swarm-sg_id.txt")]
tags = var.extra_tags
# zone_id = var.aws_route53_dns_zone_id
}
The swarm module writes the VPC id and security group to files so I can access them from the EFS code. But when running terraform apply I get this error:
Error: MountTargetConflict: mount target already exists in this AZ
│ {
│ RespMetadata: {
│ StatusCode: 409,
│ RequestID: "5fc21e72-d970-4b28-992d-211acf4c0491"
│ },
│ ErrorCode: "MountTargetConflict",
│ Message_: "mount target already exists in this AZ"
│ }
│
│ on .terraform/modules/efs/main.tf line 22, in resource "aws_efs_mount_target" "default":
│ 22: resource "aws_efs_mount_target" "default" {
One time per subnet. What am I doing wrong?
Thanks!
Terraform modules
in AWS EFS console, check if mount target already exists in the AZ
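The same check from the CLI, assuming the filesystem id is known (the id below is a placeholder):
aws efs describe-mount-targets --file-system-id fs-12345678 \
  --query 'MountTargets[].[MountTargetId,SubnetId,LifeCycleState]' --output table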
also instead of reading from files, you can just hardcode the VPC ID and subnets ID to test the EFS module
also look at this working example https://github.com/cloudposse/terraform-aws-efs/blob/master/examples/complete/main.tf
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
Hi @Andriy Knysh (Cloud Posse) thanks for your answers. About your suggestion of checking if the mount target already exists: if I’m creating the EFS, how could the mount target exist before the EFS filesystem is created?
(if it was created before manually for example, or somebody else created it with terraform, or you started applying the TF code and it failed for any reason, the resources got created, but the TF state did not get updated. Are you using a local state? If yes, it’s the common source of errors like this: resources get created, then local state gets lost for any reason, then TF sees the resources in AWS but does not have the state file anymore)
no, I’m using a remote state stored in s3. This is just a test on my personal aws account. I remember having created an EFS filesystem (with another name) manually before, but it was also deleted before I started testing the EFS module.
I can delete the remote state to be sure, but if there aren’t any EFS filesystems created, is there another way to check if there are any stale mount targets left ?
Ok, I deleted the remote state, removed the .terraform directory and the local terraform.tfstate* files and ran terraform init again. Then I ran terraform apply again and monitored the EFS AWS console while terraform was running. When the new EFS appeared on the console I went to its Network tab and saw the three mount targets in “creating” state (one per the three subnets on the vpc). Some minutes later all appeared in the “Available” state but terraform failed with the same errors as before.
I think I found the issue: the cluster code creates public and private subnets for each AZ, so the EFS module creates the mount targets for the first three subnets of the three AZs and then fails when trying to create the other three.
yes, use just private subnets as shown in the example ^
Excellent, that was the issue. I filtered the subnets by a tag that only the private subnets have and it worked. Thanks for the pointers
@Andriy Knysh (Cloud Posse) helped me figure out the issue. The subnet ids datasource was retrieving more than one subnet per AZ, so I modified the datasource filter to only fetch private subnets and that was all.
nice. If you use AWS SSM Session Manager for access to ec2 instances, you can choose to write logs of each session to S3 and encrypt the logs with KMS. However, to do so you need to grant kms:Decrypt permissions to the iam users that use Session Manager and to the ec2 instance profiles of the instances being connected to. Not kms:Encrypt, but Decrypt!
if you like to use SSM, I developed this a long time ago, maybe you might find it useful https://github.com/claranet/sshm
Easy connect on EC2 instances thanks to AWS System Manager Agent. Just use your ~/.aws/profile to easily select the instance you want to connect on. - claranet/sshm
Ah, that might explain why I was having such a hard time getting SSM session logging working. Not the most obvious choice of IAM permissions.
yeah. My S3 logging is not currently working, but there are no errors. Time for AWS support!
mmmm I do not remember having to give the users access to the kms key
the instance yes, since the agent needs to write the logs
Ah right. This makes more sense. I’m using the key for transit encryption
Both sides of the tunnel will need to decrypt their data with it
correct, but the user that uses it does not need access to the key
in fact that could be dangerous
They do, they need decrypt access
but access to what? are you using the SSH proxy for session manager?
because in that case yes, they will need that; through the console they do not
(only if the key have a policy for root account access)
2021-06-16
Is anyone experiencing issues in us-east-2?
nope, weather is perfect today
couldn’t help myself – which service(s) you referring to?
Yes
Our TAM reported elevated API error rates
we have our TAM on call as well. Still waiting for a resolution to this outage
2021-06-17
Hi all, Is there any convenient tool for mfa with aws-cli?
If aws sso is configured then you have mfa built in and it expires every 24 hours or less
I think you can also configure awscli to prompt you for mfa codes
You can also use aws-vault
https://github.com/99designs/aws-vault/blob/master/USAGE.md#mfa
A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault
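Typical aws-vault usage once the profile has mfa_serial set (the profile name below is a placeholder):
aws-vault add myprofile                                  # store long-lived keys in the OS keychain
aws-vault exec myprofile -- aws sts get-caller-identity  # prompts for the MFA code, injects temporary credentials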
I’ve used custom awskeygen tools with multiple projects in the past. most of them are not open source… basically we generated aws keys using SAML e.g. this python one, https://github.com/jleaniz/AWSKeygen if you look for adfs.py in github you can see a lot of results to check e.g. https://github.com/mynameisakash/python_mfa_saml_adfs I remember seeing go tools for the same as well… (just like above aws-vault)
This is not a direct answer to your question but related!
I use MFA for AWS CLI and have my call to aws auth wrapped in a script that checks the expiration time of the session. that way I can open new terminals and auth there without having to MFA again until the token expires.
export AWS_SESSION_EXPIRATION=$(grep "\[$ENV_NAME\]" -A 4 ~/.aws/credentials | grep aws_session_expiration | awk '{print $3}')
if [ $AWS_SESSION_EXPIRATION -gt $(date +%s) ];
then
echo "AWS session expires on $(date -r $AWS_SESSION_EXPIRATION)"
else
echo "AWS session has expired!"
echo "Running 'aws-saml-auth'"
# AWS SAML Auth Settings
aws-saml-auth login
aws --profile=${ENV_NAME} sts get-caller-identity
fi
A couple years ago, I wrote this py script. It served its purpose well at the time. It still works, but I haven’t used it since switching to AWS SSO.
https://gist.github.com/sgtoj/4fb6bf2bdb68b8992cdca54b82835faf
I have similar one for AWS SSO too. https://gist.github.com/sgtoj/af0ed637b1cc7e869b21a62ef56af5ac
Leapp is the tool to access your cloud; It securely stores your access information and generates temporary credential sets to access your cloud ecosystem from your local machine. - Noovolari/leapp
You can use Leapp both for AWS SSO and MFA and also for cross-account roles in AWS
There is a good talk about it in office hours: https://youtu.be/-Jn53731i7s?t=2055
I’m stealing this one from @RB https://aws.amazon.com/about-aws/whats-new/2021/06/kms-multi-region-keys/
If I have an API Gateway and I set a global rate limit of 25 reqs / second on it, but I receive 100 reqs per second, am I charged for the requests that are rejected (in the $3.50/million requests pricing)?
sounds like a fantastic question for AWS support
probably yes; the only way to really block it, I think, is with a WAF in case of a DDoS
2021-06-18
2021-06-22
Folks, does anyone know what this master_user_options is meant for? I mean, do we use this master user for actually connecting to the elasticsearch setup and pumping in the logs?
2021-06-23
Question: Is there a way to serve a static HTML page from S3 through an ALB?
TLDR:
On occasion I use maintenance pages for long deployments or changes. I do this by creating a /* rule in the ALB listener that reads a local html file for the response content:
resource "aws_lb_listener_rule" "maintenance_page" {
listener_arn = aws_lb_listener.alb.arn
action {
type = "fixed-response"
fixed_response {
content_type = "text/html"
message_body = file("${path.module}/maintenance_page.html")
status_code = "200"
}
}
condition {
path_pattern {
values = ["/*"]
}
}
}
Unfortunately, this method only allows for content that is less than or equal to 1024 bytes. So the page is minimally styled. I’d like to add richer content with CSS and images (well, not me but the developers! ) but I know that will require more bytes. I’m thinking maybe the CSS could come from a link but even then, depending on how much is added to make the maintenance page look like the app, it will take more than 1024 bytes.
So I’m thinking we could store the page in S3 and then serve it from there. I’d prefer not to do any DNS dancing with the app endpoint and instead just update what the app is serving from the ALB. Any thoughts or ideas?
I know of no native integration with ALB and S3. I have used ngnix as a proxy in the past.
You can probably do what you need with lambda though.
user -> alb -> lambda -> s3
Just need to make sure the content type header is set to text/html (via lambda response) so that the client (browser) renders it correctly.
ok cool. that could work. I’m taking a look here: https://aws.amazon.com/blogs/networking-and-content-delivery/lambda-functions-as-targets-for-application-load-balancers/
As of today, Application Load Balancers (ALBs) now support AWS Lambda functions as targets. Build websites and web applications as serverless code, using AWS Lambda to manage and run your functions, and then configure an ALB to provide a simple HTTP/S frontend for requests coming from web browsers and clients. Triggering a Lambda Function from […]
Yeah. I have used that integration in the past for API request (in place of api-gateway). I see no reason it can not work to serve html pages.
We are using that for our maintenance page as well. Just adding a catch-all to a Lambda function. No S3, though all inline with Lambda, because the page is really simple. Works pretty well.
yes i would go with keeping all the code in lambda. no reason to go to S3 if the lambda can handle the request
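For reference, wiring a Lambda up behind an ALB is just a lambda-type target group plus an invoke permission; a rough sketch with placeholder names and ARNs:
# <target-group-arn> and <lambda-function-arn> are placeholders
aws elbv2 create-target-group --name maintenance-page --target-type lambda
aws lambda add-permission \
  --function-name maintenance-page \
  --statement-id alb-invoke \
  --principal elasticloadbalancing.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn <target-group-arn>
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<lambda-function-arn>
The listener rule then forwards to that target group instead of returning the fixed response.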
2021-06-24
This is really cool. Just heard about it from another devops group: https://www.allthingsdistributed.com/2021/06/introducing-aws-bugbust.html
The first global bug-busting challenge to fix one million bugs and save $100 million in technical debt
I’m failing to see how this is supposed to benefit the dev community. This just seems like Amazon using their influence to get the community to do more work for less. Maybe I’m missing something though.
I don’t think the bugs are Amazon’s bugs. they are providing a gamified platform for teams to use Code Guru on Java or Python codebases to fix the team’s bugs.
At least that’s the way i’m reading it
yeah, they’re creating a platform that allows anybody to open up their software for the community to squash bugs…but I’m not tracking why the community should get involved
Ahh, i guess i didn’t see where the code was open. i was thinking it would just be a team working on that team’s code
open to the “bug squashers”, not necessarily the world
honestly the how it works is less of the issue for me, it’s the why
2021-06-25
Hi everyone, I am trying to create AWS Route53 resources using Terraform but want to use the dynamic block approach. Can anyone help me with how to do it the right way?
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CREATE ROUTE53 ZONES AND RECORDS
#
# This module creates one or multiple Route53 zones with associated records
# and a delegation set.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ------------------------------------------------------------------------------
# Prepare locals to keep the code cleaner
# ------------------------------------------------------------------------------
locals {
zones = var.name == null ? [] : try(tolist(var.name), [tostring(var.name)], [])
skip_zone_creation = length(local.zones) == 0
run_in_vpc = length(var.vpc_ids) > 0
skip_delegation_set_creation = !var.module_enabled || local.skip_zone_creation || local.run_in_vpc ? true : var.skip_delegation_set_creation
delegation_set_id = var.delegation_set_id != null ? var.delegation_set_id : try(
aws_route53_delegation_set.delegation_set[0].id, null
)
}
# ------------------------------------------------------------------------------
# Create a delegation set to share the same nameservers among multiple zones
# ------------------------------------------------------------------------------
resource "aws_route53_delegation_set" "delegation_set" {
count = local.skip_delegation_set_creation ? 0 : 1
reference_name = var.reference_name
depends_on = [var.module_depends_on]
}
# ------------------------------------------------------------------------------
# Create the zones
# ------------------------------------------------------------------------------
resource "aws_route53_zone" "zone" {
for_each = var.module_enabled ? toset(local.zones) : []
name = each.value
comment = var.comment
force_destroy = var.force_destroy
delegation_set_id = local.delegation_set_id
dynamic "vpc" {
for_each = { for id in var.vpc_ids : id => id }
content {
vpc_id = vpc.value
}
}
tags = merge(
{ Name = each.value },
var.tags
)
depends_on = [var.module_depends_on]
}
# ------------------------------------------------------------------------------
# Prepare the records
# ------------------------------------------------------------------------------
locals {
records_expanded = {
for i, record in var.records : join("-", compact([
lower(record.type),
try(lower(record.set_identifier), ""),
try(lower(record.failover), ""),
try(lower(record.name), ""),
])) => {
type = record.type
name = try(record.name, "")
ttl = try(record.ttl, null)
alias = {
name = try(record.alias.name, null)
zone_id = try(record.alias.zone_id, null)
evaluate_target_health = try(record.alias.evaluate_target_health, null)
}
allow_overwrite = try(record.allow_overwrite, var.allow_overwrite)
health_check_id = try(record.health_check_id, null)
idx = i
set_identifier = try(record.set_identifier, null)
weight = try(record.weight, null)
failover = try(record.failover, null)
}
}
records_by_name = {
for product in setproduct(local.zones, keys(local.records_expanded)) : "${product[1]}-${product[0]}" => {
zone_id = try(aws_route53_zone.zone[product[0]].id, null)
type = local.records_expanded[product[1]].type
name = local.records_expanded[product[1]].name
ttl = local.records_expanded[product[1]].ttl
alias = local.records_expanded[product[1]].alias
allow_overwrite = local.records_expanded[product[1]].allow_overwrite
health_check_id = local.records_expanded[product[1]].health_check_id
idx = local.records_expanded[product[1]].idx
set_identifier = local.records_expanded[product[1]].set_identifier
weight = local.records_expanded[product[1]].weight
failover = local.records_expanded[product[1]].failover
}
}
records_by_zone_id = {
for id, record in local.records_expanded : id => {
zone_id = var.zone_id
type = record.type
name = record.name
ttl = record.ttl
alias = record.alias
allow_overwrite = record.allow_overwrite
health_check_id = record.health_check_id
idx = record.idx
set_identifier = record.set_identifier
weight = record.weight
failover = record.failover
}
}
records = local.skip_zone_creation ? local.records_by_zone_id : local.records_by_name
}
# ------------------------------------------------------------------------------
# Attach the records to our created zone(s)
# ------------------------------------------------------------------------------
resource "aws_route53_record" "record" {
for_each = var.module_enabled ? local.records : {}
zone_id = each.value.zone_id
type = each.value.type
name = each.value.name
allow_overwrite = each.value.allow_overwrite
health_check_id = each.value.health_check_id
set_identifier = each.value.set_identifier
# only set default TTL when not set and not alias record
ttl = each.value.ttl == null && each.value.alias.name == null ? var.default_ttl : each.value.ttl
# split TXT records at 255 chars to support >255 char records
records = can(var.records[each.value.idx].records) ? [for r in var.records[each.value.idx].records :
each.value.type == "TXT" && length(regexall("(\\"\\")", r)) == 0 ?
join("\"\"", compact(split("{SPLITHERE}", replace(r, "/(.{255})/", "$1{SPLITHERE}")))) : r
] : null
dynamic "weighted_routing_policy" {
for_each = each.value.weight == null ? [] : [each.value.weight]
content {
weight = weighted_routing_policy.value
}
}
dynamic "failover_routing_policy" {
for_each = each.value.failover == null ? [] : [each.value.failover]
content {
type = failover_routing_policy.value
}
}
dynamic "alias" {
for_each = each.value.alias.name == null ? [] : [each.value.alias]
content {
name = alias.value.name
zone_id = alias.value.zone_id
evaluate_target_health = alias.value.evaluate_target_health
}
}
depends_on = [var.module_depends_on]
}
Please use a code snippet for such a long paste
2021-06-28
Hey there, I’m pretty new to k8s and AWS and I’m wondering if it is a good idea to use AWS services via AWS Service Broker. The project and its docs seem a little bit outdated, so I’m afraid of flogging a dead horse…
I think service broker is getting replaced with “ACK” which I think is aws controllers for kubernetes. However crossplane looks like a better project. It’s third party but appears to have good support and be multicloud.
Thank you, I’ll have a look at these
That’s my impression too, although I have not used crossplane myself, very curious about other people’s experiences.
Hi I have a requirement to re-encrypt traffic from alb to backend hosts using a 3rd party cert. The traffic from client to alb is already encrypted using ACM cert. Would appreciate pointers to any reference docs and/or suggestions to enable this. Thanks !
you basically just throw nginx or haproxy or whatever in front of your app and it has the cert configured. Note that AWS will not verify the cert, they don’t care
awesome! meaning that no extra config is required on the ALB itself to enable any of this?
but that’s correct, no config on the ALB itself.
Is there any benefit to this other than checking a compliance box? I’ve been asked this before and haven’t dived into it yet. Wasn’t certain how much security benefit there is with a reverse proxy doing ssl, when it’s already behind the ALB.
well, it does ensure that the connection is encrypted - it just isn’t very good because it’s usually self-signed, and because AWS doesn’t care about the validity of it
Got ya. The connection in question is in the vpc though, right? So basically for that to be truly useful, would it be fair to say that every single connection in the VPC would need to be encrypted? Asking since relevant to some of my work and I’m not certain the additional toil is worth it for my situation, but willing to adjust if I’m missing something
maybe not, if you determined that a particular route was ‘sensitive’ and only wanted to encrypt that path. This is more applicable to gov, finance and healthcare though where you’re under a strict framework of requirements
like HIPAA/HiTrust says all protected information has to be encrypted in transit - everywhere
So even though it would be private inside subnet, I’d need to manage certs for connecting to my database via TLS?
right, or your cloud provider has them built-in (AWS RDS)
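For RDS that mostly comes down to trusting the RDS CA bundle and requiring TLS from the client; a sketch assuming MySQL, with a placeholder endpoint:
# download the RDS certificate bundle and connect with verification enabled
curl -sSo rds-ca-bundle.pem https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
mysql -h mydb.abc123xyz.eu-west-1.rds.amazonaws.com -u app -p \
  --ssl-ca=rds-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY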
2021-06-29
I’m using AWS ECS with a load balancer in front of my services. Now I want to restrict access to the Mongo Atlas database (whitelist) to certain IP addresses. How do i find out the external addresses of my backend services that are running on ECS? Should that be the IP address of the load balancer? Mongo Atlas whitelisting only supports IP address or CIDR notation, I can’t put DNS there.
I guess those ECS instances have a security group. Can you reference that security group in an inbound rule on the security group of your backend services?
Maybe I wrote the question badly. I just want to allow only my app running on AWS to access the Mongo Atlas servers. And they only support adding IP address whitelisting, which IP do i choose (out of all AWS layers)?
AWS instances have public IPs as well. It depends: if you have configured the ECS cluster to give the instances public IPs, you can see them and filter them in the Mongo Atlas servers. The Mongo Atlas servers are not in your AWS account, right? They are Mongo’s property?
The load balancer is the only place where the traffic comes in to your instances, so you can filter your incoming traffic there, but not the outgoing.
Yeah mongo clusters have nothing to do with AWS.
I know instances have public IPs, but I don’t know whether ECS may replace my instances and then I’d need to set the new IP address again. Or will that never happen?
True. If they scale up/down, they will be replaced and the instances will get new IPs. I see an elastic IP as a temporary solution, because you cannot add the LB IP. Or maybe define a ‘jump host’ with a static (elastic) IP from where your instances will access Mongo Atlas. But IMO they (mongo) should have a smoother solution
Customers want to guarantee private connectivity to MongoDB Atlas running on AWS. All dedicated clusters on MongoDB Atlas are deployed in their own VPC, so customers usually connect to a cluster via VPC peering or public IP access-listing. AWS PrivateLink allows you to securely access MongoDB Atlas clusters from your own VPC. In this post, follow step-by-step instructions to configure AWS PrivateLink for MongoDB Atlas, ensuring private connectivity to your data.
@Ognen Mitev thanks, will check that out!
Greetings! Let me know, I am curious how you solved it!
In case your application is placed in private subnets, you can provide the Mongo Atlas service your AWS NAT gateway EIP, because your application will go out to the Internet via the NAT gateway. However, the better solution is to use AWS PrivateLink, which @Ognen Mitev mentioned above. That will be more secure. I established the connection between our apps and Mongo Atlas through PrivateLink and it is all working well.
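Looking up the NAT gateway EIP(s) to put on the Atlas access list can be scripted; the VPC id below is a placeholder:
aws ec2 describe-nat-gateways \
  --filter Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'NatGateways[].NatGatewayAddresses[].PublicIp' --output text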
:wave: I’m trying to allow AWSAccountA to send eventbridge events to AWSAccountB. In AWSAccountB I can get this to work correctly with a * principal
principals {
type = "*"
identifiers = ["*"]
}
however, the following principal doesn’t work
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
Is this because a cross account EventBridge message isn’t marked as an "events.amazonaws.com" service principal? (it seems like I’d want to avoid anonymous principals)
The principal is the account ID (per their standard format with :root etc). See: https://aws.amazon.com/blogs/compute/simplifying-cross-account-access-with-amazon-eventbridge-resource-policies/
You are right regarding not using *, that’s bad practice.
in this case I have a dynamic/variable number of accounts that need the ability to push events to AWSAccountB. I’ve locked down the policy with conditions that scope access to a specific Organization OU, "events:source", and "events:detail-type". Should keep things safe but the wildcard principal still feels awkward
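For a concrete alternative to the wildcard, the event bus policy can grant each sender account explicitly; a sketch with a placeholder account id (an aws:PrincipalOrgID condition is another option when the sender list is dynamic):
aws events put-permission \
  --event-bus-name default \
  --action events:PutEvents \
  --statement-id AllowAccountA \
  --principal 111122223333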
Is there a standard way to share private Docker images with external customers?
I have a docker registry on ECR with images that we allowlist our customers’ external AWS account IDs to access. If we have a customer who wanted to access the image without AWS (like through Google Cloud), how could we make that happen while not making our registry public?
I wonder if you can use federation for this.
The only other method i can think of is with aws access and secret tokens… But i don’t like that approach for obvious reasons
@David You can also utilise services such as Cloudsmith (I work there) or other agnostic sources of truth to manage the distribution of assets like Docker. We have customers who are distributing to users all over the world, at scale. Utilising the system of entitlement (access) tokens, you can associate your customers with registries/repositories for access, so they are able to only pull the images and assets they specifically have access to. The good thing about being agnostic is not being tied to a particular ecosystem (e.g. AWS or GCP), but still being universally available via a high-performance CDN.
2021-06-30
FYI if you use gimme-aws-creds, okta changed something recently that has apparently been slowly rolling out to customers, and it totally breaks gimme-aws-creds. A hotfix release was made yesterday to gimme-aws-creds to account for the change