#aws (2021-04)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2021-04-01
Just curious if people have purchased AWS RDS Reserved Instances before, any best practices / pitfalls to share.
• I understand that we cannot use AWS savings plans, only RI’s.
• do you guys standardize on a specific instance size (ie. m4.2xlarge) so you don’t waste reservations?
• confirm my understanding that if we purchase an RDS RI, it’ll automatically apply to existing instances? We don’t need to re-spin a new RDS from snapshot, right?
Yes, you don’t need to re-spin. The reservation automatically matches existing instances.
Savings Plans can’t be applied to RDS.
do you guys standardize on a specific instance size (ie. m4.2xlarge) so you don’t waste reservations?
This depends. I actually never did it.
Check the “How billing works” section at https://aws.amazon.com/rds/reserved-instances/
With Amazon RDS Reserved Instances, reserve a DB instance for a one or three year term at a significant discount compared to the On-Demand Instance pricing for the same DB instance.
If your RDS are all in the same family (ie, M or R, etc) then the reservation is flexible and can be applied to any instance in that family
for example if you are using entirely XLs you could actually purchase all Large RIs, and 2 Large RIs would cover 1 XL instance. Or vice versa, an XL RI could cover 2 Large RDS.
Thanks - I see people in my org have done EC2 reservations but not savings plans. I like the flexibility option, since I doubt I’d ever move from a t- to an m- family; most likely we’d just go up and down within a family (ie. m4).
So in theory, this would be a completely separate and non-impacting change/process that doesn’t matter at all to my terraform/deployment setup. I simply arrange the purchase with my cloud platform team/AWS admins
has anyone seen this before ….
cloud-nuke defaults-aws
INFO[2021-04-01T13:40:37+01:00] Identifying enabled regions
ERRO[2021-04-01T13:40:37+01:00] session.AssumeRoleTokenProviderNotSetError AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
github.com/gruntwork-io/[email protected]/errors/errors.go:81 (0x16a1565)
runtime/panic.go:969 (0x1036699)
github.com/aws/[email protected]/aws/session/session.go:318 (0x1974a25)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:50 (0x19749ca)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:66 (0x1974b36)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:86 (0x1974ce6)
github.com/gruntwork-io/cloud-nuke/commands/cli.go:281 (0x199506c)
github.com/gruntwork-io/[email protected]/errors/errors.go:93 (0x16a175e)
github.com/urfave/[email protected]/app.go:490 (0x1691402)
github.com/urfave/[email protected]/command.go:210 (0x169269b)
github.com/urfave/[email protected]/app.go:255 (0x168f5e8)
github.com/gruntwork-io/[email protected]/entrypoint/entrypoint.go:21 (0x1996478)
github.com/gruntwork-io/cloud-nuke/main.go:13 (0x19966a7)
runtime/proc.go:204 (0x10395e9)
runtime/asm_amd64.s:1374 (0x106b901)
error="AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."
it does say “AssumeRoleTokenProviderNotSetError: assume role with MFA enabled”. Do you have MFA enabled?
Yes. I don’t think cloud-nuke can handle it
Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
We also have MFA
My guess though is your ~/.aws/config is missing something
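For example, an assume-role profile with MFA usually looks roughly like this in ~/.aws/config (the role ARN, MFA serial and profile names below are placeholders):
[profile source]
region = us-east-1

[profile nuke-target]
role_arn = arn:aws:iam::111111111111:role/admin
source_profile = source
mfa_serial = arn:aws:iam::222222222222:mfa/my-user
Though as noted above, the Go SDK still needs an AssumeRoleTokenProvider wired up to actually prompt for the MFA code, so cloud-nuke may not handle it even with this in place.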
2021-04-02
Does anyone know of tools that can evaluate function code (e.g. lambda), identify API actions in the code, and compare those actions against a role or set of policy documents to determine whether all the permissions are accounted for?
Getting tired of being surprised after a deploy when an execution fails because the lambda was missing a permission…
I’ve thought about creating a Managed Policy that was overly permissive and assigning Lambdas that policy, then writing something that ran on a CloudWatch rule, evaluated the CloudTrail log history for a given lambda over the past 30 days, and then automatically provisioned a new role with the correct minimum required access and updated the Lambda to use that – but all that seems too magical, and like it would be potentially messy when trying to later update a Lambda with more access
oh, i think that kind of thing does exist… https://github.com/duo-labs/cloudtracker
CloudTracker helps you find over-privileged IAM users and roles by comparing CloudTrail logs with current IAM policies. - duo-labs/cloudtracker
I’ve only tested this with the aws cli, but maybe it’d work if you ran the lambda code locally?
Generate an IAM policy from AWS calls using client-side monitoring (CSM) or embedded proxy - iann0036/iamlive
if i squint, i can kinda see how i might use iamlive to capture mocked api actions from my unit tests, but i’d still need to build something to analyze its output and compare against the role/policy… https://github.com/iann0036/iamlive
glad I wouldn’t have to make it, but again I don’t think it’s the right solution, as you’re counting on the CloudTrail auditing to prune the permissions, which doesn’t really allow you to have ephemeral IaC that you can freely move between accounts
oh haha, @bradym beat me to iamlive
great minds… or something
we’ve started writing unit tests for lambda code and mocking the api calls with moto. so that might be an avenue…
well i do use terraform to deploy the lambda and its execution role, so i could compare the iamlive output and the iam policy from the terraform plan/output…
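As a rough sketch (the file names here are made up), that comparison could be as simple as diffing the action lists of the two policy JSON documents:
# actions the code actually used, according to the iamlive-generated policy
jq -r '.Statement[] | .Action | if type=="array" then .[] else . end' iamlive-policy.json | sort -u > needed.txt
# actions granted by the policy terraform attaches to the execution role
jq -r '.Statement[] | .Action | if type=="array" then .[] else . end' assigned-policy.json | sort -u > granted.txt
# anything printed here is needed but not granted (wildcards like s3:* would need extra handling)
comm -23 needed.txt granted.txt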
hmm, puresec made a serverless plugin for exactly this, before they were bought by palo alto. unfortunately now the project seems inactive, https://github.com/puresec/serverless-puresec-cli/
Serverless plugin for least privileges. Contribute to puresec/serverless-puresec-cli development by creating an account on GitHub.
seeing lots of tools analyzing roles and comparing allowed permissions against cloudtrail usage or iam access analyzer, but basically nothing i can use in a pipeline before a deployment
Palo Alto strikes again – I’m afraid Bridgecrew will fall into this category too
the same guy who wrote iamlive has a number of ‘iamfast’ repos that will inspect your code and produce a policy. It’s very much “beta”
also the minute that you have code that does something like “create cloudwatch stack” I think all bets are off
right, cloudformation becomes the executing agent… i aint doing that. i’ll just have terraform do it. and i’ll see the errors on tf apply
for me this is mostly a lambda that deploys fine, but then gets an event, executes, and dies because we missed a permission. it’s kinda “async” in terms of feedback of the error
thanks for the iamfast pointer… that’s promising, though i don’t want to generate the policy, i want to compare against a policy i already assign… https://github.com/iann0036/iamfast-python
Contribute to iann0036/iamfast-python development by creating an account on GitHub.
ah. I think you’d have to do the diff manually … or write something on top of that
we’ll see where this goes… https://github.com/iann0036/iamfast-python/issues/1
Hi, this is a really cool project! I was wondering if you had ideas on how to do basically what you're doing, but instead of generating an iam policy, compare the discovered actions against a p…
2021-04-05
As lead solutions architect for the AWS Well-Architected Reliability pillar, I help customers build resilient workloads on AWS. This helps them prepare for disaster events, which is one of the biggest challenges they can face. Such events include natural disasters like earthquakes or floods, technical failures such as power or network loss, and human actions […]
Never seen the term Pilot light before. Thanks for sharing.
2021-04-06
We’re looking at sending events to kinesis from our frontend app. The examples in AWS docs for this all tell you to use cognito for this, but it’s not clear to me how/if that makes it any more secure or if it’s just obfuscation? Any thoughts/experiences here?
You mean this doc ? https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-streams.html
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time.
If its this one, its only applicable if you are using aws amplify + cognito in your web-app
Examples and guidance for developing for Amazon Kinesis Data Firehose using the API.
Should have provided the link, this is the example we’re looking at.
Ugh, wrong one… too many aws tabs open
JavaScript code example that applies to browser execution
I see, this uses the federated ids. It’s a common pattern when you want to interact with aws services from the frontend
I guess my main question is, if you setup iam creds correctly with only the exact permissions you need, what is the benefit of introducing cognito?
Well, how would you hook up IAM in your web app? It’s running on the client side (the user’s browser)
Could be okay if you want to do server-side operations
What’s being suggested to me is that since the IAM user would only have very specific permissions… we could just put the IAM creds in the browser. I’m pushing back against that, but need to better understand why that is more risky than putting an identity pool id in the browser.
Yeah, not a good idea to put aws access keys in browser
Yeah… that’s what I’m saying!
id-pool is a common aws design pattern which you should be using for these kinds of operations; it’s isolated
Any chance you can point me at some documentation that would help me understand it better?
Amazon Cognito helps you manage the abstraction of identities across multiple identity providers with the AWS.CognitoIdentityCredentials object. The identity that is loaded is then exchanged for credentials in AWS STS.
https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html
What you basically wanna do is create an un-authenticated role for the cognito identity pool and have that role access the kinesis
The recommended way to obtain AWS credentials for your browser scripts is to use the Amazon Cognito Identity credentials object, AWS.CognitoIdentityCredentials . Amazon Cognito enables authentication of users through third-party identity providers.
Concepts for role-based access control.
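For a rough idea of what the browser SDK does under the hood (the pool and identity IDs here are placeholders), the exchange is roughly:
# exchange the identity pool for an identity id (unauthenticated, so no --logins)
aws cognito-identity get-id --identity-pool-id us-east-1:00000000-0000-0000-0000-000000000000
# exchange that identity id for short-lived STS credentials scoped to the unauthenticated role
aws cognito-identity get-credentials-for-identity --identity-id us-east-1:11111111-1111-1111-1111-111111111111
So the only thing that ships to the browser is the pool ID, and what comes back are temporary, role-scoped credentials rather than long-lived access keys.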
Hope this helps to prevent setting access keys on your frontend app
Fortunately since I first posted it’s been decided we’re going to have the frontend post to a backend server that will forward data onto kinesis. Maybe not the most elegant way, but better by far than aws creds in the frontend!
I’m still gonna do some reading so I can better understand cognito for the next time something crazy like this comes up.
I’m thinking about using a sidecar on my Fargate Service to proxy the database connection.
I’m thinking this helps address two issues:
- Simpler configuration - the applications can always use localhost:<port> to connect to the database (though of course connection details could still be set through an env var just in case)
- Security - nothing (besides code reviews) is stopping an application developer from printing the database connection details in their application. But if the auth is associated with a sidecar that they don’t have access to, that risk goes away.
I tried searching but I don’t see people talking about this. So I’m thinking either this is a bad idea or it’s just so obvious that I should have done it sooner…
I think that is a bad idea, it is yet another hop in between the db and the app and something else you have to maintain
you can use RDS Proxy to do some connection handling, but I guess the reason you have not seen any results for your question is that you are already in a container environment where you can pass ENV variables, deploy easily, etc
no reason to over complicate things
I think it’s trading complexities rather than adding and the extra hop is nominal. My application priorities are stability and security, not saving nanoseconds or even milliseconds for that matter.
to that point, I’m mostly trying to address the latter concern of DB connection details being able to just be printed. But I guess to that point I could just be using IAM Authentication and locking down the Role.
This change would exchange security for extra maintenance overhead and reduced developer autonomy.
You have to trust developers to some extent. How paranoid you need to be depends on your company industry and regulatory environment. If you can rely on existing controls, it will make your life easier. For example, your PR process will already ensure nobody adds this sort of code.
yup, which is why I’m exploring the options. I see the pros/cons, but it’s fun to talk about them.
wearing my developer hat, I think it’s cool to not have to think about db connection details and just be able to use localhost
all the time and know that it’s going to be there.
but cool doesn’t necessarily mean it’s good
I don’t think that a PR process ensures that a line like this couldn’t be entered - I think it’s pretty easy to overlook a print/log statement.
and I’m not even accounting for bad actors, I think this could easily be a whoops
I don’t think there is any added convenience from the developer with this approach. If you read the hostname from an env var, that’s no harder than a hardcoded localhost
fair
You are right that the PR process is not perfect. Governance never is. But it’s a lot cheaper and often Good Enough. The PR process protects against all developer bad actors. The sidecar approach protects only a single compromise. For example, a developer could write code to dump specific data to the log file, or even POST it to a remote system.
(For example, a product I work on has functionality where our customers can add a webhook subscription for all changes. A developer could write a migration that adds their own webhook subscription for a customer and exfiltrate data that way. We trust our PR process to detect this.)
Setting up a ProxySQL sidecar container to manage connections in AWS ECS (Elastic Container Service).
was linked to this by AWS Support - I asked them the same initial question
I guess to that point I could just be using IAM Authentication and locking down the Role
Are there issues for you with using this solution?
nope, I just haven’t done it before.
Looks like these are the limitations you need to watch out for https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Limitations
Mainly
• Instance type and max connections
• Are you using a CNAME
• I don’t know if you mentioned the RDS type, but if Postgres and you are using the master user (hopefully not), the login method may need to change for you
Authenticate to your DB instance or cluster using IAM database authentication.
Using PG and it will not be the master, we’d be creating IAM Roles and DB Users accordingly
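For reference, a minimal sketch of what the app (or a human) does with IAM auth - the hostname, db name and user below are placeholders:
# generate a short-lived (15 minute) auth token for the IAM-mapped DB user
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.abc123.us-east-1.rds.amazonaws.com \
  --port 5432 --username app_user --region us-east-1)
# use the token as the password; SSL is required for IAM auth
PGPASSWORD="$TOKEN" psql "host=mydb.abc123.us-east-1.rds.amazonaws.com port=5432 dbname=app user=app_user sslmode=require"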
That solution gives you the separation of concerns, if that’s what you are looking for, and removes the extra hop
and IAM auth is not recommended for production workloads, only for humans
you’re referring to
The maximum number of connections per second for your DB instance
correct?
there is a limit of 200 connections per token
right on, trying to find the aws doc/reference for that, do you have it?
Authenticate to your DB instance or cluster using IAM database authentication.
Mysql : Use IAM database authentication when your application requires fewer than 200 new IAM database authentication connections per second.
got it, and postgres is just based off instance limits then?
the recommended way to do this for apps is to use the Secrets Manager auto-rotation
Do you have some reference to that ^ for the OP
I do not know, but if MySQL has that limit I would say it’s the same for Postgres, since MySQL is always ahead of Postgres in AWS support
Automatically rotate your Amazon RDS database secrets by using Lambda functions provided by Secrets Manager invoked automatically on a defined schedule.
2021-04-07
2021-04-08
oh cool - autoscaling windows instances is finally worthwhile
2021-04-09
does anybody have a lookup table (or script) for all the RDS instance types and their core-processor counts? This is for calculating licenses.
an AWS cli option like aws rds describe-instance-types would have been great.
A free and easy-to-use tool for comparing EC2 Instance features and prices.
hmm idk. your best bet might be scraping https://aws.amazon.com/rds/instance-types/
Amazon RDS provides a selection of general purpose and memory optimized instance types to fit different relational database use cases.
yeah i know, it’s a little unfortunate that i gotta resort to that but totally, thanks.
nah i was wrong. you want this.
aws rds describe-orderable-db-instance-options --engine postgres
You can also add this to only get the engine version and DBInstanceClass
--query 'OrderableDBInstanceOptions[][EngineVersion, DBInstanceClass]'
add this to only get a specific version’s instance type
--engine-version 13.1
@RB you are correct. came back with a solution before i could!
I was looking at getting all of them with a script like this:
REGION=us-east-1
# list every engine/version pair, then the orderable gp2 instance classes for each
aws rds describe-db-engine-versions --query="DBEngineVersions[].{Version:EngineVersion,Engine:Engine}" --output=text --region "${REGION}" | while read engine version;
do
  for instance_class in $(aws rds describe-orderable-db-instance-options --engine "${engine}" --engine-version "${version}" --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" --output text --region "${REGION}");
  do
    echo "${engine} ${version} ${instance_class}"
    # dump the processor feature options for each class
    aws rds describe-orderable-db-instance-options --engine "${engine}" --db-instance-class "${instance_class}" --output=json --region "${REGION}" --query="OrderableDBInstanceOptions.AvailableProcessorFeatures"
  done
done
yowza. looks like it could work, but at that point i’d just use python.
that wall of text is terrifying
i haven’t run it yet but i think that will give you everything
i am getting null for the processor stuff though. might need to tweak my query a bit. left to the reader!
haha nice nice
you could dump all the json and convert the json to a csv. might be easier
yeah
and i agree, when you start looping over services, a pythonic approach will save your sanity
this approach works but i’m looking at this doc: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#USER_ConfigureProcessor.CLI.Example3
and all the queries i’ve done come up with null for the AvailableProcessorFeatures.
Are you seeing anything different?
Determine the computation and memory capacity of an Amazon RDS DB instance by its DB instance class.
incredible! thanks for the input, didn’t realize about describe-orderable-db-instance-options. The max-storage & max-IOPS info is really nice.
Was there any other table/lookup scheme that actually expanded the instance classes themselves (ie. db.m4.4xlarge) to actual core counts/GB? Again, that was where I was trying to help with my licensing true-up.
https://github.com/vantage-sh/ec2instances.info
And https://github.com/vantage-sh/ec2instances.info/blob/master/rds.py
Might give you a great start as well. Really cool site, and the source is available.
Amazon EC2 instance comparison site. Contribute to vantage-sh/ec2instances.info development by creating an account on GitHub.
Folks, I’ve just released a small tool to run AWS Access Analyzer Policy Validation against all your IAM Policies at the account level. https://github.com/z0ph/aa-policy-validator Let me know if it helps!
Validate all your Customer Policies against AWS Access Analyzer - Policy Validator - z0ph/aa-policy-validator
2021-04-11
2021-04-12
Hi, I’ve removed a subnet from a Beanstalk environment and rebuilt it. Instances are no longer being launched in it, but the load balancer from that environment is still using network interfaces belonging to the removed subnet. How can I fix this? Thanks!
To edit EB subnets, you need to recreate the environment from scratch, I believe
Q: I’m trying to optimize RDS Patches (db minor engine upgrades). Is it true if I take a snapshot BEFORE the patch operation (say, 1 hour before), it would reduce the time it takes for the initial pre-patch snapshot???
Q: I’m trying to build a simple PoC with CDK Pipelines and is there really no way to use GitLab as a source? Am I missing something obvious here?
2021-04-13
Question: API Gateway VPC link subnets can be public or private, but if the endpoint is public, should I use a public or private subnet? Using both subnets I can reach the endpoint, and using both I can use a test endpoint outside the VPC, and the docs… well… the docs are not the best at explaining this part
@jose.amengual your message is very timely for me. for the past few weeks i’ve been working on an HTTP API Gateway that is intended to be private. I say HTTP specifically because most of the documentation I have seen centers on REST APIs.
That aside, I am using VPC Link to attach the API to a private, internal NLB. In this case, the NLB and the EC2 instances are in private subnets. Also, I’m using private subnet for my VPC Link.
As in your case, I can access my API from the internet.
However, I have a custom domain for the API endpoint which is a subdomain on a public hosted zone. So if you know the endpoint, you can resolve it. Even if you don’t have a custom domain, the API GW still has a default endpoint on amazonaws.com which is also publicly resolvable.
So i explored using VPC endpoints. This may be the solution for you if the API is intended to be used internally only by other AWS resources like EC2, Lambda, etc. But note that whichever subnet you create the API GW endpoint in, *all* services in that subnet will use it if they need to access API GW. It might be OK but it also might not be what you want. Also, VPC endpoints are great for keeping traffic inside your VPC. But in my case, I’m trying to expose a service for applications outside of my VPC but only on the private network (peered VPCs).
My next iteration on this is to try a custom domain in a Private Hosted Zone. The intent being the endpoint will not be resolvable outside of my VPC and any peered VPCs/networks.
If you come up with a solution that doesn’t involve a private hosted zone, I would be happy to hear it!
interesting
in my case the alb with the api is public
and the api gateway api is public too
the only reason to use the api gateway is to transition old apis to the new apis so we need to route some stuff to a lambda and such
ahh i see. so in your case, i’m not sure. but if you intend for the API to be public, i would just go for the easy win and put the VPC link in the public subnet.
so in my case I could get away with not using a VPC link, BUT it is required if you want to use a load balancer endpoint
yeah yeah.
makes sense. So yeah, i see the need for VPC Link.
the weird thing is this: if I send all the traffic to an external public endpoint, it works
so I was trying to figure out why
ahh is the ALB in the public and private subnets? otherwise it won’t be able to route to anything on the back end in the private subnet.
since it’s a public alb there is no SG to worry about, but since I was using the VPC link, what would be the SG to use?
it gets confusing
yeah yeah! :sweat_smile:
seems like the SG on your ALB should allow traffic from 0.0.0.0/0 on the ports (80 and 443, perhaps), but it should also be in the public and private subnets.
the ALB is internet-facing in front of the EKS cluster
it’s an ALB-controller
yes, it is 0.0.0.0 since it’s public
i see. is it also in the EKS security group? so, subnets and security groups should all match up.
ok i misunderstood. the ALB works but the API pointing at the ALB does not. is that correct?
api gateway pointing at the alb does not
got it
but if I add an integration to a ngrok url pointing to my computer it works just fine
your set up might be different but here’s the TF code i am using for mine:
resource "aws_apigatewayv2_vpc_link" "link" {
name = "${var.name}-${var.environment}"
security_group_ids = var.security_group_ids
subnet_ids = var.subnet_ids
tags = merge(var.tags, {
"Resource" = "aws_apigatewayv2_vpc_link"
})
lifecycle {
ignore_changes = [tags]
}
}
resource "aws_apigatewayv2_integration" "api" {
api_id = aws_apigatewayv2_api.api.id
description = var.description
connection_id = var.vpc_link_id
integration_type = "HTTP_PROXY"
connection_type = "VPC_LINK"
integration_method = "ANY"
tls_config {
server_name_to_verify = var.server_name_to_verify
}
# For an HTTP API private integration, specify the ARN of an Application Load Balancer listener
integration_uri = var.integration_uri
}
resource "aws_apigatewayv2_route" "api" {
api_id = aws_apigatewayv2_api.api.id
route_key = "$default"
target = "integrations/${aws_apigatewayv2_integration.api.id}"
authorization_type = "CUSTOM"
authorizer_id = aws_apigatewayv2_authorizer.api.id
}
There’s more but these are the pieces that use VPC Link top the ALB.
I would say, maybe check your integration_uri. Mine looks like this:
"integration_uri" = "arn:aws:elasticloadbalancing:us-east-1:ABCDEFGHIJKLMN:listener/net/xyz-qa-api/2484cd5ecaba5155/d33d18f01a965145"
But i am also using an HTTP API not a REST API so your mileage may vary.
same url for me, it’s the https listener arn
did you define payload_format_version = "1.0" for the integration?
mmm default is 1.0
checking
i get the same. not defined but that is how its configured:
"payload_format_version" = "1.0"
I’m starting to think that the ALB is meant to be internal and not internet facing
hmmm, could be. however in my config the NLB is indeed internal. but still accessible from the API GW!
yes trough the VPC link
yep
I bet if I add an integration to point directly to the ALB public url it will work
what is your config for the SG of the vpc link ?
do you have inbound rules and outbound ?
or just inbound ?
it has both inbound and outbound.
in:
Type         Protocol  Port range  Source      Description
HTTP         TCP       80          10.0.0.0/8  –
HTTPS        TCP       443         10.0.0.0/8  –
out:
Type         Protocol  Port range  Destination  Description
All traffic  All       All         0.0.0.0/0    –
what about your HTTPS hostname?
are you using a wildcard cert?
i have that set to match the name of the site hosted at the ALB and yes, using a wild card cert
in my case I have multiple services behind it…
so the HTTPS hostname = *.pepe.api.thatkeepscrashing.com
hmm i wonder if that’s the catch? meh, not sure. but wondering if the API “hostname” needs to match the ALB hostname explicitly?
this is the part i am referring to:
tls_config {
server_name_to_verify = var.server_name_to_verify
}
but why the ALB hostname ?
on the integration
the api gateway should pass the Host header from the request
ahh not really the ALB hostname, but the name of the site that matches the cert attached to the ALB
yes, that yes
but that will happen with a plain ALB
I mean it’s the same requirement as for a public ALB
i would imagine you could not verify the hostname with a cert
CN = cert = host name
correct
but it should still route
so I just created a new VPC link manually
using private subnets
same SG ( after I added the outbound rules)
and now the ALB is responding
the old integration was using public subnets
ahh got it
but now I wonder if the SG was the problem all along
I’m going to test it and see
and now even works with the custom domain name too
sweet!
Thanks @managedkaos
yeah no problem! it might be a week or so before i have any updates on my situation but i’ll keep you posted
for your use case you can have VPC endpoint policies to restrict access to the private api endpoint
I have used it in the past to restrict access to only certain VPCs, for example
and FYI you can’t use a custom domain name in the private API
agree, we are prepared to skip the custom domain name on a private API. but i don’t know all the VPCs that will need to access the endpoint. so setting up an endpoint with rules will be difficult. This would be an API for use in an enterprise with many (hundreds) of AWS VPCs in addition to networks in other clouds.
ahhh I see ok
we really wanted a vanity URL but have gotten past that part. we just want it to be private now
could you proxy to a LAMBDA authorizer to do authorization for the endpoint?
yes, we have an authorizer in place. so that will be the gate for now.
I see ok
but still, say the endpoint is exposed and gets DDOS’d. how much of that do we have to pick up in cost of resource use vs AWS?
i may be overthinking but i would rather have the endpoint unresolvable on the outside
DDOS? you can set Throttling in that case
indeed. like i said, forgive me for being paranoid
but those are good concerns
@managedkaos you were creating an Internal API?
with version 1 of api gateway?
correct?
hey! no, internal api but with version 2. i got it worked out eventually.
for now the endpoint is public. we have a lambda authorizer attached
ahhh I see, ok
I’m building an internal api now with a vpc endpoint and such
and now I need to change from an ALB to an NLB since it does not seem to support ALBs
yes. for our service we have an ALB for the web interface and an NLB for API interfaces. i suppose it can get pricey if you have both.
instead of doing the VPC endpoint, i ended up using a VPC link to the NLB. that eliminates the need for the endpoint.
but you can’t do a vpc link to an ALB, right?
for an internal REST api
hmmm not sure. i didn’t try.
don’t think you can. this doc just specifies HTTP APIs https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vpc-links.html
Learn about working with VPC links for HTTP API private integrations.
but using this integration method you might be able to do REST -> ALB https://docs.aws.amazon.com/apigateway/latest/developerguide/setup-http-integrations.html
Learn how to configure HTTP integrations in API Gateway.
the last link is what I’m doing right now with a public api
in this case it’s for an internal api
2021-04-14
anyone seen this ? the ssm is blowing up itself
in /var/log/amazon/ssm
572M download
Holy cow
Are there errors in there?
Well we use a sort of logger for root access on this machine, a wrapper on top of sudo: rootsh
I can’t look into it because it needs space in /var/log
can you use the EC2 serial console to access the server?
Something i hardly ever need, but i was confused today about some changes with peered VPCs across regions. $.20 later i had my test and proof case for what i thought was wrong. It’s a nice feature. https://www.reddit.com/r/aws/comments/mr6v4w/have_to_give_a_nod_to_reachability_analyzer/
I have 99% of my aws VPCs in terraform, but something had changed recently that was stopping packets from us-west-2 to us-east-1. I know the…
2021-04-15
Can I get upvotes on this? (just :+1: on the PR comment to help prioritize it for review and merge.) Lol, it’s not my PR, but I want the feature :sob: https://github.com/hashicorp/terraform-provider-aws/pull/18644
Terraform AWS Provider to include trusted_key_groups in cloudfront distributions
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
I feel like we need a weekly thread with these…
“Hey everybody, here are the bugs that this community wants upvoted, let’s swarm em with :+1:s!”
There are 3,700 people in this Slack, so I feel like that might be useful.
the contributor’s lament
Yeah Matt, that’s a good idea. I always thumbs up such issues, even if they don’t touch my current infra.
Upvoted. and the thread idea is such a good idea.
This reminded me of a terraform PR I’ve had for almost 2 years, with tests and documentation, no amount of pings have helped. The infra it was meant for is long gone
@Erik Osterman (Cloud Posse) let’s chat about a weekly thread or announcement post for a list of Terraform or open source issues to upvote.
I’d be happy to put together a simple spreadsheet if you could help with making it a weekly post in #announcements or something.
Success
w00000!!!!
thank you all!!!
Hello guys, sorry to bother you, but I have one question regarding a connection between two AWS accounts. I want to have a connection from an EC2 instance on Account A to an RDS on Account B. I have set up AWS PrivateLink between them. I have created everything needed, like the endpoint service on Account B (service provider), set up some security groups etc., and on Account A (as consumer) I have set up an endpoint which makes a connection request in order to access Account B. Everything looks great because I can telnet to the specific MySQL port, but I am getting
Host '172.xxxx' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
In the beginning I thought that the problem was with the specific MySQL instance, but when I spun up a new instance on Account B to check whether the connection works directly between the instance and RDS, it works. Do you know what the problem could be?
2021-04-16
2021-04-17
aws-nuke disables console account password, any idea how to prevent it?
Ok. Found the solution. Put it under filters:
IAMLoginProfile:
- "<user-name>"
also put the following to be able to switch accounts:
IAMRole:
- "OrganizationAccountAccessRole"
IAMRolePolicyAttachment:
- "OrganizationAccountAccessRole -> AdministratorAccess"
2021-04-18
Hi, after you initially store objects in an S3 bucket - let’s say 20GB of data in March - do you pay anything additional in April if you don’t retrieve any data from that bucket? I can’t see that clearly stated anywhere, but I would say no…
I think you didn’t understand my question
I was referring to this one https://aws.amazon.com/s3/pricing/ For storage fees, maybe I didn’t get the question
Find detailed information on Free Tier, storage pricing, requests and data retrieval pricing, data transfer and transfer acceleration pricing, and data management features pricing options for all classes of S3 cloud storage.
If you look, data transfer from the internet into S3 is free… so there is no fee for that. Besides the initial PUT requests and storage, there aren’t any transfers from S3 or GET requests…
At least in example that I mentioned…
My question is simple. After you initially store 20 GB and pay for that at the end of the month, will you pay again next month for that storage if it’s not retrieved at all…
You will have to pay for storage. If there are data transfers, there will be additional cost incurred. Try this to calculate the exact cost that will be incurred: https://calculator.aws/#/
AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.
Well actually I did
Wouldn’t it be cumulative, if it is how you are saying?
that part isn’t clear to me to be honest
If you could explain your question further, I can try and help answer.
lets say you store 20 GB each month
in march you will have 20GB. in April you will have 40GB and so on… Do you pay for storage that is just sitting there and you don’t access it, or do you just pay when you initially store objects? Generally that’s what I asked in the initial question as well.
Oh I see now. You pay for storage sitting there. You get charged for 20GB in March and 40 GB in April. This is further clarified here : https://aws.amazon.com/s3/pricing/ where they say You pay for storing objects in your S3 buckets. The rate you're charged depends on your objects' size, how long you stored the objects during the month, and the storage class
well that exact line doesn’t seem clear to me
that exact line led me to post the question here
and in that case the calculator would be misleading, would you agree?
and basically you would see doubling s3 costs each month
It would be helpful if you could share how you think the calculations are done. Here’s how I interpret this: March: 20 GB ≈ $0.46, April: 40 GB ≈ $0.92 (at $0.023/GB-month), assuming 20 GB of storage was added in April.
the amount of stored data over one month is the same
To simplify question. Do you pay for data that is just sitting there without any retrieval?
by your example you say yes, is that correct?
Yes, you pay for data just sitting there, even if you don’t access it
Thanks again
Well calculator is useless in that case
Go for infrequent access if you rarely retrieve it. Better yet, turn on Intelligent Tiering and let it drift to one of the Archive Access tiers. The total cost generally consists of how much you store + the object retrieval fees for whatever storage class you are looking at
yeah, that makes sense. I just wanted to perform some initial calculations based on some inputs, and I ran into the calculator, which is misleading. Thing is, I still don’t know how frequently the data will be accessed and how it will be processed.
You definitely pay for the data sitting there on AWS servers/SANs as it costs them resources to host. They do give you 5 GB free, “As part of the AWS Free Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5GB of Amazon S3 storage in the S3 Standard storage class; 20,000 GET Requests; 2,000 PUT, COPY, POST, or LIST Requests; and 15GB of Data Transfer Out each month for one year.”
In your case it sounds like you have even more reason to go with Intelligent Tiering - automatic cost savings for unknown/changing access patterns
It’s unknown at this point; the data will not be stored before I find out how it will be processed. So less worry about choosing the proper tier and data retention/lifecycle.
I would just expect that their out-of-the-box calculator would take that into account when it gives a 12-month estimate
Ultimately, it is probably not worth your time to forecast the math that directly. Paying at most $23/TB/month, and maybe just $4/TB/month on an archive tier
What you mean?
Ya probably should have a better calculator; just know that you want it in standard if you have lots of requests/retrievals
$.023/GB/month = $23/TB/month
its not that simple
ya that’s why their calculator is less than desirable for you; cost is variable based on retrievals during the period
we have options where costs dictate design, and the opposite
but it isn’t
that’s just one parameter
yep, exactly; the $23/TB/month is like a fixed constant, object retrievals is another parameter, etc.
and if you take into account that the upload of the same data could be done in a few requests, or in millions of requests
it can be huge difference on bill
@Milosb you’ve been answered in the first reply
I think yes, you also pay storage fees in addition to the data transfer fee
S3 is really cheap storage both for storing and for putting data there. Calculating how much it will cost to store 20GB is not worth the time spent on calculating
There are many nuances in fees though which you should be aware of, for example, watch out for intelligent tiering and IA because for small objects you may be paying a lot more than you can save on it.
@Marcin Brański Maybe, maybe not. I got an answer to a question that I didn’t ask, and that made me think that either I didn’t ask properly or the question wasn’t understood well, since I didn’t mention the data transfer fee at all. Cheap is a subjective term. I took 20GB as an example. But thanks guys for clearing it up, it was really helpful.
2021-04-19
Has anyone seen AWS CloudSearch in use in production and know the dos/don’ts around it vs AWS Elasticsearch?
Anyone use a fargate container for remote ssh with visual studio remote ssh plugin? Seems promising to offer a quicker remote dev environment than instances if all i want is containers anyway.
That’s what GitHub Codespaces or GitPod do. Cloud9 is still using vanilla EC2s as far as I know.
I really like Gitpod as it’s the fastest option. You can also self-host it, if that matters to you
Yeah, I’m very aware of codespaces, gitpod, and coder. However, I’m not using GitHub at my new role, so Azure DevOps only.
Codespaces also takes forever to build some of the larger images. Was thinking just doing SSH with container would give me the same experience in a way.
Gitpod SSH is really exciting, but again no Azure DevOps support, and SSH is beta/Q1 2021 with no release date announced.
2021-04-20
Hi channel.
AWS added support for using Docker images for Lambdas not so long ago, but unlike other uses of Docker (ECS, Batch), Lambdas can’t access cross account ECR repos. This is a pain if you, like me, like to build your artifacts in a tool account and then pull them in the customer facing accounts.
If you have time, please consider upvoting this proposal to add the feature: https://github.com/aws/containers-roadmap/issues/1281 :)
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
2021-04-21
Does anyone know of an ECS terraform module or a lambda function that handles events like deployment failure/task stopping and sends an SNS/Slack notification?
Contribute to maasglobal/ecs-event-notifications development by creating an account on GitHub.
2021-04-22
I have VPC -> IGW -> PublicSubnet(Application Load Balancer) -> PrivateSubnet(ECS Task)
The thing is, I’ve got several components in these various ECS tasks that want to talk to other tasks in the private subnet.
Does anyone recommend public load balancer + internal load balancer combination, or another approach?
https://sweetops.slack.com/archives/CCT1E7JJY/p1619127317235400 @managedkaos responding here for future archive/context
Basically I have some ecs tasks with 3-4 containers each. They might need to talk to another task’s container. They also might need to accept traffic from an ALB in the public subnet.
Trying to find the simple way to ensure they can talk but ideally not care about which task is running in the case of having multiple instances of the task for scaling.
@sheldonh all you should need to do is update your security groups to allow communication in the private subnets. I usually do that by including the ECS SG as a source in the other resource.
Can you give more detail on this part: “several components in the this various ECS tasks that want to talk to other Tasks in the private subnet”?
I’ve seen people use something like App Mesh, and other non-AWS implementations (Consul, etc.) to achieve such requirements at scale. Not sure how big the environment you’re thinking of here.
yeah i think some sort of service discovery might be needed if you are considering inter-task communication. if all the tasks are in the same service, you might be able to achieve that pretty easily with container names similar to how docker-compose does it..not sure though!
as for the ALB, you can:
- put the ALB in public and private subnets. The ALB will be accessible publicly while the tasks will live in the private subnet.
- Attach a target group with the tasks added to it using the IP of the task
- Make sure the service has a security group that allows traffic from the ALB’s security group so the tasks can receive traffic (rough sketch of the security group rules below)
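As a rough sketch of those security group rules with the CLI (group IDs and the port are placeholders; the same thing can be expressed as security group rule resources in terraform):
# allow the ALB's security group (second id) to reach the tasks' security group (first id) on the container port
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --source-group sg-0fedcba9876543210
# self-referencing rule so tasks in the same security group can talk to each other
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --source-group sg-0123456789abcdef0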
is service discovery for tasks or the containers in the task? And is it pretty easy to get going with, or does it add a lot of complexity?
I was hoping for something as easy as a private dns type of entry or elastic ip perhaps
i have not added/used service discovery with ECS so i can’t speak to the complexity. I know its an option though. You can decide how much you want to invest: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
indeed, if your tasks are running they have an IP depending on where they are deployed. i don’t think they will have a DNS entry associated though. It would be up to you to capture their IPs and then figure out how to access them from the other tasks. which is kind of what service discovery does for you.
Your Amazon ECS service can optionally be configured to use Amazon ECS Service Discovery. Service discovery uses AWS Cloud Map API actions to manage HTTP and DNS namespaces for your Amazon ECS services. For more information, see What Is AWS Cloud Map?
I just browsed through this doc… might be helpful… https://aws.amazon.com/blogs/architecture/field-notes-integrating-http-apis-with-aws-cloud-map-and-amazon-ecs-services/
This post was cowritten with Preeti Pragya Jha, a senior software developer in Tata Consultancy Services (TCS). Companies are continually looking for ways to optimize cost. This is true of RS Components, a global trading brand of Electrocomponents plc, a global omni-channel provider of industrial and electronic products and solutions. RS Components set out to […]
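For a rough idea of the moving parts in ECS service discovery (names, IDs and ARNs below are placeholders), the Cloud Map setup is roughly:
# create a private DNS namespace for the VPC (Cloud Map)
aws servicediscovery create-private-dns-namespace --name internal.local --vpc vpc-0123456789abcdef0
# create a discovery service that maintains A records for registered tasks
aws servicediscovery create-service --name api --namespace-id ns-abcd1234 \
  --dns-config 'RoutingPolicy=MULTIVALUE,DnsRecords=[{Type=A,TTL=10}]' \
  --health-check-custom-config FailureThreshold=1
# point the ECS service at it; tasks then resolve as api.internal.local inside the VPC
aws ecs create-service --cluster my-cluster --service-name api --task-definition api:1 \
  --desired-count 2 --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0aaa1111],securityGroups=[sg-0bbb2222]}' \
  --service-registries 'registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example'
The terraform equivalents are the aws_service_discovery_* resources plus a service_registries block on the ECS service.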
Would you all recommend the external + internal load balancer pattern over service discovery?
I’m learning a bit about discovery, and it seems to be what I need if it can handle multiple nodes of the same service. I’m not doing microservices though, and want to make sure that using it in a non-microservices environment is still useful for easing communication between fargate tasks internally.
Bonus points if anyone has an example in Terraform of setting that up.
2021-04-23
2021-04-26
Please, I need help urgently. I ran ScoutSuite to scan my AWS account but it hit the API rate limit. Can this somehow break my AWS account services and communication between them? Thanks
Have you read this: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html
Amazon EC2 throttles EC2 API requests for each AWS account on a per-Region basis. We do this to help the performance of the service, and to ensure fair usage for all Amazon EC2 customers. Throttling ensures that calls to the Amazon EC2 API do not exceed the maximum allowed API request limits. API calls are subject to the request limits whether they originate from:
Oh, I am good it’s only limited to:
• A third-party application
• A command line tool
• The Amazon EC2 console
This is also useful… https://github.com/nccgroup/ScoutSuite/wiki/Handling-Rate-Limiting
Multi-Cloud Security Auditing Tool. Contribute to nccgroup/ScoutSuite development by creating an account on GitHub.
If you did hit rate limiting during a scan, it won’t have caused a permanent issue. Other applications using the API should also respect rate limiting and retry with a backoff. If you’re running this as an automated job, run it with a reasonable --max-rate.
Thanks guys, I was just worried if I screwed some AWS services with this…
If you did… It’s too late now and the request “buckets” will have been replenished.
Great, thanks
@Tim Birkett it looks like there are separate buckets for Non-mutating actions and Unfiltered and unpaginated non-mutating actions, so basically with all describe and get actions, so I think that I’m good
I have a single container that will need to do some pass-through traffic in AWS. All my current architecture is ECS Fargate. The communication will be on a ~1000-port range randomly chosen by the caller, passing traffic through. This container would need to take in traffic on this range of ports, managed by another service, and pass it through to do its magic.
I’m not sure if I can do that with containers. Is that possible to do with ECS Fargate or such? If not, it seems I’ll have to have a single EC2 server in the entire setup which I was hoping to avoid
very hard to give you any pointers, but the client port usually is not important since it’s on their end, so I do not know why it is relevant in this case?
One task decides the port and then the other container is supposed to accept a port in a large range. Most containers have specific port mappings, so it’s unclear if it’s possible to have a container in a task accept a wide range of ports inbound. A container seems limited to 100 port mappings per the docs, and it’s not clear if there’s any alternative.
I see ok
if you had a proxy layer in front you could do it
something like haproxy or envoy
you could add it as a sidecar of the fargate task, or as another service
that is a very particular usecase
Any docs or examples? This is new to me so anything more specific to help me get the concepts and know where to look would help
frontend www.mysite.com
    # listen on a whole port range, plus 443 for TLS
    bind 10.0.0.3:80-32221
    bind 10.0.0.3:443 ssl crt /etc/ssl/certs/mysite.pem
    # force HTTPS and route /api/ traffic to its own backend
    http-request redirect scheme https unless { ssl_fc }
    use_backend api_servers if { path_beg /api/ }
    default_backend web_servers
on Envoy you will need to check the docs and see if it is possible
@sheldonh curious as to the reason the client needs to choose a random target port
It’s connectivity service stuff, tunnels etc. Not a normal web service, so it needs to accept a range. It seems like I might need to stick with an EC2 instance for this unless you think a fargate task can handle a long-running service with many (say 1000 at least) ports.
Let’s assume I do this with an EC2 instance instead to allow easier vertical scaling. 1 server only right now. I’m focused on the minimal “best practice” way to implement this currently, as I don’t have any clustering with this service at this time. Must be a single server for now.
Any best practice recommendations esp regarding:
- Do you normally put a LB in front of any exposed instance, even if it’s just a single and not scaling at the time?
- Do you put the instance in a private subnet and use LB to point to it, or default to public subnet to avoid NAT Gateway load if it’s going to be accepting inbound calls, and just not give it a public ip?
- Or do you just leave the server as directly accessible on the range of ports?
regarding best practices:
• Yes, even if i only have one EC2, i use an ALB for the endpoint. If I need to change the EC2 or scale up, I don’t have to change anything that points to the endpoint
• Yes, EC2 in private subnet to keep it (more) secure. and ALB in both so it can route the traffic.
• I avoid leaving the server directly accessible if at all possible.
but for your use case you will not be able to use an ALB
That’s what I was trying to figure out. So port range (not traditional web server), means this changes a bit. Normal single port stuff and I’d just use an ALB to a private subnet with ecs tasks I guess. But port ranges, it will be better to just assign an elastic ip/route53 to this, it seems. Just newer to the LB side and trying to vet I’m not missing anything obvious there
For ec2 instance with a specific port only then ALB makes sense to put in front.
Is there any reason to put in private subnet if the instance security group doesn’t have any inbound traffic other than from alb?
It adds more nat load so just checking if it’s a “go to” to always try to put in private subnet when not directly accessed externally, even if no public ip assigned.
you need to build the LB layer yourself, that is why I said that you will have to use something like haproxy in front of the instance(s) that will receive the traffic
your instances stay in the private subnet
the haproxy will have a public and private subnet
I’m not very familar with haproxy. I am thinking it might help in future, but the “MVP” of getting something up I might just need to simplify this. That’s a lot to figure out with little time.
haproxy is very easy to configure to do that
maybe 5 lines of configs
early ELBs were haproxy (rumour has it)
easy is relative. Is there a terraform module that makes this easy? My background isn’t networking, so these are newer concepts to me. I have to accept a range of inbound ports, and the easiest way is to allow inbound on this range.
I don’t use k8s or any of that, just fargate tasks, so what’s easy to you might be science to me
but that is the issue, you need some sort of software that can do that, otherwise your app will have to do it for you
so you can use the cloudposse ec2 instance module and create userdata to install haproxy
The service running is designed for connection management. It’s just that I’m trying to figure out whether best practice would be for this to be behind an LB, period, or if I just let it do its work directly.
If installing haproxy in user data is all it takes then maybe I could figure out. The docs seemed pretty detailed/complicated.
but if you need to listen on a range of ports, you can not use an aws load balancer
Found this article too. https://medium.com/@onefabulousginger/fully-automated-infrastructure-and-code-deployment-using-ansible-and-terraform-2e318820fe0c This is useful to know about. I’m basically replacing LB usage based on service limitations with my own self-managed LB alternative.
Ansible and Terraform can be used together to build/deploy any docker images, and create the AWS ECS services that will host it by using…
Is there an absolutely critical reason, in your mind, why I have to have it run behind haproxy instead of accepting inbound traffic directly (if the tool is a connection management tool)?
That’s kinda the piece I think I need to understand. I don’t mind evaluating for improvements, but at stage 1 does this add more complexity than value for a single instance?
if your app can listen on a range of ports then you do not need it
if your app does not, then you need a layer to do it for you
and that layer behaves like a load balancer, but it has the capability to listen on a range of ports and then forward to any type of endpoint (fargate, ec2 instance, alb, etc)
Perfect. For now that’s what I’ll do then. The open source project https://github.com/jpillora/chisel this is what I’m using and it accepts a wide range of ports based on configuration. I think this means for my use public subnet directly should be used.
The only “abstraction” I can think of is using route53 to simplify calling it directly regardless of ip changes.
A fast TCP/UDP tunnel over HTTP. Contribute to jpillora/chisel development by creating an account on GitHub.
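e.g. something like this to keep a stable name pointed at the instance’s elastic IP (the zone ID, record name and IP below are placeholders):
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "tunnel.example.com",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}'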
if you have a network person in your company I recommend to go and talk to them to get a better idea
is it currently not possible to query S3 objects by their tags? I see mention of using the resource explorer for tags on the bucket level, but I don’t think I’m seeing anything on the object level.
Hi all! Just posted a new article on AWS cost reduction: https://corey.tech/aws-cost/
By Corey Gale, Senior Engineering Manager @ GumGum
The idea of combining AWS usage data with revenue sources is great, I would love to hear more about that part
It always amazes me how difficult it is to track costs in AWS. I agree with @Alex Jurkiewicz, tracking usage along with revenue is a good way to look at this. For Cloudrail, we track spend at the account level, per service, etc., but not at a per-customer level.
This started coming up recently - looking at how much a given customer costs us (in Lambda, RDS, etc.) so we can later plot that vs revenue from given customers. Hopefully we’ll be able to share our learnings as well.
Great writeup @Corey Gale!
Thanks @Yoni Leitersdorf (Indeni Cloudrail) & @Alex Jurkiewicz, appreciate it! We’ve been ingesting our CUR data into Snowflake via Snowpipe, which provides the ability to join against everything else in our data warehouse. Once in Snowflake, we use a custom-built Looker Explore for insights that incorporate said business data (that’s where our CPM/IHMP metrics are calculated). My #1 tip would be to put some time into designing a tagging system that matches your slicing needs, since all billing tags are included on CUR data points.
Another Corey working on AWS cost control
now there are two of them
2021-04-27
Anyone know if an EC2 instance refresh is in any way network aware? I need to refresh my ECS cluster’s EC2 instances to use new AMIs and don’t want to cause an outage for any in-progress users
can you add more instances and then scale down?
you want connection draining bascially
Yep, I can add, just wondering if instance refresh would do it, not overly attached to the idea
if you terminate an instance it will do connection draining
instance refresh should do the same
You need a lifecycle hook on your ECS ASG in order to first drain the instances and then terminate them once the running task count per instance has reached 0
Awesome aws autoscaling lifecycle hooks for ECS, EKS - nitrocode/awesome-aws-lifecycle-hooks
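A minimal CLI sketch of those two pieces, assuming placeholder ASG/cluster names (the drain call is typically made by a Lambda or SSM automation triggered by the hook, which then completes the lifecycle action):
# 1) Pause terminating instances in Terminating:Wait so tasks can drain
aws autoscaling put-lifecycle-hook \
  --auto-scaling-group-name ecs-cluster-asg \
  --lifecycle-hook-name drain-ecs-tasks \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 900 \
  --default-result CONTINUE
# 2) Drain the ECS container instance so running tasks are rescheduled
aws ecs update-container-instances-state \
  --cluster my-ecs-cluster \
  --container-instances arn:aws:ecs:us-east-1:111111111111:container-instance/abc123 \
  --status DRAINING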
Do custom eventbridge event buses not take aws.* (e.g. aws.ec2) events? I had a bug in my code where I was putting my event rule on the default event bus. When I changed the code and added it to my custom event bus, I stopped getting events
Figured it out, they don’t, need to use the default event bus
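A sketch of what ended up working: since AWS service events (source aws.*) are only delivered to the default bus, the rule is created without --event-bus-name (rule name and pattern are illustrative):
aws events put-rule \
  --name ec2-state-change \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}'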
We just updated our EKS control plane from 1.15 to 1.16 in our staging cluster. This is the second cluster we’ve done it on, but this one had a problem. All of our NLB target groups except 1 became unhealthy (all targets/EC2 instances) and we ended up recreating the LoadBalancer Services to restore service, with an accompanying DNS change via External DNS.
nginx-ingress:
• Health check nodePorts were responding with 200s from the bastion; no change to security groups, etc.
• EC2 Route Analyzer from the NLB ENI to the EC2 instance ENI where the nginx pod was running was green.
• Nothing interesting in the status from aws elbv2 describe-target-health ....
• Can see the routine Target Group update CloudTrail event, but with all the correct ports and instances.
Have created a support ticket but am a bit anxious about the prod update.
Anyone have any ideas?
I suppose this is a rancher / aws crossover event.
I’m trying to get the Rancher Quickstart https://github.com/rancher/quickstart to work, but I’m having issues getting the SSL correct. I’d also like to use my own domain name (hosted on AWS).
I need this done and I can pay for anyone’s time if they’re looking for a (hopefully) quick gig. ;-)
Contribute to rancher/quickstart development by creating an account on GitHub.
If you’re interested in this as a paid gig either ping me here or email me at [email protected]
2021-04-28
Has anyone tried AWS WAF bot control? Looks like it could be quite expensive, but curious as to how well it works.
Bot Control Feature - AWS WAF - Amazon Web Services (AWS)
Bot Control $10.00 per month (prorated hourly)
are you seeing something different?
It starts to add up when you have many millions of requests though :)
Although it looks like you can use Scope down statements to only use bot control for certain endpoints.
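A hedged sketch of such a rule (name, priority and path are placeholders): the Bot Control managed rule group with a scope-down statement limiting it to /api/ paths, which would then be merged into the web ACL’s Rules list (e.g. via aws wafv2 update-web-acl):
cat > bot-control-rule.json <<'EOF'
{
  "Name": "bot-control-api-only",
  "Priority": 10,
  "OverrideAction": { "None": {} },
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesBotControlRuleSet",
      "ScopeDownStatement": {
        "ByteMatchStatement": {
          "FieldToMatch": { "UriPath": {} },
          "PositionalConstraint": "STARTS_WITH",
          "SearchString": "/api/",
          "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ]
        }
      }
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "bot-control-api-only"
  }
}
EOF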
hey, has anyone got good working examples of deploying and connecting to an EFS from within EKS? i have all the resources deployed and the PVC is connecting to the volume, however when the pod is trying to connect to the PVC i am getting the following error:
Warning FailedMount 0s (x4 over 4s) kubelet MountVolume.MountDevice failed for volume "pv-efsdata" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock: connect: connection refused
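That “connection refused” on csi.sock usually means the EFS CSI node plugin isn’t running on the node. A hedged first check, assuming the driver was deployed with its upstream manifests (daemonset efs-csi-node, label app=efs-csi-node, container efs-plugin in kube-system):
kubectl -n kube-system get daemonset efs-csi-node
kubectl -n kube-system get pods -l app=efs-csi-node -o wide
kubectl -n kube-system logs -l app=efs-csi-node -c efs-plugin --tail=50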
In the case of using a bastion to connect to instances in a load-balanced Elastic Beanstalk environment, is there a common way to dynamically name instances so that I can easily log in to various instances from the bastion?
# Example: easily connecting to instances from the bastion
ssh api-1
ssh api-2
Also, I would be interested to know how this works in load-balanced environments using the EB CLI, e.g., eb ssh.
EB CLI handles this by listing the instances when you run the command. (ref https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html)
This is handy to use in the bastion to connect to various instances in the environment, e.g., eb ssh -n 1. However, this won’t work if the EC2 instances have public IP addresses assigned.
If https://github.com/aws/aws-elastic-beanstalk-cli/issues/3 gets resolved, it might be possible to do this without the constraint on public IP addresses.
See also
• https://serverfault.com/questions/824409/aws-elasticbean-beanstalk-eb-ssh-using-private-dns-names
• https://stackoverflow.com/questions/39613260/ssh-in-to-eb-instance-launched-in-vpc-with-nat-gateway
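For the api-1/api-2 pattern asked about above, a hedged sketch that could run on the bastion: generate ~/.ssh/config entries from the private IPs of the environment’s instances, keyed off the tag Elastic Beanstalk applies (environment name, user and key path are placeholders):
ENV_NAME=my-eb-env
i=0
aws ec2 describe-instances \
  --filters "Name=tag:elasticbeanstalk:environment-name,Values=$ENV_NAME" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PrivateIpAddress' --output text |
tr '\t' '\n' | while read -r ip; do
  i=$((i + 1))
  printf 'Host api-%s\n  HostName %s\n  User ec2-user\n  IdentityFile ~/.ssh/eb-key.pem\n\n' "$i" "$ip"
done >> ~/.ssh/config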
What is this magic? https://aws.amazon.com/about-aws/whats-new/2021/04/ec2-enables-replacing-root-volumes-for-quick-restoration-and-troubleshooting/
for sure!
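A minimal sketch of the new API (IDs are placeholders); omitting --snapshot-id restores the root volume to its original launch state:
aws ec2 create-replace-root-volume-task \
  --instance-id i-0123456789abcdef0 \
  --snapshot-id snap-0123456789abcdef0
aws ec2 describe-replace-root-volume-tasks \
  --replace-root-volume-task-ids replacevol-0123456789abcdef0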
2021-04-29
Can someone explain an nginx reverse proxy like I’m 5? Not certain how this fits with load balancers and all… Maybe more like I’m 18, since 5 year olds probably don’t know what a proxy is.
An nginx reverse proxy is like an HTTP(S) load balancer that sits between the users and your backend servers/services and transparently forwards HTTP traffic to them
We use nginx at our company to front Tomcat servers, do SSL/TLS termination, static content caching, URL filtering/redirection, etc. So the users connect to the nginx server, and it then forwards the connection to one of the backend servers.
That’s a smart-18-year-old answer. How about an answer like I’m 5? I’m still wrapping my head around SSL/TLS termination in this, so that’s a good reminder.
In one use case it’s in front of a single server, so I’m not quite sure what benefit it offers other than TLS/SSL termination.
proxies like nginx give you the ability to take care of some of the request chain before actually routing traffic to your application server.
benefits:
• CPU resources saved - the server running your application only does what it needs to do.
• flexibility - you can add/replace servers behind the proxy seamlessly
Since a load balancer would let me replace a server, that doesn’t apply here, I think.
Let’s take the CPU part. If the app is a Go app, it should be able to handle reasonable concurrency with its HTTP router / gorilla/mux, etc. What load would you consider it helps with in a scenario like that? Any simple examples of a reverse proxy in front of a normal web server that might help give me context?
Learn the difference between a reverse proxy vs. load balancer, and how they fit into an web serving and application delivery architecture
Nice! Got me started on the right track. Making more sense now. Appreciate it
I am writing some web services returning JSON data, which have lots of users. What are the benefits of using nginx in front of my server compared to just using the Go http server?
Also, it looks like the Gorilla web toolkit offers much of that functionality, albeit with a bit more work on middleware and all. Good info!
“Let’s take the CPU part. If the app is a Go app it should be able to handle reasonable concurrency with it’s http router/gorillamux etc.”
it sounds like you’re only considering valid requests. Something to remember is that when you have an application exposed to the world, it’s going to get crawled, and bad actors are going to try to poke holes in it. By having layers (load balancers, reverse proxies, web application firewalls) in the request chain that can filter out erroneous or potentially malicious requests, you save your application servers from having to spend the CPU cycles to discard those requests.
Looking for some friendly advice. A customer has 2 AWS Orgs:
• First Org has Legacy Landing Zone Setup
• Second Org has the shiny Control Tower setup
I would like to make them one AWS CT Org. I was wondering if it would just be easier to move the accounts from the Legacy Landing Zone Org to the CT Org, or if I should convert the Landing Zone Org to CT and then move the accounts from the other CT Org to the “new” CT Org?
Sounds like you’ll have a mess if you just move accounts over, no? The legacy accounts will be using legacy configurations, and so your second org will be a “salad” of things.
They hired me for this job lol
Any chance you can move the accounts one by one, but making sure they are cleaned up and meet the shiny org’s structure and settings?
The new CT org has only dev workloads, and the legacy LZ has production workloads but is not managed well
@Yoni Leitersdorf (Indeni Cloudrail) that’s what I’m getting at. I’m trying to find the least-used AWS account in the Legacy Org and will move it across to the shiny CT Org
Can you do this without converting legacy org to use CT?
Any chance all of this is built using IaC?
They hire us when shit has hit the fan
The Legacy org is all hand jammed
define the configs in terraform, import everything, and use atlantis or TFC
I think that will be a lot of work
I’m thinking lean, and trying to find a way to first get them into one AWS Org with CT
It depends on what they want to achieve - a consistent org, or a mess?
CT is all cloudformation under the covers, right? does it even support the features needed to import or otherwise move resources between stacks?
and to update the existing resources to match new/different configs in the CT org?
moving them into one org is not all that hard. it’s a well defined process. iirc, you “leave” the org with the root user of the member account to make it a standalone account, send the invite from the new org, and accept the invite from the (now) standalone account
@loren just read the AWS KB on that too
but i have no idea on how an invited account would be managed by CT…
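A hedged CLI outline of that leave/invite/accept process (account and handshake IDs are placeholders; the leave and accept steps use credentials from the member account itself, e.g. its root user):
# From the member account: leave the legacy org
aws organizations leave-organization
# From the new org's management account: invite the now-standalone account
aws organizations invite-account-to-organization \
  --target '{"Id": "111111111111", "Type": "ACCOUNT"}'
# From the member account: accept the invitation (handshake)
aws organizations accept-handshake --handshake-id h-examplehandshakeid111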
So I think my best option would be to start with the least productive account in the Legacy Org and migrate it to the CT Org, maybe into an “unregistered” OU, park the account there, then apply the CT guardrails, rinse and repeat
in terraform, that’s pretty well defined also. cumbersome, but well defined. hence, my suggestion
could you maybe create a new account in the legacy org to test the workflow, instead of relying on the least productive?
Thanks for the suggestion.
it’s an interesting problem. i’d love to hear more on how it works out for you. this type of workflow was one of our main considerations for why we’ve avoided both LZ and CT (and CFN), and put all our focus on terraform… but if CT/CFN can actually do it……
Yes definitely
It’s a fun problem.