#aws (2021-04)

aws Discussion related to Amazon Web Services (AWS)


Archive: https://archive.sweetops.com/aws/

2021-04-01

mikesew avatar
mikesew

Just curious if people have purchased AWS RDS Reserved Instances before, any best practices / pitfalls to share.

• I understand that we cannot use AWS savings plans, only RI’s.

• do you guys standardize on a specific instance size (ie. m4.2xlarge) so you don’t waste reservations?

• confirm my understanding that if we purchase an RDS RI, it’ll automatically apply to existing instances? We don’t need to re-spin a new RDS from snapshot, right?

Marcin Brański avatar
Marcin Brański

Yes, you don’t need to re-spin. It automatically matches the reservation. Savings Plans can’t be applied to RDS.
do you guys standardize on a specific instance size (ie. m4.2xlarge) so you don’t waste reservations?
This depends. I actually never did it.

Check the “How billing works” section at https://aws.amazon.com/rds/reserved-instances/

Amazon RDS Reserved Instances | Cloud Relational Database | Amazon Web Services

With Amazon RDS Reserved Instances, reserve a DB instance for a one or three year term at a significant discount compared to the On-Demand Instance pricing for the same DB instance.
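For anyone shopping the offerings before purchasing, the CLI can list what’s available; a rough sketch (the class, engine, and offering type below are just example values):

aws rds describe-reserved-db-instances-offerings \
  --db-instance-class db.m4.2xlarge \
  --product-description postgresql \
  --duration 31536000 \
  --offering-type "No Upfront"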

Zach avatar

If your RDS instances are all in the same family (i.e. M or R, etc.) then the reservation is size-flexible and can be applied to any instance in that family

Zach avatar

for example if you are using entirely XLs you could actually purchase all Large RIs, and 2 Large RIs would cover 1 XL instance. Or vice versa, an XL RI could cover 2 Large RDS instances.

1
mikesew avatar
mikesew

Thanks - I see people in my org have done EC2 reservations but not savings plans. I like the flexibility option since I doubt I’d ever move from a t- to an m- family; most likely I’d just go up and down within a family (i.e. m4)

mikesew avatar
mikesew

so in theory, this would be a completely separate, non-impacting change/process that doesn’t affect my Terraform/deployment setup at all. I simply arrange the purchase with my cloud platform team / AWS admins

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

has anyone seen this before ….

cloud-nuke defaults-aws
INFO[2021-04-01T13:40:37+01:00] Identifying enabled regions
ERRO[2021-04-01T13:40:37+01:00] session.AssumeRoleTokenProviderNotSetError AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
github.com/gruntwork-io/[email protected]/errors/errors.go:81 (0x16a1565)
runtime/panic.go:969 (0x1036699)
github.com/aws/[email protected]/aws/session/session.go:318 (0x1974a25)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:50 (0x19749ca)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:66 (0x1974b36)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:86 (0x1974ce6)
github.com/gruntwork-io/cloud-nuke/commands/cli.go:281 (0x199506c)
github.com/gruntwork-io/[email protected]/errors/errors.go:93 (0x16a175e)
github.com/urfave/[email protected]/app.go:490 (0x1691402)
github.com/urfave/[email protected]/command.go:210 (0x169269b)
github.com/urfave/[email protected]/app.go:255 (0x168f5e8)
github.com/gruntwork-io/[email protected]/entrypoint/entrypoint.go:21 (0x1996478)
github.com/gruntwork-io/cloud-nuke/main.go:13 (0x19966a7)
runtime/proc.go:204 (0x10395e9)
runtime/asm_amd64.s:1374 (0x106b901)
  error="AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."
Marcin Brański avatar
Marcin Brański

it does say “AssumeRoleTokenProviderNotSetError: assume role with MFA enabled”. Do you have MFA enabled?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Yes. I don’t think cloud-nuke can handle it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We also have MFA

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My guess though is your ~/.aws/config is missing something
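For reference, an assume-role profile that can prompt for MFA usually needs an mfa_serial entry in ~/.aws/config, roughly like this (account IDs, role, and user names are placeholders); whether a given tool’s SDK session actually wires up the MFA token prompt is a separate question:

[profile target-account]
role_arn       = arn:aws:iam::111111111111:role/OrganizationAccountAccessRole
source_profile = default
mfa_serial     = arn:aws:iam::222222222222:mfa/my-user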

2021-04-02

loren avatar

Does anyone know of tools that can evaluate function code (e.g. lambda), identify API actions in the code, and compare those actions against a role or set of policy documents to determine whether all the permissions are accounted for?

1
loren avatar

Getting tired of being surprised after a deploy when an execution fails because the lambda was missing a permission…

Darren Cunningham avatar
Darren Cunningham

no, but if you find something I’m very interested in it!

1
Darren Cunningham avatar
Darren Cunningham

I’ve thought about creating a Managed Policy that was overly permissive and assigning Lambdas that policy, then writing something that ran on a CloudWatch rule, evaluated the CloudTrail log history for a given Lambda over the past 30 days, and then automatically provisioned a new role with the correct minimum required access and updated the Lambda to use it – but all that seems too magical, and like it would be potentially messy when trying to later update a Lambda with more access

loren avatar

oh, i think that kind of thing does exist… https://github.com/duo-labs/cloudtracker

duo-labs/cloudtracker

CloudTracker helps you find over-privileged IAM users and roles by comparing CloudTrail logs with current IAM policies. - duo-labs/cloudtracker

bradym avatar

I’ve only tested this with the aws cli, but maybe it’d work if you ran the lambda code locally?

https://github.com/iann0036/iamlive

iann0036/iamlive

Generate an IAM policy from AWS calls using client-side monitoring (CSM) or embedded proxy - iann0036/iamlive

loren avatar

if i squint, i can kinda see how i might use iamlive to capture mocked api actions from my unit tests, but i’d still need to build something to analyze its output and compare against the role/policy… https://github.com/iann0036/iamlive

Darren Cunningham avatar
Darren Cunningham

glad I wouldn’t have to make it, but again I don’t think it’s the right solution, as you’re counting on the CloudTrail auditing to prune the permissions; it doesn’t really allow you to have ephemeral IaC that you can freely move between accounts

loren avatar

oh haha, @bradym beat me to iamlive

bradym avatar

great minds… or something

loren avatar

we’ve started writing unit tests for lambda code and mocking the api calls with moto. so that might be an avenue…

loren avatar

well i do use terraform to deploy the lambda and its execution role, so i could compare the iamlive output and the iam policy from the terraform plan/output…
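As a rough sketch of that comparison (file names are hypothetical: one policy captured by iamlive, one rendered from the Terraform output), jq can flag actions the role is missing, ignoring wildcards:

# actions the code was observed to need vs. actions the role actually grants
jq -r '.Statement[] | .Action | if type == "array" then .[] else . end' iamlive-policy.json | sort -u > observed.txt
jq -r '.Statement[] | .Action | if type == "array" then .[] else . end' terraform-role-policy.json | sort -u > granted.txt
comm -23 observed.txt granted.txt   # anything printed here is an action the role lacks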

loren avatar

hmm, puresec made a serverless plugin for exactly this, before they were bought by palo alto. unfortunately now the project seems inactive, https://github.com/puresec/serverless-puresec-cli/

puresec/serverless-puresec-cli

Serverless plugin for least privileges. Contribute to puresec/serverless-puresec-cli development by creating an account on GitHub.

loren avatar

seeing lots of tools analyzing roles and comparing allowed permissions against cloudtrail usage or iam access analyzer, but basically nothing i can use in a pipeline before a deployment

Darren Cunningham avatar
Darren Cunningham

Palo Alto strikes again – I’m afraid Bridgecrew will fall into this category too

1
Zach avatar

the same guy who wrote iamlive has a number of ‘iamfast’ repos that will inspect your code and produce a policy. It’s very much “beta”

Zach avatar

also the minute that you have code that does something like “create cloudwatch stack” I think all bets are off

loren avatar

right, cloudformation becomes the executing agent… i aint doing that. i’ll just have terraform do it. and i’ll see the errors on tf apply

loren avatar

for me this is mostly a lambda that deploys fine, but then gets an event, executes, and dies because we missed a permission. it’s kinda “async” in terms of feedback of the error

loren avatar

thanks for the iamfast pointer… that’s promising, though i don’t want to generate the policy, i want to compare against a policy i already assign… https://github.com/iann0036/iamfast-python

iann0036/iamfast-python

Contribute to iann0036/iamfast-python development by creating an account on GitHub.

Zach avatar

ah. I think you’d have to do the diff manually … or write something on top of that

loren avatar
Use as a "linter" of sorts to validate a provided policy has required actions? · Issue #1 · iann0036/iamfast-python

Hi, this is a really cool project! I was wondering if you had ideas on how to do basically what you're doing, but instead of generating an iam policy, compare the discovered actions against a p…

2

2021-04-05

Mohammed Yahya avatar
Mohammed Yahya
Disaster Recovery (DR) Architecture on AWS, Part I: Strategies for Recovery in the Cloud | Amazon Web Services

As lead solutions architect for the AWS Well-Architected Reliability pillar, I help customers build resilient workloads on AWS. This helps them prepare for disaster events, which is one of the biggest challenges they can face. Such events include natural disasters like earthquakes or floods, technical failures such as power or network loss, and human actions […]

4
2
Jonathan Le avatar
Jonathan Le

Never seen the term Pilot light before. Thanks for sharing.


2021-04-06

bradym avatar

We’re looking at sending events to Kinesis from our frontend app. The examples in the AWS docs all tell you to use Cognito for this, but it’s not clear to me how/if that makes it any more secure or if it’s just obfuscation? Any thoughts/experiences here?

aaratn avatar
Amazon Cognito Streams - Amazon Cognito

Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time.

aaratn avatar

If it’s this one, it’s only applicable if you are using aws amplify + cognito in your web app

bradym avatar

Should have provided the link, this is the example we’re looking at.

bradym avatar

Ugh, wrong one… too many aws tabs open

aaratn avatar

I see, this uses federated identities, which you can use. It’s a common pattern when you want to interact with aws services from the frontend

bradym avatar

I guess my main question is, if you setup iam creds correctly with only the exact permissions you need, what is the benefit of introducing cognito?

aaratn avatar

Well, how would you hook up IAM on your web app? It’s running on the client side (the user’s browser)

aaratn avatar

Could be okay if you want to do server-side operations

bradym avatar

What’s being suggested to me is that since the IAM user would only have very specific permissions… we could just put the IAM creds in the browser. I’m pushing back against that, but need to better understand why that is more risky than putting an identity pool id in the browser.

aaratn avatar

Yeah, not a good idea to put aws access keys in browser

bradym avatar

Yeah… that’s what I’m saying!

aaratn avatar

id-pool is a common aws design pattern which you should be using for these kinds of operations, it’s isolated

bradym avatar

Any chance you can point me at some documentation that would help me understand it better?

aaratn avatar

https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-browser-credentials-cognito.html

Amazon Cognito helps you manage the abstraction of identities across multiple identity providers with the AWS.CognitoIdentityCredentials object. The identity that is loaded is then exchanged for credentials in AWS STS.

https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html

What you basically wanna do is create an un-authenticated role for the cognito identity pool and have that role access the kinesis

Using Amazon Cognito Identity to Authenticate Users - AWS SDK for JavaScript

The recommended way to obtain AWS credentials for your browser scripts is to use the Amazon Cognito Identity credentials object, AWS.CognitoIdentityCredentials . Amazon Cognito enables authentication of users through third-party identity providers.

Role-Based Access Control - Amazon Cognito

Concepts for role-based access control.
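For reference, the policy attached to that unauthenticated role can be scoped very tightly; a minimal sketch (region, account ID, and stream name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kinesis:PutRecord", "kinesis:PutRecords"],
      "Resource": "arn:aws:kinesis:us-east-1:111111111111:stream/frontend-events"
    }
  ]
}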

aaratn avatar

Hope this helps you avoid putting access keys in your frontend app

bradym avatar

Thank you!

1
bradym avatar

Fortunately since I first posted it’s been decided we’re going to have the frontend post to a backend server that will forward data onto kinesis. Maybe not the most elegant way, but better by far than aws creds in the frontend!

1
bradym avatar

I’m still gonna do some reading so I can better understand cognito for the next time something crazy like this comes up.

Darren Cunningham avatar
Darren Cunningham

I’m thinking about using a sidecar on my Fargate Service to proxy the database connection.

I’m thinking this helps address two issues:

  1. Simplify configuration - the applications can always use localhost:<port> to connect to the database (though of course connection details could still be set through an env var just in case)
  2. Security - nothing (besides code reviews) is stopping an application developer from printing the database connection details in their application. But if the auth is associated with a sidecar that they don’t have access to, they can’t leak it.

I tried searching but I don’t see people talking about this. So I’m thinking either this is a bad idea or it’s just so obvious that I should have done it sooner…
cool-doge1
jose.amengual avatar
jose.amengual

I think that is a bad idea, it is yet another hop in between the db and the app and something else you have to maintain

jose.amengual avatar
jose.amengual

you can use RDS proxy to do some connection handling but I guess the reason you have not seen any result to your question is because you are already in a container environment where you can pass ENV variables, deploy easily etc

jose.amengual avatar
jose.amengual

no reason to over complicate things

Darren Cunningham avatar
Darren Cunningham

I think it’s trading complexities rather than adding and the extra hop is nominal. My application priorities are stability and security, not saving nanoseconds or even milliseconds for that matter.

to that point, I’m mostly trying to address the latter concern of DB connection details being able to just be printed. But I guess to that point I could just be using IAM Authentication and locking down the Role.

Alex Jurkiewicz avatar
Alex Jurkiewicz

This change would exchange security for extra maintenance overhead and reduced developer autonomy.

You have to trust developers to some extent. How paranoid you need to be depends on your company industry and regulatory environment. If you can rely on existing controls, it will make your life easier. For example, your PR process will already ensure nobody adds this sort of code.

Darren Cunningham avatar
Darren Cunningham

yup, which is why I’m exploring the options. I see the pros/cons, but it’s fun to talk about them.

Darren Cunningham avatar
Darren Cunningham

wearing my developer hat, I think it’s cool to not have to think about db connection details and just be able to use localhost all the time and know that it’s going to be there.

Darren Cunningham avatar
Darren Cunningham

but cool doesn’t necessarily mean it’s good

Darren Cunningham avatar
Darren Cunningham

I don’t think that a PR process ensures that a line like this couldn’t be entered - I think it’s pretty easy to overlook a print/log statement.

Darren Cunningham avatar
Darren Cunningham

and I’m not even accounting for bad actors, I think this could easily be a whoops

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t think there is any added convenience from the developer with this approach. If you read the hostname from an env var, that’s no harder than a hardcoded localhost

Darren Cunningham avatar
Darren Cunningham

fair

Alex Jurkiewicz avatar
Alex Jurkiewicz

You are right that the PR process is not perfect. Governance never is. But it’s a lot cheaper and often Good Enough. The PR process protects against all developer bad actors. The sidecar approach protects against only a single kind of compromise. For example, a developer could write code to dump specific data to the log file, or even POST it to a remote system.

Alex Jurkiewicz avatar
Alex Jurkiewicz

(For example, a product I work on has functionality where our customers can add a webhook subscription for all changes. A developer could write a migration that adds their own webhook subscription for a customer and exfiltrate data that way. We trust our PR process to detect this.)

1
Darren Cunningham avatar
Darren Cunningham

thank you

1
Darren Cunningham avatar
Darren Cunningham
Setting up a ProxySQL Sidecar Container - Percona Database Performance Blog

Setting up a ProxySQL sidecar container to manage connections in AWS ECS (Elastic Container Service).

Darren Cunningham avatar
Darren Cunningham

was linked to this by AWS Support - I asked them the same initial question

Steven Hopkins avatar
Steven Hopkins


I guess to that point I could just be using IAM Authentication and locking down the Role
Are there issues for you with using this solution?

Darren Cunningham avatar
Darren Cunningham

nope, I just haven’t done it before.

Steven Hopkins avatar
Steven Hopkins

Looks like these are the limitations you need to watch out for https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Limitations

Mainly

• Instance type and max connections

• Are you using CNAME

• I don’t know if you mentioned the RDS type, but if it’s Postgres and you are using the master user (hopefully not), the login method may need to change for you

IAM database authentication for MySQL and PostgreSQL - Amazon Relational Database Service

Authenticate to your DB instance or cluster using IAM database authentication.

2
Darren Cunningham avatar
Darren Cunningham

Using PG and it will not be the master, we’d be creating IAM Roles and DB Users accordingly
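For reference, the flow with IAM auth on Postgres is roughly: GRANT rds_iam TO the database user, then have the task generate a short-lived token (valid for 15 minutes) to use as the password. A sketch with placeholder values:

aws rds generate-db-auth-token \
  --hostname mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com \
  --port 5432 \
  --username app_user \
  --region us-east-1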

Steven Hopkins avatar
Steven Hopkins

That solution gives you the separation of concerns, if that’s what you are looking for, and removes the extra hop

jose.amengual avatar
jose.amengual

and IAM auth is not recommended for production workloads, only humans

Steven Hopkins avatar
Steven Hopkins

you’re referring to
The maximum number of connections per second for your DB instance
correct?

jose.amengual avatar
jose.amengual

yes

1
jose.amengual avatar
jose.amengual

there is a limit of 200 connections per token

Steven Hopkins avatar
Steven Hopkins

right on, trying to find the aws doc/reference for that, do you have it?

jose.amengual avatar
jose.amengual

Mysql : Use IAM database authentication when your application requires fewer than 200 new IAM database authentication connections per second.

Steven Hopkins avatar
Steven Hopkins

got it, and postgres is just based off instance limits then?

jose.amengual avatar
jose.amengual

the recommended way to do this for apps is to use the Secrets Manager auto-rotation

Steven Hopkins avatar
Steven Hopkins

Do you have some reference to that ^ for the OP

jose.amengual avatar
jose.amengual

I do not know, but if MySQL has that limit I would say it’s the same for Postgres, since MySQL is always ahead of Postgres in AWS support

jose.amengual avatar
jose.amengual
Rotating Secrets for Supported Amazon RDS Databases - AWS Secrets Manager

Automatically rotate your Amazon RDS database secrets by using Lambda functions provided by Secrets Manager invoked automatically on a defined schedule.

1
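(For the application side of that pattern, fetching the current credentials is a single call at startup or after a rotation; the secret name below is a placeholder:)

aws secretsmanager get-secret-value \
  --secret-id prod/mydb/credentials \
  --query SecretString \
  --output text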

2021-04-07

2021-04-09

mikesew avatar
mikesew

does anybody have a lookup table (or script) for all the RDS instance types and their core/processor counts?  This is for calculating licenses. An AWS CLI option like aws rds describe-instance-types would have been great. (edited)

Mohammed Yahya avatar
Mohammed Yahya

A free and easy-to-use tool for comparing EC2 Instance features and prices.

RB avatar

hmm idk. your best bet might be scraping https://aws.amazon.com/rds/instance-types/

Amazon RDS Instance Types | Cloud Relational Database | Amazon Web Services

Amazon RDS provides a selection of general purpose and memory optimized instance types to fit different relational database use cases.

mikesew avatar
mikesew

yeah i know, it’s a little unfortunate that i gotta resort to that but totally, thanks.


RB avatar

nah i was wrong. you want this.

aws rds describe-orderable-db-instance-options --engine postgres
RB avatar

You can also add this to only get the engine version and DBInstanceClass

--query 'OrderableDBInstanceOptions[][EngineVersion, DBInstanceClass]'
RB avatar

add this to only get a specific version’s instance type

--engine-version 13.1
managedkaos avatar
managedkaos

@RB you are correct. You came back with a solution before i could!

I was looking at getting all of them with a script like this:

REGION=us-east-1
aws rds describe-db-engine-versions --query="DBEngineVersions[].{Version:EngineVersion,Engine:Engine}" --output=text | while read engine version;
do
    for instance_class in $(aws rds describe-orderable-db-instance-options --engine "${engine}" --engine-version "${version}" --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" --output text --region us-east-1);
    do
        echo "${engine} ${version} ${instance_class}"
        aws rds describe-orderable-db-instance-options --engine "${engine}" --db-instance-class "${instance_class}" --output=json --query="OrderableDBInstanceOptions.AvailableProcessorFeatures"
    done
done
RB avatar

yowza. looks like it could work but at that point, id just use python.

that wall of text is terrifying

1
managedkaos avatar
managedkaos

i haven’t run it yet but i think that will give you everything

managedkaos avatar
managedkaos

i am getting null for the processor stuff though. might need to tweak my query a bit. left to the reader!

RB avatar

haha nice nice

RB avatar

you could dump all the json and convert the json to a csv. might be easier

managedkaos avatar
managedkaos

yeah

managedkaos avatar
managedkaos

and i agree, when you start looping over services, a pythonic approach will save your sanity

managedkaos avatar
managedkaos

this approach works but i’m looking at this doc: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#USER_ConfigureProcessor.CLI.Example3

and all the queries i’ve done come up with null for the AvailableProcessorFeatures.

Are you seeing anything different?

DB instance classes - Amazon Relational Database Service

Determine the computation and memory capacity of an Amazon RDS DB instance by its DB instance class.

mikesew avatar
mikesew

incredible! thanks for the input, didn’t realize about describe-orderable-db-instance-options . The max-storage & max-IOPS info is really nice. Was there any other table/lookup scheme that actually expands the instance classes themselves (ie. db.m4.4xlarge ) to actual core counts / GB? Again, that was where I was trying to help with my licensing true-up.

sheldonh avatar
sheldonh

https://github.com/vantage-sh/ec2instances.info

And https://github.com/vantage-sh/ec2instances.info/blob/master/rds.py

Might give you a great start as well. Really cool site, and the source is available.

vantage-sh/ec2instances.info

Amazon EC2 instance comparison site. Contribute to vantage-sh/ec2instances.info development by creating an account on GitHub.
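One rough way to get core counts per class, assuming a db.m4.4xlarge shares the specs of the plain m4.4xlarge EC2 type (i.e. just strip the db. prefix), is to lean on the EC2 API:

# vCPU, core, and memory info for the EC2 equivalent of db.m4.4xlarge
aws ec2 describe-instance-types \
  --instance-types m4.4xlarge \
  --query 'InstanceTypes[].{Type:InstanceType,vCPUs:VCpuInfo.DefaultVCpus,Cores:VCpuInfo.DefaultCores,MemMiB:MemoryInfo.SizeInMiB}' \
  --output table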

Victor Grenu avatar
Victor Grenu

Folks, I’ve just released a small tool to run AWS Access Analyzer Policy Validation against all your IAM Policies at account level. https://github.com/z0ph/aa-policy-validator Let me know if it helps!

z0ph/aa-policy-validator

Validate all your Customer Policies against AWS Access Analyzer - Policy Validator - z0ph/aa-policy-validator

5
1

2021-04-11

2021-04-12

uselessuseofcat avatar
uselessuseofcat

Hi, I’ve removed a subnet from a Beanstalk environment and rebuilt it. Instances are no longer being launched in it, but the load balancer from that environment is still using network interfaces belonging to the removed subnet. How can I fix this? Thanks!

Alex Jurkiewicz avatar
Alex Jurkiewicz

To edit EB subnets, you need to recreate the environment from scratch, I believe

mikesew avatar
mikesew

Q: I’m trying to optimize RDS patches (DB minor engine upgrades). Is it true that if I take a snapshot BEFORE the patch operation (say, 1 hour before), it would reduce the time it takes for the initial pre-patch snapshot?

Tamas Kadar avatar
Tamas Kadar

Q: I’m trying to build a simple PoC with CDK Pipelines and is there really no way to use GitLab as a source? Am I missing something obvious here?

2021-04-13

jose.amengual avatar
jose.amengual

Question: API Gateway VPC link subnets can be public or private, but if the endpoint is public, should I use a public or private subnet? With either kind of subnet I can reach the endpoint, and with both I can use a test endpoint outside the VPC, and the docs… well… the docs are not the best at explaining this part

1
managedkaos avatar
managedkaos

@jose.amengual your message is very timely for me. for the past few weeks i’ve been working on an HTTP API Gateway that is intended to be private. I say HTTP specifically because most of the documentation I have seen centers on REST APIs.

That aside, I am using VPC Link to attach the API to a private, internal NLB. In this case, the NLB and the EC2 instances are in private subnets. Also, I’m using private subnet for my VPC Link.

As in your case, I can access my API from the internet.

However, I have a custom domain for the API endpoint which is a subdomain on a public hosted zone. So if you know the endpoint, you can resolve it. Even if you don’t have a custom domain, the API GW still has a default endpoint with something like amazonaws.com on it which is also publicly resolvable.

So i explored using VPC endpoints. This may be the solution for you if the API is intended to be used internally only by other AWS resources like EC2, Lambda, etc. But note that whichever subnet you create the API GW endpoint in, *all* services in that subnet will use it if they need to access API GW. It might be OK but it also might not be what you want. Also, VPC endpoints are great for keeping traffic inside your VPC. But in my case, I’m trying to expose a service for applications outside of my VPC but only on the private network (peered VPCs).

My next iteration on this is to try a custom domain in a Private Hosted Zone. The intent being the endpoint will not be resolvable outside of my VPC and any peered VPCs/networks.

If you come up with a solution that doesn’t involve a private hosted zone, I would be happy to hear it!

jose.amengual avatar
jose.amengual

interesting

jose.amengual avatar
jose.amengual

in my case the alb with the api is public

jose.amengual avatar
jose.amengual

and the api gateway api is public too

jose.amengual avatar
jose.amengual

the only reason to use the api gateway is to transition old apis to the new apis so we need to route some stuff to a lambda and such

managedkaos avatar
managedkaos

ahh i see. so in your case, i’m not sure. but if you intend for the API to be public, i would just go for the easy win and put the VPC link in the public subnet.

jose.amengual avatar
jose.amengual

so in my case I could get away with not using a VPC link, BUT it is required if you want to use a load balancer endpoint

managedkaos avatar
managedkaos

yeah yeah.

managedkaos avatar
managedkaos

makes sense. So yeah, i see the need for VPC Link.

jose.amengual avatar
jose.amengual

the weird thing is this: if I send all the traffic to an external public endpoint, it works

jose.amengual avatar
jose.amengual

if I send it to the alb I get 503

1
jose.amengual avatar
jose.amengual

so I was trying to figure out why

managedkaos avatar
managedkaos

ahh is the ALB in the public and private subnets? otherwise it won’t be able to route to anything on the back end in the private subnet.

jose.amengual avatar
jose.amengual

since it’s a public alb there is no SG to worry about, but since I was using the VPC link, what would be the SG to use?

jose.amengual avatar
jose.amengual

it gets confusing

managedkaos avatar
managedkaos

yeah yeah! :sweat_smile:

seems like the SG on your ALB should allow traffic from 0.0.0.0/0 on the ports (80 and 443, perhaps) but it should also be in the public and private subnets.

jose.amengual avatar
jose.amengual

the ALB is internet facing, in front of the EKS cluster

jose.amengual avatar
jose.amengual

it’s created by the ALB controller

jose.amengual avatar
jose.amengual

yes, it is 0.0.0.0/0 since it’s public

managedkaos avatar
managedkaos

i see. is it also in the EKS security group? so, subnets and security groups should all match up.

jose.amengual avatar
jose.amengual

well the alb works just fine without API gateway

1
managedkaos avatar
managedkaos

ok i misunderstood. the ALB works but the API pointing at the ALB does not. is that correct?

jose.amengual avatar
jose.amengual

api gateway pointing at the alb does not

managedkaos avatar
managedkaos

got it

jose.amengual avatar
jose.amengual

but if I add an integration to an ngrok url pointing to my computer it works just fine

managedkaos avatar
managedkaos

your set up might be different but here’s the TF code i am using for mine:

resource "aws_apigatewayv2_vpc_link" "link" {
  name               = "${var.name}-${var.environment}"
  security_group_ids = var.security_group_ids
  subnet_ids         = var.subnet_ids

  tags = merge(var.tags, {
    "Resource" = "aws_apigatewayv2_vpc_link"
  })

  lifecycle {
    ignore_changes = [tags]
  }
}

resource "aws_apigatewayv2_integration" "api" {
  api_id             = aws_apigatewayv2_api.api.id
  description        = var.description
  connection_id      = var.vpc_link_id
  integration_type   = "HTTP_PROXY"
  connection_type    = "VPC_LINK"
  integration_method = "ANY"
  tls_config {
    server_name_to_verify = var.server_name_to_verify
  }

  # For an HTTP API private integration, specify the ARN of an Application Load Balancer listener
  integration_uri = var.integration_uri
}

resource "aws_apigatewayv2_route" "api" {
  api_id             = aws_apigatewayv2_api.api.id
  route_key          = "$default"
  target             = "integrations/${aws_apigatewayv2_integration.api.id}"
  authorization_type = "CUSTOM"
  authorizer_id      = aws_apigatewayv2_authorizer.api.id
}

There’s more but these are the pieces that use VPC Link top the ALB.

managedkaos avatar
managedkaos

I would say, maybe check your integration_uri. Mine looks like this:

    "integration_uri" = "arn:aws:elasticloadbalancing:us-east-1:ABCDEFGHIJKLMN:listener/net/xyz-qa-api/2484cd5ecaba5155/d33d18f01a965145"
managedkaos avatar
managedkaos

But i am also using an HTTP API not a REST API so your mileage may vary.

jose.amengual avatar
jose.amengual

same url for me, it’s the https listener arn

jose.amengual avatar
jose.amengual

did you define payload_format_version = "1.0"?

jose.amengual avatar
jose.amengual

for the integration?

jose.amengual avatar
jose.amengual

mmm default is 1.0

managedkaos avatar
managedkaos

checking

managedkaos avatar
managedkaos

i get the same. not defined but that is how its configured:

    "payload_format_version" = "1.0"
jose.amengual avatar
jose.amengual

I’m starting to think that the ALB is meant to be internal and not internet facing

managedkaos avatar
managedkaos

hmmm, could be. however in my config the NLB is indeed internal. but still accessible from the API GW!

jose.amengual avatar
jose.amengual

yes, through the VPC link

managedkaos avatar
managedkaos

yep

jose.amengual avatar
jose.amengual

I bet if I add an integration to point directly to the ALB public url it will work

jose.amengual avatar
jose.amengual

what is your config for the SG of the vpc link ?

jose.amengual avatar
jose.amengual

do you have inbound rules and outbound ?

jose.amengual avatar
jose.amengual

or just inbound ?

managedkaos avatar
managedkaos

it has both inbound and outbound.

managedkaos avatar
managedkaos

in:

Type          Protocol   Port range   Source        Description
HTTP          TCP        80           10.0.0.0/8    –
HTTPS         TCP        443          10.0.0.0/8    –

out:

Type          Protocol   Port range   Destination   Description
All traffic   All        All          0.0.0.0/0     –
jose.amengual avatar
jose.amengual

what about your HTTPS hostname ?

jose.amengual avatar
jose.amengual

are you using a wildcard cert?

managedkaos avatar
managedkaos

i have that set to match the name of the site hosted at the ALB and yes, using a wild card cert

jose.amengual avatar
jose.amengual

in my case I have multiples services behind…

jose.amengual avatar
jose.amengual

so the HTTPS Hostname = *.pepe.api.thatkeepscrashing.com

managedkaos avatar
managedkaos

hmm i wonder if that’s the catch? meh, not sure. but wondering if the API “hostname” needs to match the ALB hostname explicitly?

managedkaos avatar
managedkaos

this is the part i am referring to:

  tls_config {
    server_name_to_verify = var.server_name_to_verify
  }
jose.amengual avatar
jose.amengual

but why the ALB hostname ?

managedkaos avatar
managedkaos

on the integration

jose.amengual avatar
jose.amengual

the api gateway should pass the Host header from the request

managedkaos avatar
managedkaos

ahh not really the ALB hostname, but the name of the site that matches the cert attached to the ALB

jose.amengual avatar
jose.amengual

yes, that yes

managedkaos avatar
managedkaos

jose.amengual avatar
jose.amengual

but that will happen with a plain ALB

jose.amengual avatar
jose.amengual

I mean it’s the same requirement for a public ALB

managedkaos avatar
managedkaos

i would imagine you could not verify the hostname with a cert

jose.amengual avatar
jose.amengual

CN = cert = host name

jose.amengual avatar
jose.amengual

correct

managedkaos avatar
managedkaos

but it should still route

jose.amengual avatar
jose.amengual

so I just created a new VPC link manually

jose.amengual avatar
jose.amengual

using private subnets

jose.amengual avatar
jose.amengual

same SG ( after I added the outbound rules)

jose.amengual avatar
jose.amengual

and now the ALB is responding

managedkaos avatar
managedkaos

jose.amengual avatar
jose.amengual

the old integration was using public subnets

managedkaos avatar
managedkaos

ahh got it

jose.amengual avatar
jose.amengual

but now I wonder if the SG was the problem all along

managedkaos avatar
managedkaos

jose.amengual avatar
jose.amengual

I’m going to test it and see

managedkaos avatar
managedkaos

jose.amengual avatar
jose.amengual

and now even works with the custom domain name too

managedkaos avatar
managedkaos

sweet!

jose.amengual avatar
jose.amengual

Thanks @managedkaos

managedkaos avatar
managedkaos

yeah no problem! it might be a week or so before i have any updates on my situation but i’ll keep you posted

jose.amengual avatar
jose.amengual

for your use case you can have VPC endpoint policies to restrict access to the private api endpoint

jose.amengual avatar
jose.amengual

I have used it in the past to restrict to only certain VPCs, for example
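One common way to express “only these VPCs” is a resource policy on the private API itself (instead of, or in addition to, the endpoint policy); a hedged sketch, with the VPC ID as a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*",
      "Condition": {
        "StringNotEquals": { "aws:SourceVpc": "vpc-0123456789abcdef0" }
      }
    }
  ]
}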

jose.amengual avatar
jose.amengual

and FYI you can’t use a custom domain name in the private API

managedkaos avatar
managedkaos

agree, we are prepared to skip the custom domain name on a private API. but i don’t know all the VPCs that will need to access the endpoint. so setting up an endpoint with rules will be difficult. This would be an API for use in an enterprise with many (hundreds) of AWS VPCs in addition to networks in other clouds.

jose.amengual avatar
jose.amengual

ahhh I see ok

managedkaos avatar
managedkaos

we really wanted a vanity URL but have gotten past that part. we just want it to be private now

jose.amengual avatar
jose.amengual

could you proxy to a LAMBDA authorizer to do authorization for the endpoint?

managedkaos avatar
managedkaos

yes, we have an authorizer in place. so that will be the gate for now.

jose.amengual avatar
jose.amengual

I see ok

managedkaos avatar
managedkaos

but still, say the endpoint is exposed and gets DDOS’d. how much of that do we have to pick up in cost of resource use vs AWS?

managedkaos avatar
managedkaos

i may be overthinking but i would rather have the endpoint unresolvable on the outside

jose.amengual avatar
jose.amengual

DDOS? you can set Throttling in that case

managedkaos avatar
managedkaos

indeed. like i said, forgive me for being paranoid

jose.amengual avatar
jose.amengual

but those are good concerns

jose.amengual avatar
jose.amengual

@managedkaos you were creating an Internal API?

jose.amengual avatar
jose.amengual

with version 1 of api gateway?

jose.amengual avatar
jose.amengual

correct?

managedkaos avatar
managedkaos

hey! no, internal api but with version 2. i got it worked out eventually.

managedkaos avatar
managedkaos

for now the endpoint is public. we have a lambda authorizer attached

jose.amengual avatar
jose.amengual

ahhh I c ok

jose.amengual avatar
jose.amengual

I’m now building an internal api with a vpc endpoint and such

jose.amengual avatar
jose.amengual

and now I need to change from an ALB to an NLB since it does not seem to support ALBs

managedkaos avatar
managedkaos

yes. for our service we have an ALB for the web interface and an NLB for API interfaces. i suppose it can get pricey if you have both.

managedkaos avatar
managedkaos

instead of doing the VPC endpoint, i ended up using a VPC link to the NLB. that eliminates the need for the endpoint.

jose.amengual avatar
jose.amengual

but you can’t do a vpc link to an ALB, right?

jose.amengual avatar
jose.amengual

for an internal REST api

managedkaos avatar
managedkaos

hmmm not sure. i didn’t try.

managedkaos avatar
managedkaos

don’t think you can. this doc just specifies HTTP APIs https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vpc-links.html

Working with VPC links for HTTP APIs - Amazon API Gateway

Learn about working with VPC links for HTTP API private integrations.

managedkaos avatar
managedkaos

but using this integration method you might be able to do REST -> ALB https://docs.aws.amazon.com/apigateway/latest/developerguide/setup-http-integrations.html

Set up HTTP integrations in API Gateway - Amazon API Gateway

Learn how to configure HTTP integrations in API Gateway.

jose.amengual avatar
jose.amengual

the last link is what I’m doing right now with a public api

jose.amengual avatar
jose.amengual

in this case it’s for an internal api

2021-04-14

Bart Coddens avatar
Bart Coddens

anyone seen this? the SSM agent is blowing up its own logs

Bart Coddens avatar
Bart Coddens

in /var/log/amazon/ssm

Bart Coddens avatar
Bart Coddens

572M download

RB avatar

Holy cow

RB avatar

Are there errors in there?

Bart Coddens avatar
Bart Coddens

Well we use a sort of logger for root access on this machine, a wrapper on top of sudo: rootsh

Bart Coddens avatar
Bart Coddens

I can’t look into it because it needs space in /var/log

RB avatar

can you use the EC2 serial console to access the server?

Jeff Dyke avatar
Jeff Dyke

Something i hardly ever need, but was confused today about some changes with peered VPCs across regions. $.20 later i had my test and proof case on what i thought was wrong. It’s a nice feature. https://www.reddit.com/r/aws/comments/mr6v4w/have_to_give_a_nod_to_reachability_analyzer/

Have to give a nod to Reachability Analyzer.

I have 99% of my aws VPCs in terraform, but something had changed recently that was stopping packets from us-west-2 to us-east-1. I know the…

2021-04-15

rms1000watt avatar
rms1000watt

Can I get upvotes on this? (just :+1: on the PR comment to help prioritize it for review and merge.) Lol, it’s not my PR, but I want the feature :sob: https://github.com/hashicorp/terraform-provider-aws/pull/18644

Terraform AWS Provider to include trusted_key_groups in cloudfront distributions

r/aws_cloudfront_distribution: Support trusted_key_groups argument by shuheiktgw · Pull Request #18644 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

3
1
Matt Gowie avatar
Matt Gowie

I feel like we need a weekly thread with these…

“Hey everybody, here are the bugs that this community wants upvoted, let’s swarm ’em with :+1:s!”


Matt Gowie avatar
Matt Gowie

There are 3,700 people in this Slack, so I feel like that might be useful.

Alex Jurkiewicz avatar
Alex Jurkiewicz

the contributor’s lament

Marcin Brański avatar
Marcin Brański

Yeah Matt, that’s a good idea. I always thumbs up such issues, even if they don’t touch my current infra.

1
1
vicken avatar

Upvoted. and the thread idea is such a good idea.

This reminded me of a terraform PR I’ve had open for almost 2 years, with tests and documentation; no amount of pings has helped. The infra it was meant for is long gone

1
Matt Gowie avatar
Matt Gowie

@Erik Osterman (Cloud Posse) let’s chat about a weekly thread or announcement post for a list of Terraform or open source issues to upvote

Matt Gowie avatar
Matt Gowie

I’d be happy to get together a simple spreadsheet if you could help with making it a weekly post in #announcements or something.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:16:36 AM

Success

rms1000watt avatar
rms1000watt

w00000!!!!

rms1000watt avatar
rms1000watt

thank you all!!!

Jakub avatar

Hello guys, sorry to bother you, but I have one question regarding the connection between two AWS accounts. I want to have a connection from an EC2 instance on account A to RDS on account B. I have set up AWS PrivateLink between them: I created everything needed, like the endpoint service on account B (service provider), set up some security groups, etc., and on account A (as consumer) I set up the endpoint which makes a connection request in order to access account B. Everything looks great because I can telnet to the specific MySQL port, but I am getting

Host '172.xxxx' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

In the beginning I thought that the problem was with the specific MySQL instance, but when I spun up a new instance on account B in order to check if the connection works directly between the instance and RDS, it works. Do you know what the problem could be?
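(Side note on the error itself: MySQL blocks a source host once max_connect_errors is exceeded, and the block persists until the host cache is flushed. A sketch of the fix the message suggests, with placeholder endpoint and user; raising max_connect_errors in the DB parameter group is the longer-term fix:)

mysqladmin -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p flush-hosts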

2021-04-16

2021-04-17

Leia Renée avatar
Leia Renée

aws-nuke disables the console account password, any idea how to prevent it?

Leia Renée avatar
Leia Renée

Ok. Found the solution. Put it under filters:

IAMLoginProfile:
  - "<user-name>"

also put the following to be able to switch accounts:

IAMRole:
  - "OrganizationAccountAccessRole"
IAMRolePolicyAttachment:
  - "OrganizationAccountAccessRole -> AdministratorAccess"

2021-04-18

Milosb avatar

Hi, after you initially store objects in an S3 bucket - let’s say 20GB of data in March - do you pay anything additional in April if you don’t retrieve any data from that bucket? I can’t see that clearly stated anywhere, but I would say no…

Mazin Ahmed avatar
Mazin Ahmed

I think yes, you also pay storage fees in addition to the data transfer fee

2
1
Milosb avatar

I think you didn’t understand my question

Mazin Ahmed avatar
Mazin Ahmed

I was referring to this one https://aws.amazon.com/s3/pricing/ For storage fees, maybe I didn’t get the question

Amazon S3 Simple Storage Service Pricing - Amazon Web Services

Find detailed information on Free Tier, storage pricing, requests and data retrieval pricing, data transfer and transfer acceleration pricing, and data management features pricing options for all classes of S3 cloud storage.

1
Milosb avatar

If you see data transfer fee from internet to s3 is free… So there is no fee for that . Beside initial PUT requests and storage there is no any transfer from S3 or GET requests…

Milosb avatar

At least in example that I mentioned…

Milosb avatar

My question is simple. After you initially store 20 GB and pay for that at the end of the month, will you pay again next month for that storage if it’s not retrieved at all…

curious deviant avatar
curious deviant

You will have to pay for storage. If there are data transfers, there will be additional cost incurred. Try this to calculate the exact cost that will be incurred: https://calculator.aws/#/

AWS Pricing Calculator

AWS Pricing Calculator lets you explore AWS services, and create an estimate for the cost of your use cases on AWS.

Milosb avatar

Well actually I did

Milosb avatar
Milosb
04:12:57 PM
1
Milosb avatar

Wouldn’t it be cumulative if it is how you are saying?

Milosb avatar

that part isn’t clear to me to be honest

curious deviant avatar
curious deviant

If you could explain your question further, I can try and help answer.

Milosb avatar

lets say you store 20 GB each month

Milosb avatar

in march you will have 20GB. in April you will have 40GB and so on… Do you pay for storage that just sitting there and you don’t access it or you just pay when you initially store objects? Generally that’s what I asked in initial question as well.

curious deviant avatar
curious deviant

Oh I see now. You pay for storage sitting there. You get charged for 20GB in March and 40 GB in April. This is further clarified here: https://aws.amazon.com/s3/pricing/ where they say “You pay for storing objects in your S3 buckets. The rate you're charged depends on your objects' size, how long you stored the objects during the month, and the storage class”

Milosb avatar

well that exact line doesnt seem clear to me

Milosb avatar

that exact line lead me to post question here

Milosb avatar

and in that case the calculator would be misleading, wouldn’t you agree?

Milosb avatar

and basically you would see doubling s3 costs each month

curious deviant avatar
curious deviant

It would be helpful if you could share how you think the calculations are done. Here’s how I interpret this: March: 20 GB × ~$0.023/GB ≈ $0.46; April: 40 GB × ~$0.023/GB ≈ $0.92, assuming another 20 GB of storage was added in April.

Milosb avatar

amount of stored data over one month is same

Milosb avatar

To simplify the question: do you pay for data that is just sitting there without any retrieval?

Milosb avatar

by your example you say yes, is that correct?

loren avatar

Yes, you pay for data just sitting there, even if you don’t access it

Milosb avatar

Thanks again

Milosb avatar

Well calculator is useless in that case

Sean Holmes avatar
Sean Holmes

Go for infrequent access if you rarely retrieve it. Better yet, turn on Intelligent Tiering and let it drift to one of the Archive Access tiers. The total cost generally consists of how much you store + the object retrieval fees for whatever storage class you are looking at

Milosb avatar

yeah, that makes sense. I just wanted to perform some initial calculations based on some inputs, and I ran into the calculator, which is misleading. Thing is that I still don’t know how frequently the data will be accessed and how it will be processed.

Sean Holmes avatar
Sean Holmes

You definitely pay for the data sitting there on AWS servers/SANs as it costs them resources to host. They do give you 5 GB free, “As part of the AWS Free Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5GB of Amazon S3 storage in the S3 Standard storage class; 20,000 GET Requests; 2,000 PUT, COPY, POST, or LIST Requests; and 15GB of Data Transfer Out each month for one year.”

Sean Holmes avatar
Sean Holmes

In your case it sounds like you have even more reason to go with Intelligent Tiering - automatic cost savings for unknown/changing access patterns

Milosb avatar

It’s unknown at this point; it will not be stored before I find out how it will be processed. So fewer worries about choosing the proper tier and data retention/lifecycle.

Milosb avatar

I would just expect that their out-of-the-box calculator would take that into account when giving a 12-month estimate

Sean Holmes avatar
Sean Holmes

Ultimately, it is probably not worth your time to forecast the math that directly. Paying at most $23/TB and maybe just $4

Milosb avatar

What you mean?

Sean Holmes avatar
Sean Holmes

Ya, they probably should have a better calculator; just know that you want it in Standard if you have lots of requests/retrievals

Sean Holmes avatar
Sean Holmes

$.023/GB/month = $23/TB/month

Milosb avatar

it’s not that simple

Sean Holmes avatar
Sean Holmes

ya that’s why their calculator is less than desirable for you; cost is variable based on retrievals during the period

Milosb avatar

we have options where costs dictate design, and the opposite

Milosb avatar

but it isn’t

Milosb avatar

that’s just one parameter

Sean Holmes avatar
Sean Holmes

yep, exactly; the $23/TB/month is like a fixed constant, object retrievals is another parameter, etc.

Milosb avatar

and if you take into account that the upload of the same data could be done in a few requests, or in millions of requests

Milosb avatar

it can be huge difference on bill

Marcin Brański avatar
Marcin Brański

@Milosb you’ve been answered in first reply
I think yes, you also pay storage fees in addition to the data transfer fee
S3 is really cheap storage both for storing and for putting data there. Calculating how much it will cost to store 20GB is not worth the time spent on calculating

There are many nuances in fees though which you should be aware of, for example, watch out for intelligent tiering and IA because for small objects you may be paying a lot more than you can save on it.

Milosb avatar

@Marcin Brański Maybe, maybe not. I got answer on question that I didn’t ask, and that made me think that either I didn’t properly ask or question wasn’t understood well since I didn’t mention data transfer fee at all. Cheap is subjective term. I took 20GB as an example. But thanks guys for clearing it up, it was really helpful.

2021-04-19

barak avatar

Has anyone seen AWS CloudSearch in use in production and know the dos/don’ts around it vs AWS Elasticsearch?

sheldonh avatar
sheldonh

Anyone use a fargate container for remote ssh with visual studio remote ssh plugin? Seems promising to offer a quicker remote dev environment than instances if all i want is containers anyway.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

That’s what GitHub Codespaces or Gitpod do. Cloud9 is still using vanilla EC2s as far as I know.

I really like Gitpod as it’s the fastest option. You can also self-host it, if that matters to you

sheldonh avatar
sheldonh

Yeah, I’m very aware of codespaces, gitpod, and coder. However, I’m not using GitHub at my new role, so Azure DevOps only.

Codespaces also takes forever to build some of the larger images. Was thinking just doing SSH with container would give me the same experience in a way.

Gitpod SSH is really exciting, but again no Azure DevOps support, and SSH is beta/Q1 2021 with no release date announced.

2021-04-20

Mads Hvelplund avatar
Mads Hvelplund

Hi channel.

AWS added support for using Docker images for Lambdas not so long ago, but unlike other uses of Docker (ECS, Batch), Lambdas can’t access cross account ECR repos. This is a pain if you, like me, like to build your artifacts in a tool account and then pull them in the customer facing accounts.

If you have time, please consider upvoting this proposal to add the feature: https://github.com/aws/containers-roadmap/issues/1281 :)

[Lambda] [request]: Cross-region cross-account ECR images · Issue #1281 · aws/containers-roadmap

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

3

2021-04-21

Igor avatar

Does anyone know of an ECS terraform module or a lambda function that handles events like deployment failure/task stopping and sends an SNS/Slack notification?

Joe Niland avatar
Joe Niland
maasglobal/ecs-event-notifications

Contribute to maasglobal/ecs-event-notifications development by creating an account on GitHub.

Igor avatar

Looks pretty good. Thanks!

1

2021-04-22

sheldonh avatar
sheldonh

I have VPC -> IGW -> PublicSubnet(Application Load Balancer) -> PrivateSubnet(ECS Task)

The thing is I’ve got several components in these various ECS tasks that want to talk to other tasks in the private subnet.

Does anyone recommend public load balancer + internal load balancer combination, or another approach?

sheldonh avatar
sheldonh

https://sweetops.slack.com/archives/CCT1E7JJY/p1619127317235400 @managedkaos responding here for future archive/context

Basically I have some ecs tasks with 3-4 containers each. They might need to talk to another task’s container. They also might need to accept traffic from an ALB in the public subnet.

Trying to find the simplest way to ensure they can talk, but ideally not care about which task is running in the case of having multiple instances of the task for scaling.

@sheldonh all you should need to do is update your security groups to allow communication in the private subnets. I usually do that by including the ECS SG as a source in the other resource.

Can you give more detail on this part: “several components in the this various ECS tasks that want to talk to other Tasks in the private subnet”?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I’ve seen people use something like App Mesh, and other non-AWS implementations (Consul, etc.) to achieve such requirements at scale. Not sure how big the environment you’re thinking of here.

managedkaos avatar
managedkaos

yeah i think some sort of service discovery might be needed if you are considering inter-task communication. if all the tasks are in the same service, you might be able to achieve that pretty easily with container names similar to how docker-compose does it..not sure though!

as for the ALB, you can:

  1. put the ALB in public and private subnets. The ALB will be accessible publicly while the tasks will live in the private subnet.
  2. Attach a target group with the tasks added to them using the IP of the task
  3. Make sure the service has an security group that allows traffic from the ALB’s security group so the tasks can receive traffic
sheldonh avatar
sheldonh

is service discovery for tasks or the containers in the task? And is it pretty easy to get going with, or does it add a lot of complexity?

I was hoping for something as easy as a private dns type of entry or elastic ip perhaps

managedkaos avatar
managedkaos

i have not added/used service discovery with ECS so i can’t speak to the complexity. I know its an option though. You can decide how much you want to invest: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html

indeed if your tasks are running they have an IP depending on where they are deployed. i don’t think they will have a DNS entry associated though. It would be up to you to capture their IPs and then figure out how to access them from the other tasks. which is kind of what service discovery does for you.

Service Discovery - Amazon Elastic Container Service

Your Amazon ECS service can optionally be configured to use Amazon ECS Service Discovery. Service discovery uses AWS Cloud Map API actions to manage HTTP and DNS namespaces for your Amazon ECS services. For more information, see What Is AWS Cloud Map?
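(If it helps, the Cloud Map side of ECS service discovery is small; a hedged sketch of creating a private DNS namespace that ECS services can then register into via their service registries setting - the namespace name and VPC ID are placeholders:)

# services registered here become resolvable as <service>.internal.local inside the VPC
aws servicediscovery create-private-dns-namespace \
  --name internal.local \
  --vpc vpc-0123456789abcdef0 \
  --description "ECS service discovery"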

managedkaos avatar
managedkaos
Field Notes: Integrating HTTP APIs with AWS Cloud Map and Amazon ECS Services | Amazon Web Services

This post was cowritten with Preeti Pragya Jha, a senior software developer in Tata Consultancy Services (TCS). Companies are continually looking for ways to optimize cost. This is true of RS Components, a global trading brand of Electrocomponents plc, a global omni-channel provider of industrial and electronic products and solutions. RS Components set out to […]

1
sheldonh avatar
sheldonh

Would you all recommend an external + internal load balancer pattern over service discovery?

I’m learning a bit about discovery and it seems to be what I need if it can handle multiple nodes of the same service. I’m not doing microservices though, and want to make sure that using it in a non-microservices environment is still useful for easing communication between Fargate tasks internally.

Bonus points if anyone has an example in Terraform of setting that up.

managedkaos avatar
managedkaos

@sheldonh all you should need to do is update your security groups to allow communication in the private subnets. I usually do that by including the ECS SG as a source in the other resource.

Can you give more detail on this part: “several components in these various ECS tasks that want to talk to other tasks in the private subnet”?

2021-04-23

2021-04-26

uselessuseofcat avatar
uselessuseofcat

Please, I need help urgently. I’ve run ScoutSuite to scan my AWS account but it hit the API rate limit. Can this somehow break my AWS account services and communication between them? Thanks

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
Request throttling for the Amazon EC2 API - Amazon Elastic Compute Cloud

Amazon EC2 throttles EC2 API requests for each AWS account on a per-Region basis. We do this to help the performance of the service, and to ensure fair usage for all Amazon EC2 customers. Throttling ensures that calls to the Amazon EC2 API do not exceed the maximum allowed API request limits. API calls are subject to the request limits whether they originate from:

uselessuseofcat avatar
uselessuseofcat

Oh, I am good, it’s only limited to:

• A third-party application

• A command line tool

• The Amazon EC2 console

Tim Birkett avatar
Tim Birkett
nccgroup/ScoutSuiteattachment image

Multi-Cloud Security Auditing Tool. Contribute to nccgroup/ScoutSuite development by creating an account on GitHub.

Tim Birkett avatar
Tim Birkett

If you did hit rate limiting during a scan, it won’t have caused a permanent issue. Other applications using the API should also respect rate limiting and retry with a backoff. If you’re running this as an automated job, run it with a reasonable --max-rate.
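e.g., something like this, assuming a recent ScoutSuite install’s scout entrypoint and a made-up profile name:

# --max-rate caps ScoutSuite's API requests per second.
scout aws --profile audit --regions eu-west-1 --max-rate 5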

uselessuseofcat avatar
uselessuseofcat

Thanks guys, I was just worried if I screwed some AWS services with this…

Tim Birkett avatar
Tim Birkett

If you did… It’s too late now and the request “buckets” will have been replenished.

uselessuseofcat avatar
uselessuseofcat

Great, thanks

uselessuseofcat avatar
uselessuseofcat

@Tim Birkett it looks like there are separate buckets for Non-mutating actions and Unfiltered and unpaginated non-mutating actions, so basically with all describe and get actions, so I think that I’m good

sheldonh avatar
sheldonh

I have a single container that will need to do some pass-through traffic in AWS. All my current architecture is ECS Fargate. The communication will be on a ~1,000-port range randomly chosen by the caller. This container would need to take in traffic on this range of ports, managed by another service, and pass it through to do its magic.

I’m not sure if I can do that with containers. Is that possible to do with ECS Fargate or such? If not, it seems I’ll have to have a single EC2 server in the entire setup, which I was hoping to avoid.

jose.amengual avatar
jose.amengual

Very hard to give you any pointers, but the client port usually is not important since it’s on their end, so I do not know why it’s relevant in this case?

sheldonh avatar
sheldonh

One task decides the port and then the other container is supposed to accept a port in a large range. Most containers have specific port mappings, so it’s unclear whether it’s possible to have a container in a task accept a wide range of ports inbound. A container seems limited to 100 port mappings per the docs, but it’s not clear if there’s any alternative.

jose.amengual avatar
jose.amengual

I see ok

jose.amengual avatar
jose.amengual

if you had a proxy layer in front you could do it

jose.amengual avatar
jose.amengual

something like haproxy, envoy

jose.amengual avatar
jose.amengual

you could add it as a sidecar of the Fargate task, or as another service

jose.amengual avatar
jose.amengual

that is a very particular use case

sheldonh avatar
sheldonh

Any docs or examples? This is new to me so anything more specific to help me get the concepts and know where to look would help

jose.amengual avatar
jose.amengual
frontend www.mysite.com
    # a single bind line can cover a whole range of ports
    bind 10.0.0.3:80-32221
    bind 10.0.0.3:443 ssl crt /etc/ssl/certs/mysite.pem
    # send plain HTTP over to HTTPS
    http-request redirect scheme https unless { ssl_fc }
    # path-based routing; api_servers / web_servers are backends defined elsewhere
    use_backend api_servers if { path_beg /api/ }
    default_backend web_servers
jose.amengual avatar
jose.amengual

on Envoy you will need to check the docs and see if it is possible

Joe Niland avatar
Joe Niland

@sheldonh curious as to the reason the client needs to choose a random target port

sheldonh avatar
sheldonh

It’s connectivity service stuff, tunnels etc. Not a normal web service, so it needs to accept a range. It seems like I might need to stick with an EC2 instance for this unless you think a Fargate task can handle a long-running service with a wide (say 1,000 at least) port selection

sheldonh avatar
sheldonh

Let’s assume I do this with an EC2 instance instead to allow easier vertical scaling. 1 server only right now. I’m focused on the minimal “best practice” way to implement this currently, as I don’t have any clustering with this service at this time. Must be a single server for now.

Any best practice recommendations esp regarding:

  • Do you normally put an LB in front of any exposed instance, even if it’s just a single instance and not scaling at the time?
  • Do you put the instance in a private subnet and use an LB to point to it, or default to a public subnet to avoid NAT Gateway load if it’s going to be accepting inbound calls, and just not give it a public IP?
  • Or do you just leave the server directly accessible on the range of ports?
managedkaos avatar
managedkaos

regarding best practices:

• Yes, even if I only have one EC2, I use an ALB for the endpoint. If I need to change the EC2 or scale up, I don’t have to change anything that points to the endpoint

• Yes, EC2 in a private subnet to keep it (more) secure, and the ALB in both so it can route the traffic.

• I avoid leaving the server directly accessible if at all possible.

jose.amengual avatar
jose.amengual

but for your use case you will not be able to use an ALB

jose.amengual avatar
jose.amengual

since the ALB does not have a concept of port ranges

1
sheldonh avatar
sheldonh

That’s what I was trying to figure out. So a port range (not a traditional web server) means this changes a bit. For normal single-port stuff I’d just use an ALB to a private subnet with ECS tasks, I guess. But with port ranges, it will be better to just assign an Elastic IP/Route53 to this, it seems. I’m just newer to the LB side and trying to vet that I’m not missing anything obvious there.

sheldonh avatar
sheldonh

For an EC2 instance with a specific port only, an ALB makes sense to put in front.

sheldonh avatar
sheldonh

Is there any reason to put it in a private subnet if the instance security group doesn’t allow any inbound traffic other than from the ALB?

It adds more NAT load, so just checking if it’s a “go-to” to always try to put instances in a private subnet when they’re not directly accessed externally, even if no public IP is assigned.

jose.amengual avatar
jose.amengual

You need to build the LB layer yourself; that is why I said you will have to use something like HAProxy in front of the instance(s) that will receive the traffic

jose.amengual avatar
jose.amengual

your instances stay in the private subnet

jose.amengual avatar
jose.amengual

the HAProxy will be in both a public and a private subnet

sheldonh avatar
sheldonh

I’m not very familiar with HAProxy. I am thinking it might help in future, but for the “MVP” of getting something up I might just need to simplify this. That’s a lot to figure out with little time.

jose.amengual avatar
jose.amengual

haproxy is very easy to configure to do that

jose.amengual avatar
jose.amengual

maybe 5 lines of configs

jose.amengual avatar
jose.amengual

early ELBs were haproxy (rumour has it)

sheldonh avatar
sheldonh

Easy is relative. Is there a Terraform module that makes this easy? My background isn’t networking, so these are newer concepts to me. I have to accept a range of inbound ports, and the easiest way is to allow inbound on this range.

sheldonh avatar
sheldonh

I don’t use k8s or any of that, just Fargate tasks, so what’s easy to you might be science to me

jose.amengual avatar
jose.amengual

but that is the issue, you need some sort of software that can do that, otherwise your app will have to do it for you

jose.amengual avatar
jose.amengual

so you can use the Cloud Posse EC2 instance module and create user data to install HAProxy, e.g. the sketch below
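A bare-bones sketch of what that user data could look like (assuming Amazon Linux 2, a hypothetical port range, and a single hypothetical backend IP; mode tcp so the whole range is passed straight through, and a server line with no port forwards to whatever port the client hit):

#!/bin/bash
# Hedged sketch: install HAProxy and listen on a hypothetical port range.
yum install -y haproxy
cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend tunnel_in
    bind *:20000-21000
    default_backend tunnel_out

backend tunnel_out
    # no port on the server line = forward to the same port the client connected on
    server app1 10.0.1.10
EOF
systemctl enable --now haproxy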

sheldonh avatar
sheldonh

The service running is designed for connection management. It’s just that I’m trying to figure out whether best practice would be for this to be behind an LB at all, or if I just let it do its work directly.

If installing HAProxy in user data is all it takes then maybe I could figure it out. The docs seemed pretty detailed/complicated.

jose.amengual avatar
jose.amengual

but if you need to listen on a range of ports you cannot use an AWS load balancer

sheldonh avatar
sheldonh

Found this article too. https://medium.com/@onefabulousginger/fully-automated-infrastructure-and-code-deployment-using-ansible-and-terraform-2e318820fe0c This is useful to know about. I’m basically replacing LB usage based on service limitations with my own self-managed LB alternative.

Fully automated infrastructure and code deployment using Ansible and Terraform.attachment image

Ansible and Terraform can be used together to build/deploy any docker images, and create the AWS ECS services that will host it by using…

sheldonh avatar
sheldonh

Is there an absolutely critical reason why I have to have it run behind HAProxy instead of accepting inbound directly, in your mind (given the tool is a connection management tool)?

That’s kinda the piece I think I need to understand. I don’t mind evaluating for improvements, but at stage 1 does this add more complexity than value for a single instance?

jose.amengual avatar
jose.amengual

if your app can listen on a range of ports then you do not need it

jose.amengual avatar
jose.amengual

if your app does not, then you need a layer to do it for you

jose.amengual avatar
jose.amengual

and that layer behaves like a load balancer, but it has the capability to listen on a range of ports and then forward to any type of endpoint (Fargate, EC2 instance, ALB, etc.)

sheldonh avatar
sheldonh

Perfect. For now that’s what I’ll do then. The open source project https://github.com/jpillora/chisel is what I’m using, and it accepts a wide range of ports based on configuration. I think this means for my use case the public subnet should be used directly.

The only “abstraction” I can think of is using Route53 to simplify calling it directly regardless of IP changes.

jpillora/chiselattachment image

A fast TCP/UDP tunnel over HTTP. Contribute to jpillora/chisel development by creating an account on GitHub.

jose.amengual avatar
jose.amengual

if you have a network person in your company I recommend going and talking to them to get a better idea

sheldonh avatar
sheldonh

thanks for the tips!

1
Tomek avatar

is it currently not possible to query S3 objects by their tags? I see mention of using the resource explorer for tags on the bucket level, but I don’t think I’m seeing anything on the object level.

Corey Gale avatar
Corey Gale

Hi all! Just posted a new article on AWS cost reduction: https://corey.tech/aws-cost/

My Comprehensive Guide to AWS Cost Control

By Corey Gale, Senior Engineering Manager @ GumGum

Alex Jurkiewicz avatar
Alex Jurkiewicz

The idea of combining AWS usage data with revenue sources is great, I would love to hear more about that part

My Comprehensive Guide to AWS Cost Control

By Corey Gale, Senior Engineering Manager @ GumGum

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It always amazes me how difficult it is to track costs in AWS. I agree with @Alex Jurkiewicz, tracking usage along with revenue is a good way to look at this. For Cloudrail, we track spend at the account level, per service, etc., but not at a per-customer level.

This started coming up recently - looking at how much a given customer costs us (in Lambda, RDS, etc.) so we can later plot that vs revenue from given customers. Hopefully we’ll be able to share our learnings as well.

Great writeup @Corey Gale!

Corey Gale avatar
Corey Gale

Thanks @Yoni Leitersdorf (Indeni Cloudrail) & @Alex Jurkiewicz, appreciate it! We’ve been ingesting our CUR data into Snowflake via Snowpipe, which provides the ability to join against everything else in our data warehouse. Once in Snowflake, we use a custom-built Looker Explore for insights that incorporate said business data (that’s where our CPM/IHMP metrics are calculated). My #1 tip would be to put some time into designing a tagging system that matches your slicing needs, since all billing tags are included on CUR data points.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Another Corey working on AWS cost control

Alex Jurkiewicz avatar
Alex Jurkiewicz

now there are two of them

1

2021-04-27

Chris Gray avatar
Chris Gray

Anyone know if an EC2 instance refresh is in any way network aware? I need to refresh my ECS cluster’s EC2 instances to use new AMIs and don’t want to cause an outage for any in-progress users

jose.amengual avatar
jose.amengual

can you add more instances and then scale down?

jose.amengual avatar
jose.amengual

you want connection draining basically

Chris Gray avatar
Chris Gray

Yep, I can add, just wondering if instance refresh would do it, not overly attached to the idea

jose.amengual avatar
jose.amengual

if you terminate an instance it will do connection draining

jose.amengual avatar
jose.amengual

instance refresh should do the same

RB avatar

You need a lifecycle hook on your ECS ASG in order to first drain the instances and then terminate them once the running task count has reached 0 per instance (rough sketch below)
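A rough sketch of the ASG side with hypothetical names (the hook only pauses termination; something like a Lambda or SSM automation still has to set the instance to DRAINING and complete the hook once tasks hit 0):

# Hypothetical ASG/cluster names; the container instance ARN is a placeholder.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name ecs-drain \
  --auto-scaling-group-name my-ecs-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 900 \
  --default-result CONTINUE

# What the hook's handler does for the instance being terminated:
aws ecs update-container-instances-state \
  --cluster my-ecs-cluster \
  --container-instances <container-instance-arn> \
  --status DRAINING
# ...then poll runningTasksCount and call complete-lifecycle-action when it reaches 0.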

RB avatar
nitrocode/awesome-aws-lifecycle-hooksattachment image

Awesome aws autoscaling lifecycle hooks for ECS, EKS - nitrocode/awesome-aws-lifecycle-hooks

Sean Turner avatar
Sean Turner

Do custom EventBridge event buses not take aws.* (e.g. aws.ec2) events? I had a bug in my code where I was putting my event rule on the default event bus. When I changed the code and added it to my custom event bus, I stopped getting events

Sean Turner avatar
Sean Turner

Figured it out: they don’t; you need to use the default event bus
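For reference, a minimal sketch (hypothetical rule name): AWS service events (source aws.*) are only delivered to the default bus, while a custom bus only sees events you PutEvents to it yourself.

# No --event-bus-name, so the rule lands on the default bus, where aws.* events are delivered.
aws events put-rule \
  --name ec2-state-change \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}'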

kskewes avatar
kskewes

We just updated EKS from 1.15 to 1.16 in our staging cluster. This is the second cluster to do it, but this one had a problem. All of our NLB target groups except 1 became unhealthy (all targets/EC2 instances) and we ended up recreating the LoadBalancer Services to restore service - with an accompanying DNS change via External DNS.

  1. nginx-ingress health check nodePorts were responding with 200s from the bastion; no change to security groups etc.
  2. EC2 Route Analyzer from the NLB ENI to the EC2 instance ENI where the nginx pod was running was green.
  3. Nothing interesting in status when aws elbv2 describe-target-health ... .
  4. Can see a routine Target Group update CloudTrail event, but all with the correct port and instances. Have created a support ticket but a bit anxious about the prod update.

Anyone have any ideas?

Jillian Rowe avatar
Jillian Rowe

I suppose this is a rancher / aws crossover event.

I’m trying to get the Rancher Quickstart https://github.com/rancher/quickstart to work, but I’m having issues getting the SSL correct. I’d also like to use my own domain name (hosted on AWS).

I need this done and I can pay for anyone’s time if they’re looking for a (hopefully) quick gig. ;-)

rancher/quickstartattachment image

Contribute to rancher/quickstart development by creating an account on GitHub.

Jillian Rowe avatar
Jillian Rowe

If you’re interested in this as a paid gig either ping me here or email me at [email protected]


2021-04-28

Andy avatar

Has anyone tried AWS WAF bot control? Looks like it could be quite expensive, but curious as to how well it works.

AWS WAF - Bot Control Feature | Amazon Web Services (AWS)

Bot Control Feature - AWS WAF - Amazon Web Services (AWS)

Darren Cunningham avatar
Darren Cunningham
Bot Control	$10.00 per month (prorated hourly)
AWS WAF - Bot Control Feature | Amazon Web Services (AWS)

Bot Control Feature - AWS WAF - Amazon Web Services (AWS)

Darren Cunningham avatar
Darren Cunningham

are you seeing something different?

Andy avatar

It starts to add up when you have many millions of requests though :)

Andy avatar

Although it looks like you can use scope-down statements to only apply Bot Control to certain endpoints.

jason einon avatar
jason einon

hey, has anyone got good working examples of deploying and connecting to an EFS from within EKS? I have all the resources deployed and the PVC is connecting to the volume; however, when the pod is trying to connect to the PVC I am getting the following error:

 Warning  FailedMount  0s (x4 over 4s)  kubelet            MountVolume.MountDevice failed for volume "pv-efsdata" : rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock: connect: connection refused
Christian avatar
Christian

In the case of using a bastion to connect to instances on a load-balanced Elastic Beanstalk environment, is there a common way to dynamically name instances so that I can easily login to various instances from the bastion?

# Example: easily connecting to instances from the bastion
ssh api-1
ssh api-2

Also, I would be interested to know how this works in load-balanced environments using the EB CLI , e.g., eb ssh .

1
Christian avatar
Christian

EB CLI handles this by listing the instances when you run the command. (ref https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html)

This is handy to use in the bastion to connect to various instances in the environment — e.g., eb ssh -n 1 . However, this won’t work if the EC2 instances have public IP addresses assigned.

If https://github.com/aws/aws-elastic-beanstalk-cli/issues/3 gets resolved, it might be possible to do this without the constraint on public IP addresses.

See also

https://serverfault.com/questions/824409/aws-elasticbean-beanstalk-eb-ssh-using-private-dns-names

https://stackoverflow.com/questions/39613260/ssh-in-to-eb-instance-launched-in-vpc-with-nat-gateway
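If the ssh-alias approach from the original question is still of interest, a rough sketch for the bastion (hypothetical environment name; assumes the elasticbeanstalk:environment-name tag that Elastic Beanstalk applies to its instances):

# Build ~/.ssh/config aliases (api-1, api-2, ...) from the environment's private IPs.
i=1
for ip in $(aws ec2 describe-instances \
    --filters "Name=tag:elasticbeanstalk:environment-name,Values=my-api-env" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].PrivateIpAddress' --output text); do
  printf 'Host api-%s\n  HostName %s\n  User ec2-user\n\n' "$i" "$ip" >> ~/.ssh/config
  i=$((i+1))
done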

1

2021-04-29

sheldonh avatar
sheldonh

Can someone explain an nginx reverse proxy like I’m 5? Not certain how this fits with load balancers and all…. Maybe more like I’m 18, since 5-year-olds probably don’t know what a proxy is.

Santiago Campuzano avatar
Santiago Campuzano

An nginx reverse proxy is like an HTTP(S) load balancer that sits in front of the users and transparently redirects HTTP traffic to backend servers/services

Santiago Campuzano avatar
Santiago Campuzano

We use nginx at our company to front Tomcat servers, do SSL/TLS termination, static content caching, URL filtering/redirection, etc.

Santiago Campuzano avatar
Santiago Campuzano

So the users connect to the nginx server, and it then forwards the connection to one of the backend servers.

sheldonh avatar
sheldonh

That’s a smart 18-year-old answer. How about an answer like I’m 5? I’m still wrapping my head around SSL/TLS termination in this, so that’s a good reminder.

sheldonh avatar
sheldonh

In one use case it’s in front of a single server, so I’m not quite sure what benefit it offers other than TLS/SSL termination.

Darren Cunningham avatar
Darren Cunningham

proxies like nginx give you the ability to take care of some of the request chain before actually routing traffic to your application server.

benefits:

• CPU resources saved - the server running your application is only doing what it needs to do.

• flexibility - you can add/replace servers behind the proxy seamlessly

sheldonh avatar
sheldonh

Since a load balancer would let me replace a server, that doesn’t apply for this, I think.

Let’s take the CPU part. If the app is a Go app it should be able to handle reasonable concurrency with its HTTP router/gorilla/mux etc. What load would you consider it helps with in a scenario like that? Any simple examples of a reverse proxy in front of a normal web server that might help give me context?

Darren Cunningham avatar
Darren Cunningham
What is a Reverse Proxy vs. Load Balancer? - NGINX

Learn the difference between a reverse proxy vs. load balancer, and how they fit into an web serving and application delivery architecture

cool-doge1
sheldonh avatar
sheldonh

https://stackoverflow.com/questions/17776584/what-are-the-benefits-of-using-nginx-in-front-of-a-webserver-for-go

Nice! Got me started on the right track . Making more sense now. Appreciate it

What are the benefits of using Nginx in front of a webserver for Go?

I am writing some webservices returning JSON data, which have lots of users. What are the benefits of using Nginx in front my server compared to just using the go http server?

sheldonh avatar
sheldonh

Also looks like the Gorilla web toolkit offers much of that functionality, albeit with a bit more work on middleware and all. Good info!

1
Darren Cunningham avatar
Darren Cunningham


“Let’s take the CPU part. If the app is a Go app it should be able to handle reasonable concurrency with it’s http router/gorillamux etc.”
It sounds like you’re only considering valid requests. Something to remember is that when you have an application exposed to the world, it’s going to get crawled and bad actors are going to try to poke holes in it. By having layers (load balancers, reverse proxies, web application firewalls) in the request chain that can filter out erroneous or potentially malicious requests, you’re saving your application servers from having to spend CPU cycles discarding such requests.

1
msharma24 avatar
msharma24

Looking for friendly advice. Customer has 2 AWS Orgs:

• First Org has the legacy Landing Zone setup

• Second Org has the shiny Control Tower setup

I would like to make them one AWS CT Org. I was wondering whether it would just be easier to move the accounts from the legacy Landing Zone Org to the CT Org, or whether I should convert the Landing Zone Org to CT and then move the accounts from the other CT Org to the “new” CT Org?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Sounds like you’ll have a mess if you just move accounts over, no? The legacy accounts will be using legacy configurations, and so your second org will be a “salad” of things.

msharma24 avatar
msharma24

They hired me for this job lol

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Any chance you can move the accounts one by one, but making sure they are cleaned up and meet the shiny org’s structure and settings?

msharma24 avatar
msharma24

The new CT org has only dev workloads and the legacy LZ is using production workloads but is not managed well

msharma24 avatar
msharma24

@Yoni Leitersdorf (Indeni Cloudrail) that’s what I’m getting to, I’m trying to find the least-used AWS account in the Legacy Org and will move it across to the shiny CT Org

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you do this without converting legacy org to use CT?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Any chance all of this is built using IaC?

msharma24 avatar
msharma24

They hire us when shit has hit the fan

msharma24 avatar
msharma24

The Legacy org is all hand jammed

loren avatar

define the configs in terraform, import everything, and use atlantis or TFC
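For the accounts themselves, the import step could look roughly like this (hypothetical resource address and account ID; assumes the account is declared as an aws_organizations_account resource in the new org’s config):

# Run from the management account's Terraform root; both names below are made up.
terraform import aws_organizations_account.legacy_prod 111122223333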

msharma24 avatar
msharma24

I think that will be a lot of work

msharma24 avatar
msharma24

I’m thinking lean, and trying to find a way to first get them into one AWS Org with CT

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It depends on what they want to achieve - a consistent org, or a mess?

loren avatar

CT is all CloudFormation under the covers, right? Does it even support the features needed to import or otherwise move resources between stacks?

loren avatar

and to update the existing resources to match new/different configs in the CT org?

loren avatar

Moving them into one org is not all that hard; it’s a well-defined process. IIRC, you “leave” the org with the root user of the member account to make it a standalone account, send the invite from the new org, and accept the invite from the (now) standalone account.
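In CLI terms that flow is roughly (hypothetical account and handshake IDs; each step runs under different credentials):

# 1. From the member account (root credentials): leave the old org.
aws organizations leave-organization

# 2. From the new org's management account: invite the now-standalone account.
aws organizations invite-account-to-organization \
  --target Id=111122223333,Type=ACCOUNT

# 3. From the member account: accept the invite (ID via list-handshakes-for-account).
aws organizations accept-handshake --handshake-id h-examplehandshakeid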

2
msharma24 avatar
msharma24

@loren just read the AWS KB on that too

loren avatar

but I have no idea how an invited account would be managed by CT…

msharma24 avatar
msharma24

So I think my best option would be to start with the least productive account in the Legacy org and migrate it to the CT Org, maybe in an “unregistered” OU, park the account, then apply the CT guardrails, rinse and repeat

1
loren avatar

in terraform, that’s pretty well defined also. cumbersome, but well defined. hence, my suggestion

1
loren avatar

could you maybe create a new account in the legacy org to test the workflow, instead of relying on the least productive?

2
msharma24 avatar
msharma24

Thanks for the suggestion.

loren avatar

it’s an interesting problem. i’d love to hear more on how it works out for you. this type of workflow was one of our main considerations for why we’ve avoided both LZ and CT (and CFN), and put all our focus on terraform… but if CT/CFN can actually do it……

msharma24 avatar
msharma24

Yes definitely

msharma24 avatar
msharma24

It’s a fun problem.
