#aws (2023-01)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2023-01-01
2023-01-03
![Adrian Rodzik avatar](https://avatars.slack-edge.com/2022-12-22/4557345888660_892a53a8f9354921fe5f_72.png)
Hello everyone, I have a default tag policy applied to my AWS organization. I’ve added some new tags to this policy and reattached it, but it seems that it’s not applied. For example, I want to add those new tags to my EC2 instances, but the new tags are not available for them. The already existing tags are in place. Any idea what I’m doing wrong? Thanks in advance!
![Alan Kis avatar](https://avatars.slack-edge.com/2022-01-18/2990712824480_b05e9d1be3ba5ead9f1b_72.jpg)
A tag policy just enforces specific tag compliance; it doesn’t create or update existing tags. Maybe I misunderstood the question.
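For what it’s worth, a quick way to sanity-check what the organization is actually attaching and enforcing for a member account (the target ID is a placeholder):

```sh
# Confirm the updated policy is attached to the account (or its OU/root)
aws organizations list-policies-for-target \
  --target-id 111122223333 \
  --filter TAG_POLICY

# Show the effective (merged) tag policy the account actually sees
aws organizations describe-effective-policy \
  --policy-type TAG_POLICY \
  --target-id 111122223333
```

Note that even a correct tag policy only reports non-compliance; it won’t make new tag keys appear on existing EC2 instances by itself.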
2023-01-04
![Ashwin Jacob avatar](https://avatars.slack-edge.com/2022-08-24/3985413844931_091f259d03ab6fa2aafb_72.jpg)
Hello Everyone!
I am creating 2 VPCs: Dev and Production. They each have their own CIDR range and are in us-east-1. I am using Tailscale to connect to the private instances. I am trying to figure out how to work on Step 3, where I need to add AWS DNS to my tailnet. I got it working in DEV perfectly. As I work in production, I am realizing that there is a conflict in the search domain (on Tailscale): both search domains are us-east-1.compute.internal. How do I separate DEV and PROD even though they are in the same region?
2023-01-05
![bradym avatar](https://avatars.slack-edge.com/2023-06-21/5464816405572_dd21bed1bf537acb6539_72.jpg)
I created a new ed25519 key pair in the AWS console and I’ve got the .pem file, but I can’t figure out how to get the public key from it. My googling tells me that `openssl pkey -in private.pem -pubout` should do it, but instead I get `Could not read key from private.pem`. Anyone know the correct incantation to get the public key?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Why do you need the public key?
![bradym avatar](https://avatars.slack-edge.com/2023-06-21/5464816405572_dd21bed1bf537acb6539_72.jpg)
Want to rotate ssh keys without spinning up a new instance, but also want it in aws so we can use it when we do spin up new instances.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
You can describe the public keys that are stored in Amazon EC2. You can also retrieve the public key material and identify the public key that was specified at launch.
![bradym avatar](https://avatars.slack-edge.com/2023-06-21/5464816405572_dd21bed1bf537acb6539_72.jpg)
Thanks. I looked at `describe-key-pairs`, but the `--include-public-key` flag didn’t exist in the version of the AWS CLI I had installed; updated to the latest version and it works.
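For anyone landing here later, a minimal sketch of both approaches, assuming a key pair named `my-key` and a local `private.pem` (names are illustrative):

```sh
# Derive the public key locally from the OpenSSH-format .pem the console hands out
ssh-keygen -y -f private.pem > my-key.pub

# Or ask EC2 for the stored public key material (requires a recent AWS CLI)
aws ec2 describe-key-pairs \
  --key-names my-key \
  --include-public-key \
  --query 'KeyPairs[0].PublicKey' \
  --output text
```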
2023-01-09
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Hello everyone, I need to write a Lambda which should log in to the instance, execute `service nginx status`, and post the result to a Gmail address.
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
My suggestion will be to put the instance credentials in a secret, write a small Python Lambda function to log in, and execute the command. The results can be put on an SNS topic and sent to your email that way.
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Thanks for your suggestion. Since we are using a jumpbox to connect to instances, is it possible to connect via SSM?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Huh, why run a Lambda for this? Just use cron on the machine and call SES with the results.
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Thanks @Warren Parad
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
@Warren Parad basically we are doing automation… we are not supposed to log in to the machine
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
why would you need to log onto the machine?
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
We are planning to do the maintenance activity automatically. Before that, we need to check which services are running inside the server and post the service status to the email, and once the activity is completed we need to check the services running on that server again and post the status to the mail.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
so cron will solve that, right?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
IMO it’s good to avoid needing to install services/scripts directly onto a host, so this is why SSM is better than cron. cron is easier, but sometimes that’s not the best solution.
![Evanglist avatar](https://secure.gravatar.com/avatar/fbe86f814b72354136656fe3a0dcae17.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
The best possible solution is using Systems Manager Run Command. Get the status in the Lambda and conditionally write logic based on the status.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
oh and publish the result to SNS — you can then distribute to email or whatever else might need it in the future
![kirupakaran1799 avatar](https://secure.gravatar.com/avatar/2a361812b9af3f2bdc63083852dbf7ab.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Is there any possible way to do this automation?
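A rough sketch of the suggested flow from the CLI, assuming SSM Agent is running on the instance and an SNS topic with an email subscription already exists (instance ID, topic ARN, and region are placeholders); the same calls map directly onto boto3 if you want to keep it inside a Lambda:

```sh
# Run the status check on the instance via SSM Run Command (no SSH/jumpbox needed)
CMD_ID=$(aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters 'commands=["service nginx status"]' \
  --query 'Command.CommandId' --output text)

# Simplistic wait, then fetch the command output
sleep 10
OUTPUT=$(aws ssm get-command-invocation \
  --command-id "$CMD_ID" \
  --instance-id i-0123456789abcdef0 \
  --query 'StandardOutputContent' --output text)

# Publish the result to SNS, which fans out to the email subscription
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:111122223333:service-status \
  --subject "nginx status" \
  --message "$OUTPUT"
```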
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Hi all,
I am working on a personal project. Without writing a long message: I am going to have 2 Lambdas (one for GET and one for POST, each attached to one API GW). I am throttling via a usage plan on the API GW and the use of keys (not for auth, just for the usage plan). The Lambdas will be hitting an RDS.
I am deploying these with SAM.
Would you have any advice for me that I should consider?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
you asking for advice?
• You have two lambdas => Have only one lambda
• You are using APIGW REST API => Use HTTP API instead
• You are using RDS => use DynamoDB instead
• you are using SAM => Use CFN Templates directly
![David avatar](https://avatars.slack-edge.com/2023-01-03/4602192766369_d1587f12ba15a36a2762_72.jpg)
that’s basically all the advice needed rolled in one! @Warren Parad
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/OTHER; not sure if this is scalable code-wise.
HTTP API: cool, I will check that!
Dynamo: yep, I’ve been told that again, I need to check it. RDS is definitely over-provisioned for this.
Hm… I started with SAM, then moved it to CFN; it was a lot of code, but more control. Then I moved back to SAM, mainly because I couldn’t be bothered to build my Python code into a zip file, as SAM does it for you (I could also be wrong).
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/OTHER; not sure if this is scalable code-wise.
definitely scalable and faster. If you are using python, you can use Chalice to do this
2023-01-10
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
is this channel pretty dead?
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Yes, as @Soren Jensen said - it’s very much alive
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Has anyone here implemented/used drift detection for IaC (Terraform-based)? What was the user flow? Did it work well, and if not, why? Was auto-remediation a thing?
2023-01-11
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I could use some help here.
I have the following Lambda. It sends data to an RDS, so I need it to be in the same VPC and subnet as the RDS (or does it?).
I also need to get a secret from Secrets Manager, so I have attached a policy.
When I add the VpcConfig, I can’t get the secret anymore.
Any thoughts? Because I am very confused.
```yaml
  HelloWorldFunctionGET:
    Type: AWS::Serverless::Function # More info about Function Resource: <https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction>
    Properties:
      FunctionName: HelloWorldFunctionGET
      CodeUri: rest-listener/
      Handler: app.lambda_handler
      Runtime: python3.9
      VpcConfig:
        SecurityGroupIds:
          - sg-XXXXX
        SubnetIds:
          - subnet-XXXX
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
            Auth:
              ApiKeyRequired: true

  IAMP2L87H:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: "IAMP2L87H"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "secretsmanager:GetSecretValue"
            Resource: "arn:aws:secretsmanager:XXXXXXXXXXX"
      Roles:
        - !Ref "HelloWorldFunctionGETRole"
```
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
If your lambda is in a VPC you need to add a VPC endpoint for the Secrets Manager to access it in the VPC. For RDS, you need RDS proxy, so no, it doesn’t need to be in the same subnet
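For reference, a minimal sketch of creating that interface endpoint for Secrets Manager, assuming the same VPC, subnet, and security group as the Lambda (IDs and region are placeholders):

```sh
# Create an interface VPC endpoint so the VPC-attached Lambda can reach
# Secrets Manager without internet access (NAT/IGW)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.secretsmanager \
  --subnet-ids subnet-XXXX \
  --security-group-ids sg-XXXXX \
  --private-dns-enabled
```

The endpoint’s security group needs to allow inbound HTTPS (443) from the Lambda’s subnet, as noted further down in the thread.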
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you don’t have to use a VPC endpoint; sure, it’s an ideal security practice, but you can do without it. Be cautious about VPCEs, as they can add up quickly depending on the number of VPCs & subnets.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
and you don’t have to use RDS Proxy either; again, probably ideal from a security standpoint, and there are scaling benefits… but it costs more
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
It’s VPCE or NAT
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
are you suggesting NAT?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
that’s way more expensive
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you’re assuming a private subnet; I’m assuming OP is using the default VPC with an IGW, so some clarification is required here.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
and NAT can be cheaper, all depends on VPC design but that’s another can of worms
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I suppose all I am trying to do is write to RDS
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
something you’ll learn about IaaS is that there’s rarely one way to solve your riddle and answers vary wildly depending on factors such as: costs & security requirements
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Also, I’m kind of a noob in some areas, so please bear with me!
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
are you doing this while trying to stay within free tier limits?
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
@Darren Cunningham yes please !! At least at this stage
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
tbh, networking is not my strong suit
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
so I am not particularly familiar or experienced with VPCs and subnets
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you gotta be careful with AWS then, you can find yourself easily racking up bills you weren’t ready for.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
so @Warren Parad or others, am I right to believe that I need to add a Secrets Manager VPC endpoint to the same VPC I am referencing in my Lambda?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
VPC Endpoints are dedicated private connections that allow you to directly connect to a service (EC2, SSM, etc…they’re all APIs) endpoint without having to leave your VPC. but they cost money per instance.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Also, I can’t find any info on the cost of adding a VPC endpoint
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I see @Darren Cunningham
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
So I managed to get it to work with the VPC endpoint. It kind of makes sense.
Regarding the cost: is it a couple of $/£ per month, or are we talking real money?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Pricing per VPC endpoint per AZ ($/hour): $0.01
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
literally just seen it
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
that’s pretty crappy! That would add another $7
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
TBH, $7 in the AWS world isn’t much, but I get that you’re trying to do this on the free tier to learn, so yeah, it can add up quickly.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
is there an alternative to access RDS without adding the Lambda to a VPC, etc.? Because that way I wouldn’t need a VPCe for Secrets Manager
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
TBH, $7 in the AWS world isn’t much, but I get that you’re trying to do this on the free tier to learn, so yeah, it can add up quickly.
I have a small budget, but ideally I get to do it at the smallest cost, especially if there is an alternative.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I also need to move off RDS to Dynamo, so that might be the alternative
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
smallest cost would be to go with DynamoDB, you can let your Lambda run without a VPC attachment
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
totally acceptable for the purposes of learning, at scale & when data security is a concern you’d VPC attach your lambda and setup a VPCE to DynamoDB, but you can worry about all that later
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
yep, I have been told that again. I suppose if I drop the $7/month for the VPCe and also the $15-25/month for RDS, it will all be free.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I didn’t think of it, and now I have written all of my Python code for RDS… luckily it’s only one method I need to change, the DB_Data_input/output.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
again, be careful with DynamoDB, because if you write a bad query you can blow up your bill
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
btw, for context, I am making a VS Code extension that will allow you to “DM” a paste to another user instead of copying and pasting to Slack, etc.
I’m doing it for fun so I get to learn something, though I feel it is useful!
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
again, be careful with DynamoDB, because if you write a bad query you can blow up your bill
What do you mean?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
it’s not super easy to do, so it’s not a field of mines, but if you have a larger data set and you’re querying it inefficiently all the time, it will add up quickly
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
more so if you have non-performant writes
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I see. I think I should be OK, as I am only putting 3 values and retrieving 2 values based on the ID.
It should be a simple table, 3xY (Y being the number of entries).
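For that access pattern, a minimal sketch of what the table usage might look like, assuming a hypothetical table named `pastes` keyed on `id` (all names and values are illustrative):

```sh
# Create a simple on-demand table keyed on the paste ID
aws dynamodb create-table \
  --table-name pastes \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Write the values for one paste
aws dynamodb put-item \
  --table-name pastes \
  --item '{"id": {"S": "abc123"}, "recipient": {"S": "someuser"}, "body": {"S": "print(\"hello\")"}}'

# Read them back by ID
aws dynamodb get-item \
  --table-name pastes \
  --key '{"id": {"S": "abc123"}}'
```

On-demand (PAY_PER_REQUEST) billing means you only pay per request, which stays tiny at this scale, and the Lambda needs no VPC attachment to reach DynamoDB.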
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
I wasn’t assuming a private subnet. Lambda can’t reach the internet in a public subnet without a NAT, right? Or did something change?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
• IGW — no costs — public subnets
• NAT — costs (can be reduced by using a NAT instance) — private subnets
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
So, bottom line!
I learned about VPCe today. I am writing code to store/retrieve data from Dynamo, let’s see! (I might be asking more about it, as I am more familiar with SQL DBs.) It also seems way easier and in principle cheaper. Keeping the RDS code, though.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
btw, @Darren Cunningham and @Warren Parad, your help has been great, I appreciate that. I hope I can return the favor.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
- can’t answer about RDS since there aren’t any details about RDS VPC configuration
- check that your security group & subnet NACL allow for HTTPS — if you’re using a VPC Endpoint for SSM then check that the SG allows for inbound from your subnet CIDR
2023-01-12
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Hi folks,
If I have an account A and an account B, and account A has a Route 53 zone for example.com, how can I create Route 53 entries in account B that will be forwarded to account A?
(I am confused about where I need to put which nameservers.)
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
what do you mean by create entries, do you mean create “DNS records”?
![Sam avatar](https://secure.gravatar.com/avatar/7f7f5d75c3ec0ae933ea33f1b2c3737d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
You need to create a new zone (test.example.com) in account B, copy the NS servers from that zone, then create an NS record in account A and paste in the copied NS servers from account B.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
maybe? We are just missing way too many pieces of information to suggest what the right solution is
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
funny enough, I managed to do it:
create a zone in account B, then use its NS servers to create an NS record in account A for the domain/subdomain of B
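A minimal CLI sketch of that delegation, assuming a subdomain test.example.com and placeholder zone IDs; the ns-*.awsdns-* values stand in for whatever the get-hosted-zone call actually returns (run the first two commands with account B credentials, the last with account A credentials):

```sh
# In account B: create the hosted zone for the subdomain and list its nameservers
aws route53 create-hosted-zone \
  --name test.example.com \
  --caller-reference "delegation-$(date +%s)"

aws route53 get-hosted-zone --id ZBBBBBBBBBBBBB \
  --query 'DelegationSet.NameServers'

# In account A: add an NS record for the subdomain pointing at account B's nameservers
aws route53 change-resource-record-sets \
  --hosted-zone-id ZAAAAAAAAAAAAA \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "test.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1.awsdns-01.org"},
          {"Value": "ns-2.awsdns-02.com"},
          {"Value": "ns-3.awsdns-03.net"},
          {"Value": "ns-4.awsdns-04.co.uk"}
        ]
      }
    }]
  }'
```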
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
I am using Aurora Serverless v2 with 2 instances (multi-AZ, writer/reader). Only the writer is effectively used by an application; the reader is pretty much useless at the moment except for failover.
Yet, even though it’s only doing replication, it is using pretty much the same amount of ACUs as the writer.
I would expect it to use much less and so save some money when not used. Anybody using multi-AZ Aurora Serverless v2? Is this behaviour normal? Is there any way to change it?
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Hi SweetOps,
I have created some resources from the console, and when I use CF to manage them it of course complains that those resources already exist. Is there a way to either force the creation or import that state into CF?
![Evanglist avatar](https://secure.gravatar.com/avatar/fbe86f814b72354136656fe3a0dcae17.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
• https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
• https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
Bring existing resources into a new or existing stack to manage them using AWS CloudFormation.
Import an existing AWS resource into a stack to bring it into CloudFormation.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I suppose this won’t be applicable with SAM?
2023-01-14
2023-01-16
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
I want to do something in a Lambda when a DB cluster becomes available. I cannot find an event that says “DB cluster available”, but there is one that says “DB cluster created” (RDS-EVENT-0170). Would this event firing also mean that the cluster is available?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
What does “available” mean to you?
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
e.g. ready for instances to be added
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
have you thought of using Step Functions to check, using the AWS SDK (boto3, etc.)?
I am not quite sure what your objective is, however. Mind giving us a bit more of the user story?
![Evanglist avatar](https://secure.gravatar.com/avatar/fbe86f814b72354136656fe3a0dcae17.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
I think this might help : https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-cloud-watch-events.html
Learn how to write rules to send Amazon RDS events to targets such as CloudWatch Events and Amazon EventBridge.
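Building on that doc, a rough sketch of an EventBridge rule that fires a Lambda on the cluster-created event; the detail fields in the pattern (and whether RDS-EVENT-0170 really means “ready for instances”) should be verified against the RDS event reference, so treat this as an assumption:

```sh
# Match RDS cluster events carrying the "DB cluster created" event ID
aws events put-rule \
  --name rds-cluster-created \
  --event-pattern '{
    "source": ["aws.rds"],
    "detail-type": ["RDS DB Cluster Event"],
    "detail": {"EventID": ["RDS-EVENT-0170"]}
  }'

# Point the rule at the Lambda (the function must also grant events.amazonaws.com
# permission to invoke it, e.g. via aws lambda add-permission)
aws events put-targets \
  --rule rds-cluster-created \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:111122223333:function:on-cluster-created'
```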
2023-01-17
![Yonatan Koren avatar](https://avatars.slack-edge.com/2023-01-08/4612627141524_cae57b3715b3fb292bd1_72.jpg)
Hey all,
What is the best way to do a sweeping destroy, or “nuke” a bunch of AWS resource that all consistently have one tag in particular? For example Environment: Foo
.
Think when you have some resources that were all spun up by Terraform some time ago and all have this consistent tag, but the Terraform config is so foobar’d that you cannot run a terraform destroy
.
I thought aws-nuke would be the absolute perfect candidate for this, but when trying to write an aws-nuke config that targets this tag across all resources, I ran into this issue, which shows that you have to know every resource type beforehand and write a filter for that resource (that filters for Environment: Foo
).
So my best bet is to write a bash script that iterates over aws-nuke resource-types
and spits out a YAML list item with that filter, and then shove that massive config into aws-nuke
.
Or maybe someone knows of a different tool that can fulfill this use case?
Can I add something similar to this request?
A per-account filter, that marks all resources with a certain tag to exclude.
Basically instead of filtering tagged resources per resource-type like this:

```yaml
filters:
  LambdaFunction:
    - property: "tag:ManagedResource"
      value: "True"
  S3Bucket:
    - property: "tag:ManagedResource"
      value: "True"
  CloudFormationStack:
    - property: "tag:ManagedResource"
      value: "True"
```

I want to be able to do this:

```yaml
filters:
  AllResources:
    - property: "tag:ManagedResource"
      value: "True"
```
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
have you looked at cloud-nuke?
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it
![Yonatan Koren avatar](https://avatars.slack-edge.com/2023-01-08/4612627141524_cae57b3715b3fb292bd1_72.jpg)
Yeah, I was aware of it from before, but I think the syntax is very similar: https://github.com/gruntwork-io/cloud-nuke#example
Maybe it’s not really a limitation but rather a need to be very explicit in the config file… and creating a super big config file semi-automatically might be the best solution still…
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
seems like a nice use case. might be worth contributing the feature to one of the mentioned tools…
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@matt
![Yonatan Koren avatar](https://avatars.slack-edge.com/2023-01-08/4612627141524_cae57b3715b3fb292bd1_72.jpg)
I ended up doing this (posted in the issue linked above, for anyone bumping into it) to build the aws-nuke config:
```sh
#!/bin/sh

cat <<-EOT
regions:
- us-east-1
- global

account-blocklist:
- "123456789012" # production

resource-types:
  excludes:
  # The following resources cannot be filtered by tags (aws-nuke error: "does not support custom properties")
  - IAMSAMLProvider
  - ECRRepository
  - ElasticacheSubnetGroup
  - CloudWatchEventsRule
  - SQSQueue
  - ElasticacheCacheCluster
  - ElasticacheReplicationGroup
  - NeptuneSnapshot
  - NeptuneInstance
  - NeptuneCluster
  - LifecycleHook
  - CloudWatchEventsTarget
  - MQBroker
  # The following resources are unavailable due to deprecated APIs or other issues:
  - FMSPolicy
  - MachineLearningMLModel
  - FMSNotificationChannel
  - MachineLearningBranchPrediction
  - MachineLearningEvaluation
  - MachineLearningDataSource

accounts:
  "000000000000": # the account in question
    filters:
EOT

for resource_type in $(aws-nuke resource-types); do
cat <<-EOT
      $resource_type:
      - property: "tag:Environment"
        value: "foo"
        invert: true
EOT
done
```
Mind you, because of the inability to filter some resources based on tags, the following resource types would be left alone and not nuked:
- IAMSAMLProvider
- ECRRepository
- ElasticacheSubnetGroup
- CloudWatchEventsRule
- SQSQueue
- ElasticacheCacheCluster
- ElasticacheReplicationGroup
- NeptuneSnapshot
- NeptuneInstance
- NeptuneCluster
- LifecycleHook
- CloudWatchEventsTarget
- MQBroker
Probably the ones of interest to delete (those that would accrue costs or are more likely to be tied to an environment you want to delete) are:
- ECRRepository
- ElasticacheSubnetGroup
- ElasticacheCacheCluster
- ElasticacheReplicationGroup
- MQBroker
- SQSQueue
2023-01-18
![Shreyank Sharma avatar](https://avatars.slack-edge.com/2020-10-21/1438500514694_ba31ccb589c56a529289_72.jpg)
Hello all. We are using AWS Lambdas, and some of them run every 5 minutes and generate a lot of CloudWatch logs. If I go to that Lambda’s log group and click on any log stream generated in the last 2 days, it takes time but loads all the logs. If I go more than 2 days back, like 3 or 4 days, the CloudWatch logs for that Lambda load for some time and then just show empty. But if I filter for some word like “bill”, it shows the logs which contain that word. So old logs will not show by default, but if I put in a filter they will. Has anyone faced this issue? Would it help if I cleared old logs? Right now retention is configured to keep logs forever. Thank you.
![tyler avatar](https://secure.gravatar.com/avatar/34055b4e9181368303a13b9936f64543.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0001-72.png)
I think leveraging CloudWatch Logs Insights, or aggregating the logs into another service, would make for faster search times. Streams have always been cumbersome to search in for me.
![Shreyank Sharma avatar](https://avatars.slack-edge.com/2020-10-21/1438500514694_ba31ccb589c56a529289_72.jpg)
Thanks @tyler I will check on that
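For reference, a rough sketch of the Logs Insights route from the CLI; the log group name, filter term, and time range are placeholders:

```sh
# Kick off a Logs Insights query over the Lambda's log group (last 7 days)
QUERY_ID=$(aws logs start-query \
  --log-group-name /aws/lambda/my-function \
  --start-time $(( $(date +%s) - 604800 )) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /bill/ | sort @timestamp desc | limit 100' \
  --query 'queryId' --output text)

# Poll for the results once the query finishes
aws logs get-query-results --query-id "$QUERY_ID"
```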
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
sounds like a bug that should be raised to AWS CloudWatch Support
![Shreyank Sharma avatar](https://avatars.slack-edge.com/2020-10-21/1438500514694_ba31ccb589c56a529289_72.jpg)
thanks @Darren Cunningham, but it works for other Lambdas which do not generate a lot of logs
2023-01-19
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hi, I have one ECS container running cron jobs inside. Would CodeDeploy wait for all the processes to finish during a blue/green deployment before deleting the container?
![Harry avatar](https://secure.gravatar.com/avatar/0cd5f2112ac91e49296b221b7adb58f3.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
I believe it’ll send a shutdown signal to the init process, which will cascade to running processes internally and then terminate after a timeout. If you’re running cron jobs that take a long time consider rewriting them to enqueue jobs on an external queue, so if a deploy lands during the execution and it terminates a worker while it’s running the next worker will be able to pick the next job up off the queue and finish the batch.
![Mike Robinson avatar](https://avatars.slack-edge.com/2021-03-03/1803370570759_7ac8b0706600a85aef5c_72.jpg)
I’m not sure how Codedeploy would handle this, but alternatively, use Eventbridge scheduled rules to launch individual ECS tasks for each job. This makes it so that if an error occurs in a job that triggers a shutdown (ie. out-of-memory), it won’t affect any other running job.
2023-01-20
2023-01-24
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
hi all,
I have a Lambda + API GW that does some DB work. The API GW resolves on api.domain.com/get-data or api.domain.com/put-data.
I would like to have the notion of API versioning, for example api.domain.com/v1/get-data, etc.
Any ideas how I can do that?
![Sono Chhibber avatar](https://avatars.slack-edge.com/2022-10-03/4190228999152_7aca55c9ead6576723c2_72.jpg)
Short of it: you use stages.
V1 or V2 is just a label; you can have “dev”, “prod”, “v1”, “v10”, etc.
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-stages.html
Learn about HTTP API stages.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
When I use a custom domain, however, I have to map the stage to this domain, and then the stage path disappears.
Unless I am getting it wrong :/
![tsoe77 avatar](https://avatars.slack-edge.com/2022-09-29/4177334359776_e7af7744fd253574fd0f_72.png)
you can just put v1 in the path field and access the custom domain with the v1 path appended: <https://api.domain.com/v1/xxx>, where xxx is the actual API resource you are mapping to, considering you already have your API resources created as /get-data, /put-data, etc.
![tsoe77 avatar](https://avatars.slack-edge.com/2022-09-29/4177334359776_e7af7744fd253574fd0f_72.png)
if you want to follow API best practices, use a noun like /data for the URL and distinguish operations by HTTP method (GET, PUT, etc.): https://swagger.io/resources/articles/best-practices-in-api-design/
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
I suppose I will need a different stage per path as well, which would probably trigger a different Lambda?
![tsoe77 avatar](https://avatars.slack-edge.com/2022-09-29/4177334359776_e7af7744fd253574fd0f_72.png)
yea, you can have another custom domain for dev and map it to the dev stage, e.g. [api.dev.example.com/v1](http://api.dev.example.com/v1) to the dev stage and [api.example.com/v1](http://api.example.com/v1) to the prod stage
![tsoe77 avatar](https://avatars.slack-edge.com/2022-09-29/4177334359776_e7af7744fd253574fd0f_72.png)
if the domain is too much, you can do path-based mapping like [api.example.com/dev/v1](http://api.example.com/dev/v1) to the dev stage and [api.example.com/prod/v1](http://api.example.com/prod/v1) to the prod stage.
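A minimal sketch of wiring that up for a REST API custom domain, with placeholder domain, API ID, and stage names:

```sh
# Map https://api.example.com/v1/... to the "prod" stage of the REST API
aws apigateway create-base-path-mapping \
  --domain-name api.example.com \
  --rest-api-id abc123defg \
  --stage prod \
  --base-path v1

# Later, map a v2 base path to a newer stage without touching v1 clients
aws apigateway create-base-path-mapping \
  --domain-name api.example.com \
  --rest-api-id abc123defg \
  --stage v2 \
  --base-path v2
```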
2023-01-25
2023-01-26
![Martin Helfert avatar](https://secure.gravatar.com/avatar/7b3863beec0b1c37d497bbad61fdef84.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Hey. Would you recommend paid AWS support plans? We currently have a Business support plan for our prod account, which in fact we haven’t used for the last two years. Leave it like that “just in case”, or switch it on/off whenever we need extended support? How do you handle this? The costs add up to a fairly large amount.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I recommend them, but my team uses it fairly frequently. While it might not always net results, we use support as a sounding board to validate our implementation, or if we’re about to start work on a new feature we’ll throw out a support request for input on how we should solve for x.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
but if you have an established environment that doesn’t change very often and your business wants to take on the risk of a longer response time during events to save money, it’s not inherently wrong.
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
We got a setup with ~20 accounts. I have Business support enabled in 6 of them, planning to cut it down to 4. If I need help with an issue, I either replicate it in an account with support enabled, or I enable it in the affected account. They don’t seem to give any less help if you have just enabled support. So turn it off, save the money, and enable it when needed.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
oh yeah our org is ~ 50 accounts and we only have it enabled in 2 accounts. good call on enabling when needed.
![Martin Helfert avatar](https://secure.gravatar.com/avatar/7b3863beec0b1c37d497bbad61fdef84.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
sorry for the late reply. Thanks for your answers, that already makes the decision much clearer
2023-01-27
![fotag avatar](https://secure.gravatar.com/avatar/1f2ed855ad529031eed484898c52b68f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Hello :wave: !
Some questions related to an AWS Aurora (PostgreSQL-compatible) Serverless v2 cluster with multi-AZ (created via Terraform with count=2 on aws_rds_cluster_instance, since no multi-AZ config is available via TF for Serverless v2), in case anyone knows:
• Does this cluster use all the instances created there (writer and readers) via the cluster endpoint, or are the readers on standby until they are needed?
• Do we have to specify allocated storage and storage type?
![Steven Miller avatar](https://avatars.slack-edge.com/2022-12-20/4557587791393_3f9bd0abc4fd39227fae_72.jpg)
Is anyone using multi-architecture EKS + self-hosted GitHub runners for multi-architecture builds? The point would be to allow something like this:
runs-on: ubuntu-20.04-arm64-large
in GitHub workflows. Is that overly complicated, or even a valid concept? Is there some easier option we haven’t considered for self-hosted multi-architecture builds? Or maybe we just go with GitHub-hosted runners. Is it already built in, like Karpenter can deploy ARM nodes based on the target architecture of the pods, or something like that?
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
Anyone know if you can subscribe to notifications when new AMIs are released by AWS? AWS Inspector is identifying an issue in the latest image; it would be nice to know when to redeploy.
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
You can hook it to EventBridge and the rest of the magic you can do yourself.
![attachment image](https://d2908q01vomqb2.cloudfront.net/827bfc458708f0b442009c9c9836f7e4b65557fb/2020/06/03/Blog-Post_thumbnail.png)
This post is courtesy of Arend Castelein, Software Development Engineer – AWS Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has […]
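Following that post, a quick sketch of the Parameter Store side; the parameter path shown is the public Amazon Linux 2 one, so swap in whichever AMI family you actually use:

```sh
# Resolve the latest Amazon Linux 2 AMI ID for the current region
aws ssm get-parameter \
  --name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --query 'Parameter.Value' \
  --output text
```

An EventBridge rule, or a small scheduled job comparing this value to the AMI you last deployed, can then drive the redeploy notification.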
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
Nice one! Thanks a million
2023-01-28
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Hi all,
I have a standard API->Lambda->Dynamo app.
Currently I have the API on aggressive throttling and quota to minimise usage and avoid denial-of-wallet. Would you have any other advice on how to go about it without using expensive resources like WAF? (It’s only an open-source VS Code extension I am building.)
Thanks in advance
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
WAF is $8/month, that’s pretty cheap
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Hm. Might check that.
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
It’s also a free VSCode extension. So I try to keep cost low.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Can I recommend storing the user data in their own GitHub or GitLab account then?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
then you don’t need to worry about the DDB cost either
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
dump the WAF
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
and use APIGW HTTP API => Lambda
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
hm… The whole idea is to have something like pastebin, but in vscode, and way easier
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
open to thoughts though !
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
using gists as the backing store for that seems like an obvious solution
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
(or a separate git repo)
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Basically it’s the same as this: https://marketplace.visualstudio.com/items?itemName=kenhowardpdx.vscode-gist
Extension for Visual Studio Code - Create, open and edit Gists
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
Gists are a good idea. Gonna have to work with GitHub OAuth.
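For what it’s worth, a minimal sketch of the gist-as-backing-store idea using the GitHub REST API; the token, filename, and contents are placeholders, and the OAuth/PAT token needs the "gist" scope:

```sh
# Create a secret gist holding the paste
curl -s -X POST https://api.github.com/gists \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  -d '{
    "description": "paste shared from VS Code",
    "public": false,
    "files": {
      "snippet.py": {"content": "print(\"hello\")"}
    }
  }'
```

The response includes the gist ID and URL, which is all the extension would need to hand to the other user.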
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
the vscode extensions sync plugin does this, I bet there is some good code there to copy
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Synchronize your Visual Studio Code Settings Across Multiple Machines using GitHub GIST
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
And the default settings sync built into VS Code has some more code available: https://github.com/microsoft/vscode/issues/88309
Problem
There are currently some extensions that attempt to provide authentication abilities that can be reused by other extensions. (An example being the Azure Account extension). Now that we’ve begun working on login for settings sync, it’s worth revisiting if authentication should be a first-class concept in VS Code. By exposing an API to contribute an authentication flow
• the core of VSCode can potentially leverage authentication • other extensions can leverage authentication • UI for account management could be centralized
Proposal
I propose introducing a concept of an “AuthenticationProvider”. Such a provider implements methods for logging in and logging out of a specified account, and exposes a list of accounts that are currently available with an event listener for changes to these. This abstracts away refreshing tokens from consumers - the AuthenticationProvider extension can manage refreshing in the background and fire an event when the accessToken has been changed.
```typescript
export interface Account {
  readonly id: string;
  readonly accessToken: string;
  readonly displayName: string;
}

export interface AuthenticationProvider {
  readonly id: string; // perhaps "type"? Would be something like "GitHub", "MSA", etc.
  readonly displayName: string;
  accounts: ReadonlyArray<Account>;
  onDidChangeAccounts: Event<ReadonlyArray<Account>>;
  login(): Promise<Account>;
  logout(accountId: string): Promise<void>;
}

export namespace authentication {
  export function registerAuthenticationProvider(provider: AuthenticationProvider): Disposable;
  export const authenticationProviders: ReadonlyArray<AuthenticationProvider>;
}
```
Consumers would need to know the id of the provider they’re looking for. For example, the settings sync code would look for an “MSA” provider since this is what the setting sync backend currently needs.
Since the authentication provider extension would be activated in each VS Code window, the extension would be responsible for synchronizing state across instances. By default, such extensions would have [“ui”, “workspace”] extensionKind, so that they can store and read credentials on the local machine in both the desktop and web case.
2023-01-29
2023-01-30
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
If you need documentation and lists related to permissions in one place, this might be useful … https://aws.permissions.cloud/
2023-01-31
![John Stilia avatar](https://avatars.slack-edge.com/2021-12-24/2866996401143_5d4663dc8850d28a3d43_72.png)
hi folks,
if I have a usage plan applied on my REST API GW, after the quota is met and I get HTTP 429, do I get charged for any subsequent API calls, or does AWS take care of it? Can’t find anything in the docs.
![Alex Atkinson avatar](https://avatars.slack-edge.com/2022-07-20/3814291485031_7e50a52ae8b830cdc7e2_72.jpg)
I haven’t looked at the west regions for quite a while. Is us-west-1 over capacity lately? us-west-1c is unavailable.
```sh
aws ec2 describe-availability-zones --region us-west-1
```

```json
{
    "AvailabilityZones": [
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1a",
            "ZoneId": "usw1-az3",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        },
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1b",
            "ZoneId": "usw1-az1",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        }
    ]
}
```
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Anyone here have success with AWS SSO account delegation? I heard it was buggy when it was first released and I’m wondering if that is still the case.
![Peter Luknár avatar](https://secure.gravatar.com/avatar/89d1185bf3fde582129cb021283ed34b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0013-72.png)
What kind of bugs were there? I have several projects where I manage SSO (e2e) with Terraform and I had no problems since.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
the only issue I ran into was if I wanted to reassign the delegation to another region. the process support recommended was ugly. support told me they were working on a feature to make that “easy”, but I have since switched orgs and haven’t needed it so not sure where that ended up.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
AWS SSO isn’t a global service? Why did you need to reassign a region?
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Not sure what the bugs were… I just remember @Erik Osterman (Cloud Posse) and others talking about how it wasn’t working the way they wanted it to when it first came out.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
@Peter Luknár were you using delegated management? I.e. https://aws.amazon.com/blogs/security/how-to-delegate-management-of-identity-in-aws-single-sign-on/
![attachment image](https://d2908q01vomqb2.cloudfront.net/22d200f8670dbdb3e253a90eee5098477c95c23d/2021/02/26/Delegate-management-AWS-SSO-ForSocial.jpg)
September 26, 2022: This blog post has been updated to reflect corrections on sample codes. September 12, 2022: This blog post has been updated to reflect the new name of AWS Single Sign-On (SSO) – AWS IAM Identity Center. Read more about the name change here. Note on May 13, 2022: AWS IAM Identity Center […]
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Yes, I can’t remember what the problems were, only that it was at odds with doing it in Terraform. @Ben Smith (Cloud Posse) were you the one working on this?
![Ben Smith (Cloud Posse) avatar](https://avatars.slack-edge.com/2021-08-11/2383898637441_289b6cfcbd0d178c8183_72.png)
I’ve done this a couple of times at this point. Essentially our aws-sso component creates permission sets only; the rest is all still done through the console. The limitation (or at least one of them) was probably no Terraform / API support, and it looks like that’s been resolved: they at least now have account assignment and identity store resources.
The main part is really setting it up with federation and setting up automatic provisioning.
Overall it seems like it’s in a good 1.0.0 state at this point.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Good stuff – Thanks for weighing in @Ben Smith (Cloud Posse) + @Erik Osterman (Cloud Posse)
![Peter Luknár avatar](https://secure.gravatar.com/avatar/89d1185bf3fde582129cb021283ed34b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0013-72.png)
@Matt Gowie Yes, in TF I manage these resources:
• Identity store group (aws_identitystore_group)
• Identity store user (aws_identitystore_user)
• Identity store group membership (aws_identitystore_group_membership)
• Permission set (aws_ssoadmin_permission_set)
• SSO customer managed policy (aws_ssoadmin_customer_managed_policy_attachment)
• SSO account assignment (aws_ssoadmin_account_assignment)
As @Ben Smith (Cloud Posse) mentioned, the only thing that needs to be done manually in the console is federation, but the rest (onboarding/offboarding, managing policies) is automated. The only other thing is to renew the cert from time to time.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Gotcha – Thanks Peter