#aws (2023-01)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2023-01-03

Adrian Rodzik avatar
Adrian Rodzik

Hello everyone, I have a default tag policy applied to my AWS organization. I’ve added some new tags to this policy and reattached it, but it seems they are not applied. For example, I want to add those new tags to my EC2 instances, but the new tags are not available for them. The already-existing tags are in place. Any idea what I’m doing wrong? Thanks in advance!

Alan Kis avatar
Alan Kis

A tag policy won’t do that: it just enforces specific tag compliance; it doesn’t update existing tags. Maybe I misunderstood the question.

2023-01-04

Ashwin Jacob avatar
Ashwin Jacob

Hello Everyone!

I am creating 2 VPCs: Dev and Production. They each have their own CIDR range and are in us-east-1. I am using Tailscale to connect to the private instances. I am trying to figure out Step 3, where I need to add AWS DNS to my tailnet. I got it working in DEV perfectly. As I work on production, I am realizing that there is a conflict in the search domain (on Tailscale): both search domains are us-east-1.compute.internal. How do I separate DEV and PROD even though they are in the same region?

2023-01-05

bradym avatar

I created a new ed25519 key pair in the aws console and I’ve got the .pem file. But I can’t figure out how to get the public key from it. My googling tells me that openssl pkey -in private.pem -pubout should do it, but instead I get Could not read key from private.pem. Anyone know the correct incantation to get the public key?

Warren Parad avatar
Warren Parad

Why do you need the public key?

bradym avatar

Want to rotate ssh keys without spinning up a new instance, but also want it in aws so we can use it when we do spin up new instances.

Warren Parad avatar
Warren Parad
Describe public keys - Amazon Elastic Compute Cloud

You can describe the public keys that are stored in Amazon EC2. You can also retrieve the public key material and identify the public key that was specified at launch.

bradym avatar

Thanks. I looked at describe-key-pairs, but --include-public-key didn’t exist in the version of the AWS CLI I had installed; after updating to the latest version, it works.
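
Editor’s note: the openssl failure here is likely because the console delivers ed25519 private keys in OpenSSH format, which openssl pkey can’t parse. A minimal sketch of both routes that came up in this thread (the key-pair name is a placeholder):

# Derive the public key locally from the downloaded private key
# (ssh-keygen understands the OpenSSH format that openssl rejects):
ssh-keygen -y -f private.pem

# Or retrieve the stored public key material from EC2
# (--include-public-key needs a recent AWS CLI version, as noted above):
aws ec2 describe-key-pairs \
  --key-names my-key \
  --include-public-key \
  --query 'KeyPairs[0].PublicKey' \
  --output text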

2023-01-09

kirupakaran1799 avatar
kirupakaran1799

Hello everyone, I need to write a Lambda which should log in to an instance, execute the service nginx status command, and post the result to Gmail.

Soren Jensen avatar
Soren Jensen

My suggestion will be to put the instance credentials in a secret, write a small Python Lambda function to log in, and execute the command. The results can be put on an SNS topic and sent to your email that way.

1
kirupakaran1799 avatar
kirupakaran1799

Thanks for your suggestion. Since we are using a jumpbox to connect to instances, is it possible to connect via SSM?

Warren Parad avatar
Warren Parad

Huh, why run a Lambda for this? Just use cron on the machine and call SES with the results.

2
kirupakaran1799 avatar
kirupakaran1799

Thanks @Warren Parad

kirupakaran1799 avatar
kirupakaran1799

@Warren Parad basically we are doing automation… we are not supposed to log in to the machine

Warren Parad avatar
Warren Parad

why would you need to log onto the machine?

kirupakaran1799 avatar
kirupakaran1799

We are planning to do the maintenance activity automatically. Before that, we need to check which services are running inside the server and post their status to email; once the activity is completed, we need to check again which services are running on that server and post the status to email.

Warren Parad avatar
Warren Parad

so cron will solve that, right?

Warren Parad avatar
Warren Parad

and you can even be clever about which machine to run the cron on.

1
Darren Cunningham avatar
Darren Cunningham

IMO it’s good to avoid needing to install services/scripts directly onto a host, which is why SSM is better than cron. Cron is easier, but sometimes that’s not the best solution.

1
Evanglist avatar
Evanglist

The best solution is to use Systems Manager Run Command. Get the status in a Lambda and write conditional logic based on the status.

Darren Cunningham avatar
Darren Cunningham

oh and publish the result to SNS — you can then distribute to email or whatever else might need it in the future

1
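
Editor’s note: a rough sketch of that Run Command + SNS flow as CLI calls (a Lambda would make the same calls via boto3 and poll briefly between them). The instance ID and topic ARN are placeholders, and the instance needs the SSM agent plus an instance profile with the AmazonSSMManagedInstanceCore policy:

# Run the check on the instance via SSM Run Command (no jumpbox or SSH needed):
CMD_ID=$(aws ssm send-command \
  --instance-ids i-0123456789abcdef0 \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["service nginx status"]' \
  --query 'Command.CommandId' --output text)

# Fetch the output once the command has finished:
STATUS=$(aws ssm get-command-invocation \
  --command-id "$CMD_ID" \
  --instance-id i-0123456789abcdef0 \
  --query 'StandardOutputContent' --output text)

# Publish the result to an SNS topic that has an email subscription:
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:111111111111:service-status \
  --subject "nginx status" \
  --message "$STATUS"
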
kirupakaran1799 avatar
kirupakaran1799

Is there any possible way to do this automation?

John Stilia avatar
John Stilia

Hi all,

I am working on a personal project. Without writing a long message: I am going to have 2 Lambdas (one for GET and one for POST, each attached to one API GW). I am throttling via a usage plan on the API GW and the use of keys (not for auth, just for the usage plan). The Lambdas will be hitting an RDS.

I am deploying these with SAM

Would you have any advice for me that I should consider?

Warren Parad avatar
Warren Parad

you asking for advice?

• You have two lambdas => Have only one lambda

• You are using APIGW REST API => Use HTTP API instead

• You are using RDS => use DynamoDB instead

• you are using SAM => Use CFN Templates directly

2
David avatar

that’s basically all the advice needed rolled in one! @Warren Parad

John Stilia avatar
John Stilia

I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/other; not sure if this is scalable code-wise

HTTP API, cool, I will check that!

Dynamo, yep, I’ve been told that again, I need to check it. RDS is definitely over-provisioned for this.

Hm… I started with SAM, then I moved it to CFN; it was a lot of code, but more control. Then I moved back to SAM, mainly because I couldn’t be bothered to build my Python code into a zip file, as SAM does it for you (I could also be wrong).

Warren Parad avatar
Warren Parad


I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/other; not sure if this is scalable code-wise
definitely scalable, and faster. If you are using Python, you can use Chalice to do this

1
1
Vladimir avatar
Vladimir

Hi, does the AWS Beanstalk provider create an RDS instance implicitly?

1

2023-01-10

John Stilia avatar
John Stilia

is this channel pretty dead?

Soren Jensen avatar
Soren Jensen

Nope, not dead… there is plenty of life here, as well as in other channels

this2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, as @Soren Jensen said - it’s very much alive

jose.amengual avatar
jose.amengual

Has anyone here implemented/used drift detection for IaC (Terraform-based)? What was the user flow? Did it work well, and if not, why? Was auto-remediation a thing?

2023-01-11

John Stilia avatar
John Stilia

I could use some help here

I have the following Lambda. It sends data to an RDS, so I need it to be in the same VPC and subnet as the RDS (or do I?)

also I need to get a secret from Secrets Manager, so I have attached a policy

when I add the VpcConfig, I can no longer get the secret

Any thoughts? Because I am very confused

  HelloWorldFunctionGET:
    Type: AWS::Serverless::Function # More info about Function Resource: <https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction>
    Properties:
      FunctionName: HelloWorldFunctionGET
      CodeUri: rest-listener/
      Handler: app.lambda_handler
      Runtime: python3.9
      VpcConfig:
        SecurityGroupIds:
          - sg-XXXXX
        SubnetIds:
          - subnet-XXXX
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api 
          Properties:
            Path: /hello
            Method: get
            Auth:
              ApiKeyRequired: true

  IAMP2L87H:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: "IAMP2L87H"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "secretsmanager:GetSecretValue"
            Resource: "arn:aws:secretsmanager:XXXXXXXXXXX"
      Roles:
        - !Ref "HelloWorldFunctionGETRole"

Warren Parad avatar
Warren Parad

If your Lambda is in a VPC, you need to add a VPC endpoint for Secrets Manager to access it from the VPC. For RDS you need RDS Proxy, so no, it doesn’t need to be in the same subnet
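
Editor’s note: for reference, creating the interface endpoint described here looks roughly like this from the CLI (all IDs and the region are placeholders; --private-dns-enabled keeps the default Secrets Manager hostname resolving from inside the VPC):

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.secretsmanager \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled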

Darren Cunningham avatar
Darren Cunningham

you don’t have to use a VPC Endpoint – sure, it’s an ideal security practice, but you can do without it. Be cautious about VPCEs, as they can add up quickly depending on the number of VPCs & subnets

Darren Cunningham avatar
Darren Cunningham

and you don’t have to use RDS Proxy either; again, probably ideal from a security practice, and there are scaling benefits… but more cost

Warren Parad avatar
Warren Parad

It’s VPCE or NAT

Warren Parad avatar
Warren Parad

are you suggesting NAT?

Warren Parad avatar
Warren Parad

that’s way more expensive

Darren Cunningham avatar
Darren Cunningham

you’re assuming a private subnet; I’m assuming OP is using the default VPC with an IGW — so some clarification is required here.

Darren Cunningham avatar
Darren Cunningham

and NAT can be cheaper, all depends on VPC design but that’s another can of worms

John Stilia avatar
John Stilia

I suppose all I am trying to do is to write in RDS

Darren Cunningham avatar
Darren Cunningham

something you’ll learn about IaaS is that there’s rarely one way to solve your riddle and answers vary wildly depending on factors such as: costs & security requirements

1
John Stilia avatar
John Stilia

Also, I’m kind of a noob in some areas, so please bear with me!

Darren Cunningham avatar
Darren Cunningham

all good, just trying to help without boiling the ocean

1
Darren Cunningham avatar
Darren Cunningham

are you doing this while trying to stay within free tier limits?

John Stilia avatar
John Stilia

@Darren Cunningham yes please !! At least at this stage

John Stilia avatar
John Stilia

tbh, networking is not my strong suit

John Stilia avatar
John Stilia

so I am not particularly familiar or experienced with VPCs and subnets

Darren Cunningham avatar
Darren Cunningham

you gotta be careful with AWS then, you can find yourself easily racking up bills you weren’t ready for.

1
John Stilia avatar
John Stilia

so @Warren Parad or others, am I right to believe that I need to add a Secrets Manager VPC endpoint in the same VPC I am referencing in my Lambda?

Darren Cunningham avatar
Darren Cunningham

VPC Endpoints are dedicated private connections that allow you to connect directly to a service endpoint (EC2, SSM, etc… they’re all APIs) without having to leave your VPC. But they cost money per instance.

John Stilia avatar
John Stilia

Also, I can’t find any info on the cost of adding a VPC endpoint

John Stilia avatar
John Stilia

I see @Darren Cunningham

John Stilia avatar
John Stilia

So I managed to get it to work with the VPC endpoint. It kind of makes sense.

regarding the cost: is it like a couple of $/£ per month, or are we talking real £$££$£$?

Darren Cunningham avatar
Darren Cunningham


Pricing per VPC endpoint per AZ ($/hour)
$0.01

John Stilia avatar
John Stilia

literally just saw it

John Stilia avatar
John Stilia

that’s pretty crappy! that would add another $7

Darren Cunningham avatar
Darren Cunningham

TBH, $7 in the AWS world isn’t much, but I get that you’re trying to do this on the free tier to learn, so yeah, it can add up quickly.

John Stilia avatar
John Stilia

is there an alternative for accessing RDS without putting the Lambda in a VPC etc.? Because that way I wouldn’t need a VPCe for the SM

John Stilia avatar
John Stilia


TBH, $7 in the AWS world isn’t much, but I get that you’re trying to do this on the free tier to learn, so yeah, it can add up quickly.
I have a small budget, but ideally I do it at the smallest cost, especially if there is an alternative

John Stilia avatar
John Stilia

I also need to move off RDS to Dynamo, so that might be the alternative

Darren Cunningham avatar
Darren Cunningham

smallest cost would be to go with DynamoDB; you can let your Lambda run without a VPC attachment

1
Darren Cunningham avatar
Darren Cunningham

totally acceptable for the purposes of learning; at scale & when data security is a concern you’d VPC-attach your Lambda and set up a VPCE to DynamoDB, but you can worry about all that later

John Stilia avatar
John Stilia

yep, I have been told that again. I suppose if I drop the $7/month for the VPCe and also the $15-25/month for RDS, altogether it will be free

John Stilia avatar
John Stilia

I didn’t think of it, and now I have written all of my Python code for RDS… luckily it’s only one method I need to change, the DB_Data_input/output

Darren Cunningham avatar
Darren Cunningham

again, be careful with DynamoDB, because if you write a bad query you can blow up your bill

John Stilia avatar
John Stilia

btw, for context, I am making a VS Code extension that will allow you to “DM” a paste to another user instead of copy/pasting to Slack etc.

I do it for fun so I get to learn something, though I feel it is useful!

John Stilia avatar
John Stilia


again, be careful with DynamoDB, because if you write a bad query you can blow up your bill
What do you mean?

Darren Cunningham avatar
Darren Cunningham

it’s not super easy to do, so it’s not a minefield, but if you have a larger data set and you’re querying it all the time inefficiently, it will add up quickly

Darren Cunningham avatar
Darren Cunningham

even more so if you have non-performant writes

John Stilia avatar
John Stilia

I see, I think I should be OK, as I am only putting 3 values and retrieving 2 values based on the ID.

It should be a simple 3xY table (Y being the number of entries)

Warren Parad avatar
Warren Parad

I wasn’t assuming a private subnet. Lambda can’t reach the internet in a public subnet without a NAT, right? Or did something change?

Darren Cunningham avatar
Darren Cunningham

IGW — no cost — public subnets
NAT — costs (can be reduced by using a NAT instance) — private subnets

John Stilia avatar
John Stilia

So, bottom line!

I learned about VPCe today. I am writing code to store/retrieve data from Dynamo, let’s see! (I might be asking more about it, as I am more familiar with SQL DBs.) It also seems way easier and in principle cheaper. Keeping the RDS code, though.

John Stilia avatar
John Stilia

btw, @Darren Cunningham and @Warren Parad, your help has been great, I appreciate it. I hope I can return the favor.

Darren Cunningham avatar
Darren Cunningham
1. can’t answer about RDS since there aren’t any details about the RDS VPC configuration
2. check that your security group & subnet NACL allow HTTPS — if you’re using a VPC Endpoint for SSM, then check that the SG allows inbound from your subnet CIDR
1

2023-01-12

John Stilia avatar
John Stilia

Hi folks,

If I have an account A and an account B

Account A has an R53 zone for example.com. How can I create R53 entries in account B that will be delegated from account A?

(I am confused about where I need to put which nameservers.)

Warren Parad avatar
Warren Parad

what do you mean by create entries? Do you mean create “DNS records”?

Sam avatar

You need to create a new zone in account B (e.g. test.example.com), copy the NS servers from account B, then create an NS record in account A and paste in the copied NS servers from account B.

Warren Parad avatar
Warren Parad

maybe? We are just missing way too many pieces of information to suggest what the right solution is

John Stilia avatar
John Stilia

funny enough, I managed to do it:

create a zone in account B, then use its NS servers to create an NS record in account A for the domain/subdomain of B
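
Editor’s note: in CLI terms, that delegation looks roughly like this (the zone ID, domain, and nameserver values are placeholders; use the four NS values assigned to the new zone in account B):

# In account B: create the subdomain zone and note its assigned nameservers
aws route53 create-hosted-zone \
  --name test.example.com \
  --caller-reference "$(date +%s)"

# In account A: add an NS record for the subdomain pointing at account B's nameservers
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "test.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-11.awsdns-11.org"},
          {"Value": "ns-22.awsdns-22.com"}
        ]
      }
    }]
  }'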

Gabriel avatar
Gabriel

I am using Aurora Serverless v2 with 2 instances (Multi-AZ, writer/reader). Only the writer is effectively used by an application; the reader is pretty much useless atm except for failover.

Yet, even though it’s only doing replication, it is using pretty much the same amount of ACUs as the writer.

I would expect it to use much less, and so save some money when not in use. Anybody using Multi-AZ Aurora Serverless v2? Is this behaviour normal? Is there any way to change it?

John Stilia avatar
John Stilia

Hi Sweetops

I have created some resources from the console, and when I use CF to manage them, of course it complains that those resources already exist. Is there a way to either force the creation or import that state into CF?

1
John Stilia avatar
John Stilia

I suppose this won’t be applicable with SAM?

2023-01-16

Gabriel avatar
Gabriel

I want to do something in a Lambda upon a DB cluster becoming available. I cannot find an event that says “DB cluster available”, but there is one that says “DB cluster created” (RDS-EVENT-0170). Would this event firing also mean that the cluster is available?

Warren Parad avatar
Warren Parad

What does “available” mean to you?

Gabriel avatar
Gabriel

e.g. ready for instances to be added

John Stilia avatar
John Stilia

have you thought of using Step Functions to check, using the AWS SDK (boto3 etc.)?

I am not quite sure what your objective is, however. Mind giving us a bit more of the user story?

Evanglist avatar
Evanglist
Creating a rule that triggers on an Amazon RDS event - Amazon Relational Database Service

Learn how to write rules to send Amazon RDS events to targets such as CloudWatch Events and Amazon EventBridge.

1
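
Editor’s note: a sketch of such a rule from the CLI, based on the linked docs (the rule name and target ARN are placeholders, and the exact detail-type string is worth double-checking against the docs):

aws events put-rule \
  --name rds-cluster-created \
  --event-pattern '{
    "source": ["aws.rds"],
    "detail-type": ["RDS DB Cluster Event"],
    "detail": {"EventID": ["RDS-EVENT-0170"]}
  }'

# Point the rule at the Lambda (the function also needs a resource policy,
# added via lambda add-permission, allowing events.amazonaws.com to invoke it):
aws events put-targets \
  --rule rds-cluster-created \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:111111111111:function:on-cluster-created'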

2023-01-17

Yonatan Koren avatar
Yonatan Koren

Hey all,

What is the best way to do a sweeping destroy, or “nuke”, of a bunch of AWS resources that all consistently have one particular tag? For example Environment: Foo.

Think when you have some resources that were all spun up by Terraform some time ago and all have this consistent tag, but the Terraform config is so foobar’d that you cannot run a terraform destroy.

I thought aws-nuke would be the absolute perfect candidate for this, but when trying to write an aws-nuke config that targets this tag across all resources, I ran into this issue, which shows that you have to know every resource type beforehand and write a filter for that resource (that filters for Environment: Foo).

So my best bet is to write a bash script that iterates over aws-nuke resource-types and spits out a YAML list item with that filter, and then shove that massive config into aws-nuke.

Or maybe someone knows of a different tool that can fulfill this use case?

Comment on #218 Question: Only nuke Resources with certain Tag?

Can I add something similar to this request?

A per-account filter, that marks all resources with a certain tag to exclude.

Basically instead of filtering tagged resources per resource-type like this:

filters:
    LambdaFunction:
    - property: "tag:ManagedResource"
      value: "True"
    S3Bucket:
    - property: "tag:ManagedResource"
      value: "True"
    CloudFormationStack:
    - property: "tag:ManagedResource"
      value: "True"

I want to be able to do this:

filters:
    AllResources:
    - property: "tag:ManagedResource"
      value: "True"

RB avatar

have you looked at cloud-nuke?

RB avatar
gruntwork-io/cloud-nuke

A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it

Yonatan Koren avatar
Yonatan Koren

Yeah, I was aware of it from before, but I think the syntax is very similar: https://github.com/gruntwork-io/cloud-nuke#example

Maybe it’s not really a limitation but rather a need to be very explicit in the config file… and creating a super big config file semi-automatically might still be the best solution…

loren avatar

seems like a nice use case. might be worth contributing the feature to one of the mentioned tools…

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

Yonatan Koren avatar
Yonatan Koren

I ended up doing this (posted in the issue linked above for anyone bumping into it) to build the aws-nuke config:

#!/bin/sh

cat <<-EOT
regions:
  - us-east-1
  - global

account-blocklist:
  - "123456789012" # production

resource-types:
  excludes:
  # The following resources cannot be filtered by tags (aws-nuke error: "does not support custom properties")
  - IAMSAMLProvider
  - ECRRepository
  - ElasticacheSubnetGroup
  - CloudWatchEventsRule
  - SQSQueue
  - ElasticacheCacheCluster
  - ElasticacheReplicationGroup
  - NeptuneSnapshot
  - NeptuneInstance
  - NeptuneCluster
  - LifecycleHook
  - CloudWatchEventsTarget
  - MQBroker
  # The following resources are unavailable due to deprecated APIs or other issues:
  - FMSPolicy
  - MachineLearningMLModel
  - FMSNotificationChannel
  - MachineLearningBranchPrediction
  - MachineLearningEvaluation
  - MachineLearningDataSource

accounts:
  "000000000000": # the account in question
    filters:
EOT
for resource_type in $(aws-nuke resource-types); 
do  
    cat <<-EOT
      $resource_type:
      - property: "tag:Environment"
        value: "foo"
        invert: true
EOT
done

Mind you, because of the inability to filter some resources based on tags, the following resource types would be left alone and not nuked:

  - IAMSAMLProvider
  - ECRRepository
  - ElasticacheSubnetGroup
  - CloudWatchEventsRule
  - SQSQueue
  - ElasticacheCacheCluster
  - ElasticacheReplicationGroup
  - NeptuneSnapshot
  - NeptuneInstance
  - NeptuneCluster
  - LifecycleHook
  - CloudWatchEventsTarget
  - MQBroker

The ones probably of interest to delete (they would accrue costs, or are more likely to be related to an environment you want to delete) are:

  - ECRRepository
  - ElasticacheSubnetGroup
  - ElasticacheCacheCluster
  - ElasticacheReplicationGroup
  - MQBroker
  - SQSQueue

2023-01-18

Shreyank Sharma avatar
Shreyank Sharma

Hello All, we are using AWS Lambdas, and some of them run every 5 minutes and generate a lot of CloudWatch logs. If I go to such a Lambda’s log group and click on any log stream generated within the last 2 days, it takes time but loads all the logs. If I go more than 2 days back, like 3 or 4 days, the CloudWatch logs for that Lambda load for some time and then just show empty. But if I filter for some word like “bill”, then it shows the logs which contain that word. So old logs will not show, but if I put in a filter they will. Has anyone faced this issue? Would it help if I cleared old logs? Right now it’s configured to keep logs forever. Thank you.

tyler avatar

I think leveraging CloudWatch Logs Insights or aggregating the logs to another service would make for faster search times. Streams have always been cumbersome to search in for me.

Shreyank Sharma avatar
Shreyank Sharma

Thanks @tyler I will check on that

Darren Cunningham avatar
Darren Cunningham

sounds like a bug that should be raised to AWS CloudWatch Support

Shreyank Sharma avatar
Shreyank Sharma

thanks @Darren Cunningham, but it works for other Lambdas which do not generate a lot of logs

2023-01-19

vicentemanzano6 avatar
vicentemanzano6

Hi, I have one ECS container running cron jobs inside. Would CodeDeploy wait for all the processes to finish when making a blue/green deployment before deleting the container?

Harry avatar

I believe it’ll send a shutdown signal to the init process, which will cascade to running processes internally and then terminate after a timeout. If you’re running cron jobs that take a long time, consider rewriting them to enqueue jobs on an external queue; that way, if a deploy lands during execution and terminates a worker while it’s running, the next worker will be able to pick the next job up off the queue and finish the batch.

Mike Robinson avatar
Mike Robinson

I’m not sure how CodeDeploy would handle this, but alternatively, use EventBridge scheduled rules to launch individual ECS tasks for each job. That way, if an error occurs in a job that triggers a shutdown (i.e. out-of-memory), it won’t affect any other running job.

2023-01-24

John Stilia avatar
John Stilia

hi all

I have a Lambda + API GW that does some DB work. The API GW resolves on api.domain.com/get-data or api.domain.com/put-data.

I would like to have the notion of API versioning, for example api.domain.com/v1/get-data etc.

Any ideas how I can do that?

Sono Chhibber avatar
Sono Chhibber

Short of it: you use stages.

V1 or V2 is just a label; that is, you can have “dev”, “prod”, “v1”, “v10”, etc.

https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-stages.html

John Stilia avatar
John Stilia

When I use a custom domain, however, I have to map the stage to this domain, and then the stage path disappears

Unless I am getting it wrong :/

tsoe77 avatar

you can just put v1 in the path field and access the custom domain with the v1-appended path <https://api.domain.com/v1/xxx>, where xxx is the actual resource of the API that you are mapping to, considering you already have your API resources created as /get-data and /put-data, etc.

1
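
Editor’s note: for a REST API custom domain, that path-field mapping can also be done from the CLI (domain, API ID, and stage name are placeholders):

aws apigateway create-base-path-mapping \
  --domain-name api.domain.com \
  --base-path v1 \
  --rest-api-id a1b2c3d4e5 \
  --stage prod
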
tsoe77 avatar

if you want to follow API best practices, use a noun like /data for the URL, and use the HTTP methods GET, PUT, etc. on the resource. https://swagger.io/resources/articles/best-practices-in-api-design/

1
John Stilia avatar
John Stilia

I suppose I will need a different stage per path as well, which would probably trigger a different Lambda?

tsoe77 avatar

yeah, you can have another custom domain for dev and map it to the dev stage, e.g. [api.dev.example.com/v1](http://api.dev.example.com/v1) to the dev stage and [api.example.com/v1](http://api.example.com/v1) to the prod stage

tsoe77 avatar

if the domain is too much, you can do path-based mapping, like [api.example.com/dev/v1](http://api.example.com/dev/v1) to the dev stage and [api.example.com/prod/v1](http://api.example.com/prod/v1) to the prod stage.

2023-01-26

Martin Helfert avatar
Martin Helfert

Hey. Would you recommend paid AWS support plans? We currently have a Business support plan for our prod account which, in fact, we didn’t use for the last two years. Leave it like that just in case, or switch it on/off whenever we need extended support? How do you handle this? The costs add up to a fairly huge amount.

Darren Cunningham avatar
Darren Cunningham

I recommend them, but my team uses it fairly frequently. While it might not always net results, we use support as a sounding board to validate our implementation or if we’re about to start work on a new feature we’ll throw out a support request for input as to how we should solve for x.

Darren Cunningham avatar
Darren Cunningham

but if you have an established environment that doesn’t change very often and your business wants to take on the risk of a longer response time during events to save money, it’s not inherently wrong.

Soren Jensen avatar
Soren Jensen

We’ve got a setup with ~20 accounts. I’ve got Business support enabled in 6 of them, planning to cut it down to 4. If I need help with an issue, I either replicate it in an account with support enabled, or I enable it in the affected account. They don’t seem to give any less help if you have just enabled the support. So turn it off, save the money, and enable it when needed.

1
Darren Cunningham avatar
Darren Cunningham

oh yeah, our org is ~50 accounts and we only have it enabled in 2 accounts. Good call on enabling when needed.

1
Martin Helfert avatar
Martin Helfert

sorry for the late reply. Thanks for your answers, that already makes the decision much clearer

2023-01-27

fotag avatar

Hello :wave: !

Some questions related to an AWS Aurora (Postgres-compatible) Serverless v2 cluster with Multi-AZ (created via Terraform with count=2 on aws_rds_cluster_instance; no Multi-AZ config is available via TF for Serverless v2), in case anyone knows:

• Are all instances created there (writer and readers) in use via the cluster endpoint, or are the readers on standby until they are needed?

• Do we have to specify allocated storage and storage type?

Steven Miller avatar
Steven Miller

Is anyone using multi-architecture EKS + self-hosted GitHub runners for multi-architecture builds? The point would be to allow something like this:

runs-on: ubuntu-20.04-arm64-large

in GitHub workflows. Is that overly complicated, or even a valid concept? Is there some easier option we haven’t considered for self-hosted multi-architecture builds? Or maybe we just go with GitHub-hosted runners. Is it already built in, like Karpenter can deploy ARM nodes noting the target architecture of the pods, or something like that?

Soren Jensen avatar
Soren Jensen

Anyone know if you can subscribe to when new AMIs are released by AWS? AWS Inspector is identifying an issue in the latest image; it would be nice to know when to redeploy.

1
maarten avatar
maarten
Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store | Amazon Web Servicesattachment image

This post is courtesy of Arend Castelein, Software Development Engineer – AWS Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has […]
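
Editor’s note: from the linked post, checking the current AMI is a single query against the public parameter path, which you could run on a schedule and compare against the AMI you last deployed (the region and parameter name are the ones to adjust for your case):

aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --region us-east-1 \
  --query 'Parameters[0].Value' \
  --output text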

Soren Jensen avatar
Soren Jensen

Nice one! Thanks a million

2023-01-28

John Stilia avatar
John Stilia

Hi all,

I have a standard API->Lambda->Dynamo app.

Currently I have the API on an aggressive throttle and quota to minimise usage and avoid DoW (denial of wallet). Would you have any other advice on how to go about it without using expensive resources like WAF? (It’s only an open-source VS Code extension I am building.)

Thanks in advance

Warren Parad avatar
Warren Parad

WAF is $8/month that’s pretty cheap

John Stilia avatar
John Stilia

Hm. Might check that.

John Stilia avatar
John Stilia

It’s also a free VSCode extension. So I try to keep cost low.

Warren Parad avatar
Warren Parad

Can I recommend storing the user data in their own GitHub or GitLab account, then?

Warren Parad avatar
Warren Parad

then you don’t need to worry about the DDB cost either

Warren Parad avatar
Warren Parad

dump the WAF

Warren Parad avatar
Warren Parad

and use APIGW HTTP API => Lambda

John Stilia avatar
John Stilia

hm… the whole idea is to have something like Pastebin, but in VS Code, and way easier

John Stilia avatar
John Stilia

open to thoughts though !

Warren Parad avatar
Warren Parad

using gists as the backing store for that seems like an obvious solution

Warren Parad avatar
Warren Parad

(or a separate git repo)

Warren Parad avatar
Warren Parad
Gist - Visual Studio Marketplace

Extension for Visual Studio Code - Create, open and edit Gists

1
John Stilia avatar
John Stilia

Gists are a good idea. Gonna have to work with GitHub OAuth.

Warren Parad avatar
Warren Parad

the vscode extensions sync plugin does this, I bet there is some good code there to copy

1
Warren Parad avatar
Warren Parad
shanalikhan/code-settings-sync

Synchronize your Visual Studio Code Settings Across Multiple Machines using GitHub GIST

1
Warren Parad avatar
Warren Parad

And the default settings sync built into VS Code has some more code available: https://github.com/microsoft/vscode/issues/88309

#88309 Authentication Provider API

Problem

There are currently some extensions that attempt to provide authentication abilities that can be reused by other extensions. (An example being the Azure Account extension). Now that we’ve begun working on login for settings sync, it’s worth revisiting if authentication should be a first-class concept in VS Code. By exposing an API to contribute an authentication flow

• the core of VSCode can potentially leverage authentication • other extensions can leverage authentication • UI for account management could be centralized

Proposal

I propose introducing a concept of an “AuthenticationProvider”. Such a provider implements methods for logging in and logging out of a specified account, and exposes a list of accounts that are currently available with an event listener for changes to these. This abstracts away refreshing tokens from consumers - the AuthenticationProvider extension can manage refreshing in the background and fire an event when the accessToken has been changed.

export interface Account {
	readonly id: string;
	readonly accessToken: string;
	readonly displayName: string;
}

export interface AuthenticationProvider {
	readonly id: string; // perhaps "type"? Would be something like "GitHub", "MSA", etc.
	readonly displayName: string;

	accounts: ReadonlyArray<Account>;
	onDidChangeAccounts: Event<ReadonlyArray<Account>>;

	login(): Promise<Account>;
	logout(accountId: string): Promise<void>;
}

export namespace authentication {
	export function registerAuthenticationProvider(provider: AuthenticationProvider): Disposable;
	export const authenticationProviders: ReadonlyArray<AuthenticationProvider>;
}

Consumers would need to know the id of the provider they’re looking for. For example, the settings sync code would look for an “MSA” provider since this is what the setting sync backend currently needs.

Since the authentication provider extension would be activated in each VS Code window, the extension would be responsible for synchronizing state across instances. By default, such extensions would have [“ui”, “workspace”] extensionKind, so that they can store and read credentials on the local machine in both the desktop and web case.

1

2023-01-30

Gabriel avatar
Gabriel

If you need documentation and lists related to permissions in one place, this might be useful … https://aws.permissions.cloud/

4

2023-01-31

John Stilia avatar
John Stilia

hi folks

if I have applied a usage plan on my REST API GW, then after the quota is met and I get HTTP 429, do I get charged for any subsequent API calls, or does AWS take care of it? Can’t find anything in the docs.

Alex Atkinson avatar
Alex Atkinson

I haven’t looked at the west regions for quite a while. Is us-west-1 over capacity lately? us-west-1c is unavailable.

aws ec2 describe-availability-zones --region us-west-1
{
    "AvailabilityZones": [
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1a",
            "ZoneId": "usw1-az3",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        },
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1b",
            "ZoneId": "usw1-az1",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        }
    ]
}

Matt Gowie avatar
Matt Gowie

Anyone here have success with AWS SSO account delegation? I heard it was buggy when it was first released and I’m wondering if that is still the case.

3
Peter Luknár avatar
Peter Luknár

What kind of bugs were there? I have several projects where I manage SSO (e2e) with Terraform, and I have had no problems.

Darren Cunningham avatar
Darren Cunningham

the only issue I ran into was when I wanted to reassign the delegation to another region; the process support recommended was ugly. Support told me they were working on a feature to make that “easy”, but I have since switched orgs and haven’t needed it, so I’m not sure where that ended up.

Matt Gowie avatar
Matt Gowie

AWS SSO isn’t a global service? Why did you need to reassign a region?

Matt Gowie avatar
Matt Gowie

Not sure what the bugs were… I just remember @Erik Osterman (Cloud Posse) and others talking about how it wasn’t working the way they wanted it to when it first came out.

Matt Gowie avatar
Matt Gowie
How to delegate management of identity in AWS Single Sign-On | Amazon Web Servicesattachment image

September 26, 2022: This blog post has been updated to reflect corrections on sample codes. September 12, 2022: This blog post has been updated to reflect the new name of AWS Single Sign-On (SSO) – AWS IAM Identity Center. Read more about the name change here. Note on May 13, 2022: AWS IAM Identity Center […]

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, I can’t remember what the problems were, only that it was at odds with doing it in Terraform. @Ben Smith (Cloud Posse) were you the one working on this?

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

I’ve done this a couple of times at this point. Essentially, our aws-sso component creates permission sets only; the rest is all still done through the console. The limitation (or at least one of them) was probably no Terraform / API support, and it looks like that’s been resolved.

They at least now have account assignment and identitystore.

The main part is really setting it up with federation and setting up automatic provisioning.

Overall it seems like it’s in a good 1.0.0 state at this point.

Matt Gowie avatar
Matt Gowie

Good stuff – Thanks for weighing in @Ben Smith (Cloud Posse) + @Erik Osterman (Cloud Posse)

Peter Luknár avatar
Peter Luknár

@Matt Gowie Yes, in TF I manage these resources:
• Identity store group (aws_identitystore_group)
• Identity store user (aws_identitystore_user)
• Identity store group membership (aws_identitystore_group_membership)
• Permission set (aws_ssoadmin_permission_set)
• SSO customer managed policy (aws_ssoadmin_customer_managed_policy_attachment)
• SSO account assignment (aws_ssoadmin_account_assignment)
As @Ben Smith (Cloud Posse) mentioned, the only thing that needs to be done manually in the console is federation; the rest (onboarding/offboarding, managing policies) is automated. The only other thing is to renew the cert from time to time.

2
Matt Gowie avatar
Matt Gowie

Gotcha – Thanks Peter
