#aws (2023-01)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2023-01-01
2023-01-03
Hello everyone, I have a default tag policy applied to my AWS organisation. I’ve added some new tags to this policy and reattached it, but it seems they are not being applied. For example, I want to add those new tags to my EC2 instances, but the new tags are not available for them. The already existing tags are in place. Any idea what I’m doing wrong? Thanks in advance!
A tag policy only enforces compliance for specific tags; it doesn’t update existing tags. Maybe I misunderstood the question
2023-01-04
Hello Everyone!
I am creating 2 VPCs: Dev and Production. They each have their own CIDR range and are in us-east-1. I am using Tailscale to connect to the private instances. I am trying to figure out how to work on Step 3, where I need to add AWS DNS to my tailnet. I got it working in DEV perfectly. As I work in production, I am realizing that there is a conflict in the search domain (on Tailscale): both search domains are us-east-1.compute.internal. How do I separate DEV and PROD even though they are in the same region?
2023-01-05
I created a new ed25519 key pair in the AWS console and I’ve got the .pem file. But I can’t figure out how to get the public key from it. My googling tells me that openssl pkey -in private.pem -pubout should do it, but instead I get Could not read key from private.pem. Anyone know the correct incantation to get the public key?
Why do you need the public key?
Want to rotate ssh keys without spinning up a new instance, but also want it in aws so we can use it when we do spin up new instances.
You can describe the public keys that are stored in Amazon EC2. You can also retrieve the public key material and identify the public key that was specified at launch.
Thanks. I looked at describe-key-pairs, but the --include-public-key flag didn’t exist in the version of the AWS CLI I had installed. Updated to the latest version and it works.
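For anyone searching later, a minimal boto3 sketch of the same thing (the key name is a placeholder; IncludePublicKey needs a recent botocore, same as the newer CLI flag):

import boto3

ec2 = boto3.client("ec2")

# Ask EC2 to return the stored public key material alongside the key pair metadata
resp = ec2.describe_key_pairs(KeyNames=["my-ed25519-key"], IncludePublicKey=True)
print(resp["KeyPairs"][0]["PublicKey"])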
2023-01-09
Hello everyone, I need to write a Lambda which should log in to the instance, execute the command (service nginx status), print the result, and post the result to Gmail
My suggestion would be to put the instance credentials in a secret and write a small Python Lambda function to log in and execute the command. The results can be put on an SNS topic and sent to your email that way.
Thanks for your suggestion. Since we are using a jumpbox to connect to instances, is it possible to connect via SSM?
Huh, why run a Lambda for this? Just use cron on the machine and call SES with the results
Thanks @Warren Parad
@Warren Parad basically we are doing automation … we are not supposed to log in to the machine
why would you need to log onto the machine?
We are planning to do the maintenance activity automatically. Before that we need to check which services are running inside the server and post the service status to email, and once the activity is completed we need to check again which services are running on that server and post the status to email
so cron will solve that, right?
IMO it’s good to avoid needing to install services/scripts directly onto a host, so this is why SSM is better than cron. cron is easier, but sometimes that’s not the best solution.
The best possible solution is using Systems Manager Run Command. Get the status in a Lambda and conditionally write logic based on the status.
oh and publish the result to SNS — you can then distribute to email or whatever else might need it in the future
Is there any possible way to do this automation ??
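Something along those lines is possible with SSM Run Command plus SNS — a rough sketch, assuming the instance has the SSM agent and an instance profile that allows it (the instance ID, topic ARN and command are placeholders):

import time
import boto3

ssm = boto3.client("ssm")
sns = boto3.client("sns")

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

def lambda_handler(event, context):
    # Run the status check on the instance without SSH'ing in
    cmd = ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["service nginx status"]},
    )
    command_id = cmd["Command"]["CommandId"]

    # Poll for the result (Step Functions or a waiter would be nicer for long commands)
    for _ in range(30):
        time.sleep(2)
        result = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
        if result["Status"] not in ("Pending", "InProgress"):
            break

    # Publish the output; subscribe your email address to this topic
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111111111111:service-status",  # placeholder
        Subject="nginx status",
        Message=result.get("StandardOutputContent", ""),
    )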
Hi all,
I am working on a personal project. Without writing a long message: I am going to have 2 Lambdas (one for GET and one for POST, each attached to its own API GW). I am throttling via a usage plan on the API GW and the use of keys (not for auth, just for the usage plan). The Lambdas will be hitting an RDS instance.
I am deploying these with SAM
Would you have any advice for me that I should consider?
you asking for advice?
• You have two lambdas => Have only one lambda
• You are using APIGW REST API => Use HTTP API instead
• You are using RDS => use DynamoDB instead
• you are using SAM => Use CFN Templates directly
that’s basically all the advice needed rolled in one! @Warren Parad
I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/OTHER; not sure if this is scalable code-wise
HTTP API cool, I will check that !
Dynamo, yep, I’ve been told that again, I need to check it out. RDS is definitely over-provisioned for that
Hm… I started with SAM, then moved it to CF; it was a lot of code, but more control. Then I moved back to SAM, mainly because I couldn’t be bothered to build my Python code into a zip file, as SAM does it for you (I could also be wrong)
I have two Lambdas because otherwise I need to add logic in the one to handle GET/POST/OTHER; not sure if this is scalable code-wise
definitely scalable and faster. If you are using python, you can use Chalice to do this
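For example, a minimal Chalice sketch where a single app handles both verbs on one resource (the app name and handler bodies are placeholders):

from chalice import Chalice

app = Chalice(app_name="paste-api")  # hypothetical app name

@app.route("/data", methods=["GET"])
def get_data():
    # placeholder: look the item up in your datastore by id
    params = app.current_request.query_params or {}
    return {"id": params.get("id")}

@app.route("/data", methods=["POST"])
def post_data():
    body = app.current_request.json_body or {}
    # placeholder: write the body to your datastore
    return {"stored": True, "keys": list(body)}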
2023-01-10
is this channel pretty dead?
Yes, as @Soren Jensen said - it’s very much alive
Has anyone here implemented/used drift detection for IaC (Terraform-based)? What was the user flow, did it work well, and if not, why? Was auto-remediation a thing?
2023-01-11
I could use some help here
I have the following Lambda. It sends data to an RDS instance, so I need it to be in the same VPC and subnet as the RDS (or does it?)
I also need to get a secret from Secrets Manager, so I have attached a policy
when I add the VpcConfig, I can no longer get the secret
Any thoughts? Cause I am very confused
HelloWorldFunctionGET:
  Type: AWS::Serverless::Function # More info about Function Resource: <https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction>
  Properties:
    FunctionName: HelloWorldFunctionGET
    CodeUri: rest-listener/
    Handler: app.lambda_handler
    Runtime: python3.9
    VpcConfig:
      SecurityGroupIds:
        - sg-XXXXX
      SubnetIds:
        - subnet-XXXX
    Architectures:
      - x86_64
    Events:
      HelloWorld:
        Type: Api
        Properties:
          Path: /hello
          Method: get
          Auth:
            ApiKeyRequired: true
IAMP2L87H:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: "IAMP2L87H"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Action:
            - "secretsmanager:GetSecretValue"
          Resource: "arn:aws:secretsmanager:XXXXXXXXXXX"
    Roles:
      - !Ref "HelloWorldFunctionGETRole"
If your lambda is in a VPC you need to add a VPC endpoint for the Secrets Manager to access it in the VPC. For RDS, you need RDS proxy, so no, it doesn’t need to be in the same subnet
you don’t have to use a VPC Endpoint – sure, it’s an ideal security practice, but you can do without it. Be cautious about VPCEs as the cost can add up quickly depending on the number of VPCs & subnets
and you don’t have to use RDS Proxy either; again, it’s probably ideal from a security standpoint and there are scaling benefits… but more
It’s VPCE or NAT
are you suggesting NAT?
that’s way more expensive
you’re assuming a private subnet, I’m assuming OP is using the default VPC with an IGW — so some clarification is required here.
and NAT can be cheaper, all depends on VPC design but that’s another can of worms
I suppose all I am trying to do is to write in RDS
something you’ll learn about IaaS is that there’s rarely one way to solve your riddle and answers vary wildly depending on factors such as: costs & security requirements
Also, kind of noob in some areas so please bear with me !
are you doing this while trying to stay within free tier limits?
@Darren Cunningham yes please !! At least at this stage
tbh, networking is not my strong suit
so I am not particularly familiar or experienced with VPCs and subnets
you gotta be careful with AWS then, you can find yourself easily racking up bills you weren’t ready for.
so @Warren Parad or others, am I right to believe that I need to add the Secrets Manager VPC endpoint to the same VPC that I am referencing in my Lambda?
VPC Endpoints are dedicated private connections that allow you to directly connect to a service (EC2, SSM, etc…they’re all APIs) endpoint without having to leave your VPC. but they cost money per instance.
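For reference, a rough boto3 sketch of creating an interface endpoint for Secrets Manager (region, VPC, subnet and SG IDs are placeholders); the same resource can also be declared in the SAM/CFN template as AWS::EC2::VPCEndpoint:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so a VPC-attached Lambda can reach Secrets Manager privately
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                          # placeholder
    ServiceName="com.amazonaws.us-east-1.secretsmanager",
    SubnetIds=["subnet-0123456789abcdef0"],                 # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],              # placeholder; must allow 443 from the Lambda
    PrivateDnsEnabled=True,                                 # keeps the default secretsmanager hostname working
)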
Also, I cant find any info on the cost of adding a VPC endpoint
I see @Darren Cunningham
So I managed to get it to work with the VPC endpoint. It kind of makes sense.
regarding the cost. is it like couple of $/£ per month or we talking real £$££$£$
Pricing per VPC endpoint per AZ ($/hour): $0.01
literally just seen it
that’s pretty crappy! that would add another $7
TBH, $7 in AWS world isn’t much. but I get that your’e trying to do this free tier to learn so yeah it can add up quick.
is there an alternative to access RDS without adding the Lambda to a VPC etc.? Cause that way I wouldn’t need a VPCe for the SM
TBH, $7 in AWS world isn’t much. but I get that your’e trying to do this free tier to learn so yeah it can add up quick.
I have a small budget, but ideally I get to do it at the smallest cost.
especially if there is an alternative
I also need to move off RDS to Dynamo, so that might be the alternative
smallest cost would be to go with DynamoDB, you can let your Lambda run without a VPC attachment
totally acceptable for the purposes of learning, at scale & when data security is a concern you’d VPC attach your lambda and setup a VPCE to DynamoDB, but you can worry about all that later
yep, I have been told that again. I suppose I am dropping the $7/month for the VPCe and also the $15-25/month for RDS; all together it will be free
I didn’t think of it and now I have written all of my py code for RDS… luckily it’s only one method I need to change, the DB data input/output
again, be careful with DynamoDB because if you write a bad query you can blow up your bill
btw, for context, I am making a VSCode extension that will allow you to “DM” a paste to another user instead of copying and pasting to Slack etc
I do it for fun so I get to learn something, though I feel it is useful !
again, be careful with DynamoDB because if you write a bad query you can blow up your bill
What do you mean?
it’s not super easy to do, so it’s not a minefield, but if you have a larger data set and you’re querying it all the time inefficiently it will add up quickly
more so if you have non-performant writes
I see, I think I should be OK as I am only putting 3 values and retrieving 2 values based on the ID
It should be a simple table 3xY (Y being the number of entries )
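For that access pattern a plain key lookup stays cheap — a minimal boto3 sketch (the table name and attribute names here are made up):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("pastes")  # placeholder table with partition key "id"

# Store the three values
table.put_item(Item={"id": "abc123", "recipient": "user42", "content": "hello"})

# Retrieve by ID — a GetItem on the partition key, no scan/query cost surprises
item = table.get_item(Key={"id": "abc123"}).get("Item", {})
print(item.get("recipient"), item.get("content"))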
I wasn’t assuming a private subnet. Lambda can’t reach the internet in a public subnet without a NAT, right? Or did something change?
IGW — no costs — public subnets
NAT — costs (can be reduced by using a NAT instance) — private subnets
So bottom line !
I learned about VPCe today. I am writing code to store/retrieve data from Dynamo, let’s see! (I might be asking more about it as I am more familiar with SQL DBs.) It also seems way easier and in principle cheaper. Keeping the RDS code though
btw, @Darren Cunningham and @Warren Parad your help has been great, I appreciate it, I hope I can return the favour
- can’t answer about RDS since there aren’t any details about RDS VPC configuration
- check that your security group & subnet NACL allow for HTTPS — if you’re using a VPC Endpoint for SSM then check that the SG allows for inbound from your subnet CIDR
2023-01-12
Hi folks,
If I have an account A and an account B,
account A has an R53 zone for example.com, how can I create R53 entries in account B that will be forwarded to account A?
(I am confused about where I need to put which nameservers)
what do you mean by create entries, do you mean create “DNS records”?
You need to create a new zone in account B, test.example.com, copy the NS servers from account B, then create an NS record in account A and paste in the copied NS servers from account B.
maybe? We are just missing way too many pieces of information to suggest what the right solution is
funny enough I managed to do it
create a zone in account B, then use its NS servers to create an NS record in account A for the domain/subdomain of B
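For anyone finding this later, a boto3 sketch of that delegation (the zone name, IDs and caller reference are placeholders; each client needs credentials for its own account):

import boto3

# Account B: create the subdomain zone and note its name servers
r53_b = boto3.client("route53")  # credentials for account B
zone = r53_b.create_hosted_zone(Name="test.example.com", CallerReference="delegate-001")
name_servers = zone["DelegationSet"]["NameServers"]

# Account A: add an NS record for the subdomain in the example.com zone
r53_a = boto3.client("route53")  # credentials for account A
r53_a.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDERA",  # placeholder: example.com zone ID in account A
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "test.example.com",
                "Type": "NS",
                "TTL": 300,
                "ResourceRecords": [{"Value": ns} for ns in name_servers],
            },
        }]
    },
)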
I am using aurora serverless v2 with 2 instances (multi az, writer/reader). Only the writer is effectively used by an application. The reader is pretty much useless atm except for failover.
Yet, even though it’s only doing replication, it is using pretty much the same amount of ACUs as the writer.
I would be expecting it to use much less and so save some money if not used. Anybody using multi-az aurora serverless v2? Is this behaviour normal? Is there any way to change it ?
Hi Sweetops
I have created some resources from the console, and when I use CF to manage them it of course complains that those resources already exist. Is there a way to either force the creation or import that state into CF?
• https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
• https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
Bring existing resources into a new or existing stack to manage them using AWS CloudFormation.
Import an existing AWS resource into a stack to bring it into CloudFormation.
I suppose this won’t be applicable with SAM?
2023-01-14
2023-01-16
I want to do something in a Lambda upon a DB cluster becoming available. I cannot find an event that says “DB cluster available” but there is one that says “DB cluster created” - RDS-EVENT-0170. Would this event firing also mean that the cluster is available?
What does “available” mean to you?
e.g. ready for instances to be added
have you thought of using Step Functions to check by using the AWS SDK (boto3 etc.)?
I am not quite sure what your objective is; mind giving us a bit more of the user story?
I think this might help : https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-cloud-watch-events.html
Learn how to write rules to send Amazon RDS events to targets such as CloudWatch Events and Amazon EventBridge.
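A hedged sketch of an EventBridge rule matching that event and invoking a Lambda (the ARNs are placeholders; double-check the detail-type and EventID against the doc above, and remember the Lambda also needs a resource policy allowing events.amazonaws.com to invoke it):

import boto3

events = boto3.client("events")

# Rule that fires when RDS emits the "DB cluster created" event
events.put_rule(
    Name="rds-cluster-created",
    EventPattern="""{
      "source": ["aws.rds"],
      "detail-type": ["RDS DB Cluster Event"],
      "detail": {"EventID": ["RDS-EVENT-0170"]}
    }""",
)

events.put_targets(
    Rule="rds-cluster-created",
    Targets=[{
        "Id": "on-cluster-created",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:on-cluster-created",  # placeholder
    }],
)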
2023-01-17
Hey all,
What is the best way to do a sweeping destroy, or “nuke”, of a bunch of AWS resources that all consistently have one tag in particular? For example Environment: Foo.
Think of when you have some resources that were all spun up by Terraform some time ago and all have this consistent tag, but the Terraform config is so foobar’d that you cannot run a terraform destroy.
I thought aws-nuke would be the absolute perfect candidate for this, but when trying to write an aws-nuke config that targets this tag across all resources, I ran into this issue, which shows that you have to know every resource type beforehand and write a filter for that resource (one that filters for Environment: Foo).
So my best bet is to write a bash script that iterates over aws-nuke resource-types and spits out a YAML list item with that filter, and then shove that massive config into aws-nuke.
Or maybe someone knows of a different tool that can fulfill this use case?
Can I add something similar to this request?
A per-account filter, that marks all resources with a certain tag to exclude.
Basically instead of filtering tagged resources per resource-type like this:
filters:
  LambdaFunction:
    - property: "tag:ManagedResource"
      value: "True"
  S3Bucket:
    - property: "tag:ManagedResource"
      value: "True"
  CloudFormationStack:
    - property: "tag:ManagedResource"
      value: "True"
I want to be able to do this:
filters:
  AllResources:
    - property: "tag:ManagedResource"
      value: "True"
have you looked at cloudnuke ?
A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it
Yeah i was aware of it from before, but I think the syntax is very similar https://github.com/gruntwork-io/cloud-nuke#example
Maybe it’s not really a limitation but rather a need to be very explicit in the config file… and creating a super big config file semi-automatically might be the best solution still…
seems like a nice use case. might be worth contributing the feature to one of the mentioned tools…
@matt
I ended up doing this (posted it in the issue linked above for anyone bumping into it to see) to build the aws-nuke config:
#!/bin/sh
cat <<-EOT
regions:
  - us-east-1
  - global

account-blocklist:
  - "123456789012" # production

resource-types:
  excludes:
    # The following resources cannot be filtered by tags (aws-nuke error: "does not support custom properties")
    - IAMSAMLProvider
    - ECRRepository
    - ElasticacheSubnetGroup
    - CloudWatchEventsRule
    - SQSQueue
    - ElasticacheCacheCluster
    - ElasticacheReplicationGroup
    - NetpuneSnapshot
    - NeptuneInstance
    - NeptuneCluster
    - LifecycleHook
    - CloudWatchEventsTarget
    - MQBroker
    # The following resources are unavailable due to deprecated APIs or other issues:
    - FMSPolicy
    - MachineLearningMLModel
    - FMSNotificationChannel
    - MachineLearningBranchPrediction
    - MachineLearningEvaluation
    - MachineLearningDataSource

accounts:
  "000000000000": # the account in question
    filters:
EOT

for resource_type in $(aws-nuke resource-types); do
  cat <<-EOT
      $resource_type:
        - property: "tag:Environment"
          value: "foo"
          invert: true
EOT
done
Mind you because of the inability to filter some resources based on tags, the following resource types would be left alone and not nuked:
- IAMSAMLProvider
- ECRRepository
- ElasticacheSubnetGroup
- CloudWatchEventsRule
- SQSQueue
- ElasticacheCacheCluster
- ElasticacheReplicationGroup
- NetpuneSnapshot
- NeptuneInstance
- NeptuneCluster
- LifecycleHook
- CloudWatchEventsTarget
- MQBroker
Probably, the ones of interest to delete (would probably accrue costs or are more likely to be related to an environment you want to delete) are:
- ECRRepository
- ElasticacheSubnetGroup
- ElasticacheCacheCluster
- ElasticacheReplicationGroup
- MQBroker
- SQSQueue
2023-01-18
Hello All, we are using AWS Lambdas and some of them run every 5 minutes and generate a lot of CloudWatch logs. If I go to such a Lambda’s log group and click on any log stream generated in the last 2 days, it takes time but it loads all the logs. If I go more than 2 or 3 days back, it loads for some time and then just shows empty. But if I filter for some word like “bill”, it does show the logs containing that word — so old logs will not show unless I apply a filter. Has anyone faced this issue? Will it work if I clear old logs? Right now it’s configured to keep logs forever. Thank you
I think leveraging CloudWatch Logs Insights or aggregating the logs to another service would make for faster search times. Streams have always been cumbersome to search in for me.
Thanks @tyler I will check on that
sounds like a bug that should be raised to AWS CloudWatch Support
thanks @Darren Cunningham, but it works for other Lambdas which do not generate a lot of logs
2023-01-19
Hi, I have one ECS container running cron jobs inside. Would CodeDeploy wait for all the processes to finish when making a blue/green deployment before deleting the container?
I believe it’ll send a shutdown signal to the init process, which will cascade to the running processes internally and then terminate after a timeout. If you’re running cron jobs that take a long time, consider rewriting them to enqueue jobs on an external queue, so if a deploy lands during execution and terminates a worker while it’s running, the next worker will be able to pick the next job up off the queue and finish the batch.
I’m not sure how Codedeploy would handle this, but alternatively, use Eventbridge scheduled rules to launch individual ECS tasks for each job. This makes it so that if an error occurs in a job that triggers a shutdown (ie. out-of-memory), it won’t affect any other running job.
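A rough boto3 sketch of one such scheduled rule launching a one-off Fargate task (the cluster, role, task definition and subnet IDs are placeholders):

import boto3

events = boto3.client("events")

# Nightly schedule instead of cron inside the container
events.put_rule(Name="nightly-report", ScheduleExpression="cron(0 2 * * ? *)")

events.put_targets(
    Rule="nightly-report",
    Targets=[{
        "Id": "run-report-task",
        "Arn": "arn:aws:ecs:us-east-1:111111111111:cluster/jobs",           # placeholder cluster ARN
        "RoleArn": "arn:aws:iam::111111111111:role/ecsEventsRole",          # role allowed to call ecs:RunTask
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111111111111:task-definition/report:1",  # placeholder
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]}  # placeholder
            },
        },
    }],
)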
2023-01-20
2023-01-24
hi all
I have a lambda + api_gw that does some DB work; the api_gw resolves on api.domain.com/get-data or api.domain.com/put-data
I would like to have the notion of API versioning, for example api.domain.com/v1/get-data etc.
Any ideas how I can do that?
Short of it, you use stages.
V1 or V2 is just a label, that is, you can have “dev”, “prod”, “v1”, “v10”, etc.
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-stages.html
Learn about HTTP API stages.
When I use a custom domain however, I have to map the stage to this domain, and then the stage path disappears
Unless I am getting it wrong :/
you can just put v1 in the path field and access the custom domain with the v1-appended path <https://api.domain.com/v1/xxx>, where xxx is the actual resource of the API that you are mapping to, considering you already have your API resources created as /get-data, /put-data, etc.
if you want to follow API best practices, use a noun like /data for the URL and resource methods with the HTTP methods GET, PUT, etc. https://swagger.io/resources/articles/best-practices-in-api-design/
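With an HTTP API custom domain, that path field is the API mapping key — a small boto3 sketch of mapping a stage under /v1 (the API ID is a placeholder):

import boto3

apigw = boto3.client("apigatewayv2")

# Map the "prod" stage of the API under https://api.domain.com/v1/...
apigw.create_api_mapping(
    DomainName="api.domain.com",
    ApiId="a1b2c3d4e5",      # placeholder API ID
    Stage="prod",
    ApiMappingKey="v1",
)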
I suppose I will need to have a different stage per path as well, which would probably trigger a different lambda?
yea, you can have another custom domain for dev and map it to the dev stage, e.g. [api.dev.example.com/v1](http://api.dev.example.com/v1) to the dev stage and [api.example.com/v1](http://api.example.com/v1) to the prod stage
if another domain is too much you can do path-based mappings like [api.example.com/dev/v1](http://api.example.com/dev/v1) to the dev stage and [api.example.com/prod/v1](http://api.example.com/prod/v1) to the prod stage.
2023-01-25
2023-01-26
Hey. Would you recommend paid AWS support plans? We currently have a business support plan for our prod account which in fact we didn’t use for the last two years. Leave it like that “just in case”, or switch it on/off whenever we have the need for extended support? How do you handle this? The costs add up to a fairly huge amount.
I recommend them, but my team uses it fairly frequently. While it might not always net results, we use support as a sounding board to validate our implementation, or if we’re about to start work on a new feature we’ll throw out a support request for input as to how we should solve for x.
but if you have an established environment that doesn’t change very often and your business wants to take on the risk of a longer response time during events to save money, it’s not inherently wrong.
We’ve got a setup with ~20 accounts. I’ve got business support enabled in 6 of them, planning to cut it down to 4. If I need help with an issue I either replicate it in an account with support enabled, or I enable it in the affected account. They don’t seem to give any less help if you have just enabled the support. So turn it off, save the money, and enable it when needed.
oh yeah our org is ~ 50 accounts and we only have it enabled in 2 accounts. good call on enabling when needed.
sorry for the late reply. Thanks for your answers, that already makes the decision much clearer
2023-01-27
Hello :wave: !
Some questions related to an AWS Aurora (Postgres-compatible) Serverless v2 cluster with Multi-AZ (created via Terraform with count=2 on aws_rds_cluster_instance - no multi-az config is available via TF for Serverless v2), in case anyone knows:
• Does the cluster use all instances created there (writer and readers) via the cluster endpoint, or are the readers on standby until they are needed?
• Do we have to specify allocated storage and storage type?
Is anyone using multi architecture EKS + self-hosted github runners for multi architecture builds? The point would be to allow something like this:
runs-on: ubuntu-20.04-arm64-large
in GitHub workflows. Is that overly complicated, or even a valid concept? Is there some easier option we haven’t considered for self-hosted multi-architecture builds? Or maybe we just go with GitHub-hosted runners. Is it already built in, e.g. can Karpenter deploy arm nodes based on the target architecture of the pods, or something like that?
Anyone know if you can subscribe to when new AMIs are released by AWS? AWS Inspector is identifying an issue in the latest image, so it would be nice to know when to redeploy.
You can hook it to EventBridge and the rest of the magic you can do yourself.
This post is courtesy of Arend Castelein, Software Development Engineer – AWS Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has […]
Nice one! Thanks a million
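For anyone else, a small sketch of querying that public SSM parameter for the latest Amazon Linux 2 AMI (the same parameter updates can then be caught via the EventBridge hook mentioned above):

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Public parameter maintained by AWS with the latest Amazon Linux 2 AMI ID
param = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
)
print(param["Parameter"]["Value"])  # e.g. an ami-... ID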
2023-01-28
Hi all,
I have a standard API->Lambda->Dynamo app.
Currently I have the API on an aggressive throttle and quota to minimise usage and get DOW. Would you have any other advice on how to go about it without using expensive resources like WAF? (it’s only an open-source VSCode extension I am building)
Thanks in advance
WAF is $8/month that’s pretty cheap
Hm. Might check that.
It’s also a free VSCode extension. So I try to keep cost low.
Can I recommend storing the user data in their own github or gitlab account then
then you don’t need to worry about the DDB cost either
dump the WAF
and use APIGW HTTP API => Lambda
hm… The whole idea is to have something like pastebin, but in vscode, and way easier
open to thoughts though !
using gists as the backing store for that seems like an obvious solution
(or a separate git repo)
Basically it’s the same as this: https://marketplace.visualstudio.com/items?itemName=kenhowardpdx.vscode-gist
Extension for Visual Studio Code - Create, open and edit Gists
gists is a good idea. gonna have to work with Github OAuth
the vscode extensions sync plugin does this, I bet there is some good code there to copy
Synchronize your Visual Studio Code Settings Across Multiple Machines using GitHub GIST
And the default settings sync built into VS Code has some more code available: https://github.com/microsoft/vscode/issues/88309
Problem
There are currently some extensions that attempt to provide authentication abilities that can be reused by other extensions. (An example being the Azure Account extension). Now that we’ve begun working on login for settings sync, it’s worth revisiting if authentication should be a first-class concept in VS Code. By exposing an API to contribute an authentication flow
• the core of VSCode can potentially leverage authentication
• other extensions can leverage authentication
• UI for account management could be centralized
Proposal
I propose introducing a concept of an “AuthenticationProvider”. Such a provider implements methods for logging in and logging out of a specified account, and exposes a list of accounts that are currently available with an event listener for changes to these. This abstracts away refreshing tokens from consumers - the AuthenticationProvider extension can manage refreshing in the background and fire an event when the accessToken has been changed.
export interface Account {
  readonly id: string;
  readonly accessToken: string;
  readonly displayName: string;
}

export interface AuthenticationProvider {
  readonly id: string; // perhaps "type"? Would be something like "GitHub", "MSA", etc.
  readonly displayName: string;

  accounts: ReadonlyArray<Account>;
  onDidChangeAccounts: Event<ReadonlyArray<Account>>;

  login(): Promise<Account>;
  logout(accountId: string): Promise<void>;
}

export namespace authentication {
  export function registerAuthenticationProvider(provider: AuthenticationProvider): Disposable;
  export const authenticationProviders: ReadonlyArray<AuthenticationProvider>;
}
Consumers would need to know the id of the provider they’re looking for. For example, the settings sync code would look for an “MSA” provider since this is what the setting sync backend currently needs.
Since the authentication provider extension would be activated in each VS Code window, the extension would be responsible for synchronizing state across instances. By default, such extensions would have [“ui”, “workspace”] extensionKind, so that they can store and read credentials on the local machine in both the desktop and web case.
2023-01-29
2023-01-30
If you need documentation and lists related to permissions in one place, this might be useful … https://aws.permissions.cloud/
2023-01-31
hi folks
if I apply a usage plan on my REST API GW, after the quota is met and I get HTTP 429, do I get charged for any subsequent API calls or does AWS take care of it? Can’t find anything in the docs
I haven’t looked at the west regions for quite a while. Is us-west-1 over capacity lately? us-west-1c is unavailable.
aws ec2 describe-availability-zones --region us-west-1
{
    "AvailabilityZones": [
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1a",
            "ZoneId": "usw1-az3",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        },
        {
            "State": "available",
            "OptInStatus": "opt-in-not-required",
            "Messages": [],
            "RegionName": "us-west-1",
            "ZoneName": "us-west-1b",
            "ZoneId": "usw1-az1",
            "GroupName": "us-west-1",
            "NetworkBorderGroup": "us-west-1",
            "ZoneType": "availability-zone"
        }
    ]
}
Anyone here have success with AWS SSO account delegation? I heard it was buggy when it was first released and I’m wondering if that is still the case.
What kind of bugs were there? I have several projects where I manage SSO (e2e) with Terraform and I had no problems since.
the only issue I ran into was when I wanted to reassign the delegation to another region; the process support recommended was ugly. Support told me they were working on a feature to make that “easy”, but I have since switched orgs and haven’t needed it, so not sure where that ended up.
AWS SSO isn’t a global service? Why did you need to reassign a region?
Not sure what the bugs were… I just remember @Erik Osterman (Cloud Posse) and others talking about how it wasn’t working the way they wanted it to when it first came out.
@Peter Luknár were you using delegated management? I.e. https://aws.amazon.com/blogs/security/how-to-delegate-management-of-identity-in-aws-single-sign-on/
September 26, 2022: This blog post has been updated to reflect corrections on sample codes. September 12, 2022: This blog post has been updated to reflect the new name of AWS Single Sign-On (SSO) – AWS IAM Identity Center. Read more about the name change here. Note on May 13, 2022: AWS IAM Identity Center […]
Yes, I can’t remember what the problems were, only that it was at odds with doing it in Terraform. @Ben Smith (Cloud Posse) were you the one working on this?
I’ve done this a couple of times at this point. Essentially our aws-sso component creates permission sets only; the rest is all still done through the console. The limitation (or at least one of them) was probably no Terraform / API support, and it looks like that’s been resolved.
They at least now have account assignment and identitystore.
the main part is really setting it up with federation and setting up automatic provisioning.
Overall it seems like it’s in a good 1.0.0 state at this point
Good stuff – Thanks for weighing in @Ben Smith (Cloud Posse) + @Erik Osterman (Cloud Posse)
@Matt Gowie Yes, in TF I manage these resources:
• Identity store group (aws_identitystore_group)
• Identity store user (aws_identitystore_user)
• Identity store group membership (aws_identitystore_group_membership)
• Permission set (aws_ssoadmin_permission_set)
• SSO customer managed policy (aws_ssoadmin_customer_managed_policy_attachment)
• SSO account assignment (aws_ssoadmin_account_assignment)
As @Ben Smith (Cloud Posse) mentioned, the only thing that needs to be done manually in the console is federation; the rest (onboarding/offboarding, managing policies) is automated. The only other thing is to renew the cert from time to time.
Gotcha – Thanks Peter