#aws (2022-09)
Discussion related to Amazon Web Services (AWS)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-09-01
![Vlad Ionescu (he/him) avatar](https://avatars.slack-edge.com/2020-10-03/1417676895681_ea45b3f22e5fea04f2fc_72.png)
This is going to make a lot of people happy: https://aws.amazon.com/about-aws/whats-new/2022/09/aws-iam-identity-center-apis-manage-users-groups-scale/
AWS is launching additional APIs to create, read, update and delete users and groups in AWS IAM Identity Center (successor to AWS Single Sign-On)
![cool-doge](/assets/images/custom_emojis/cool-doge.gif)
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
first customer managed policies and permission boundaries, now user and group management! hurrah! now if they’d just separate it from the org and make it a standalone service, i’d be ecstatic!
2022-09-02
![Jan-Arve Nygård avatar](https://secure.gravatar.com/avatar/04d3edf28ff98940ed69f268b54a93ea.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Anyone else using Account Factory for Terraform and having issues with the CodeBuild job for creating the customization pipeline layer for Lambda looping and being built on every terraform plan and apply?
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
So right now we have a bunch of S3 buckets and each bucket has their own lambda function and corresponding IAM roles/policies to be sure that said function can 100% only access that bucket. Is there a way to consolidate down to a single policy for all but still enforce that least-access principle? Playing around with conditionals TagKeys
and ResourceKeys
, but can’t seem to find the proper DWIW.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
It would be possible but it sounds like a bad idea
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Since buckets have a global namespace, there’s no guarantee you will always get the bucket name you want.
But more importantly, complex IAM policies are a special circle of hell all by themselves. Why would you change something that works for something that’s clever?
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
Because we’re hitting the hard caps
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
X policies * y customers is approaching 5k, so we’re trying to figure how to cut that down while keeping least access
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Makes sense. If you ask AWS support, they will write policies like this for you
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Conditions and abac are hard
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Could I suggest instead an AWS credentials vending machine, which lambda uses to get credentials that are scoped directly to the relevant bucket via a role that has the customer account imbedded in it?
It might also help for me to understand what actions you are taking with the bucket in question to give a recommendation
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
It’s really just mostly get/put operations. Wanting to make sure that regardless of what code gets uploaded to a lambda, make sure from a policy perspective. That the trigger can only operate on the bucket that it was triggered from
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
I’d probably create an intermediary that can do the validation and generate a presigned get/post to pass to the lambda to trigger. Then it doesn’t even need any credentials
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
anyone using the terraform-aws-eks-cluster
and terraform-aws-eks-node-group
modules setting the ENABLE_POD_ENI
for the aws-node
to tell the CNI to utilize pod security groups?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Andriy Knysh (Cloud Posse)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
we did not use it (looks like a new feature), but it looks like it requires two steps to enable this:
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
The following command adds the policy AmazonEKSVPCResourceController to a cluster role.
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
--role-name ${ROLE_NAME}
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
which can be done here w/o modifying the module https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/variables.tf#L98
variable "node_role_policy_arns" {
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
or could be added here as another policy attachment (requires module modifications) https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/iam.tf#L39
resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
steps #2 to execute
kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
Yeah I saw there qas an additional IAM policy needed to the role which I didn’t see as hard to accomplish, as you said it could be an additional policy attached to the role not needed to be done in the module per se. I was however not seeing anything apparent to set the necessary env variable to ‘true’ as I can see node groups deployed via the module have it set to ‘false’ bit that seems like just default values
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
This was more an exploratory inquiry but I have been asked to deploy out a Windows node group to our EKS cluster and preferably via TF
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
you mean you want to set ENABLE_POD_ENI
via TF and not calling kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
?
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
@Andriy Knysh (Cloud Posse) Yes, I was curious if the TF module already supported a way to set this or if otherwise possible to set using the TF as if we went with using it would like to deploy out with the TF not execute additional CLI commands. Right now without it you end up with node level security groups which are fine if you trust all pods running in the cluster on those nodes, I’m just looking into LOE to enable pod level security groups with existing deployment method that could reduce the effective blast radius.
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
k8s resources can be provisioned using terraform kubernetes provider, but I’m not sure what can be used to set env
2022-09-03
2022-09-04
![Niv Weiss avatar](https://avatars.slack-edge.com/2022-03-22/3278518702036_e8e9dc4b640af88a835a_72.jpg)
We are uploading our product to AWS marketplace. Where do I need to provide this one license secret key
?
Thanks!
![Nyshawn Burton avatar](https://avatars.slack-edge.com/2022-09-03/4030360303877_cfa5eee21c19907e0958_72.png)
Not entirely sure but you will need to provide the license secret key in the AWS marketplace under the product listing.
2022-09-05
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hi, everyone aware sitemap.xml, my problem is ngnix will take sometime to load the proxy pass.
2022-09-06
![idan levi avatar](https://avatars.slack-edge.com/2021-10-18/2629280056609_a23e173158a977252a76_72.png)
Hey all! I’m using route53 as my DNS provider and Nginx-ingress-controller as ingress in my k8s env. I want to redirect between 2 ingresses, for example, all request that go to app.x.io will redirect to app.x.com. tried to create an CName alias but it doesn’t work. Does someone have an idea?
![Tommy avatar](https://avatars.slack-edge.com/2022-08-16/3932880992839_ca082bd1ed39856f6acc_72.gif)
try A alias instead of CNAME
![idan levi avatar](https://avatars.slack-edge.com/2021-10-18/2629280056609_a23e173158a977252a76_72.png)
Cannot cause the original record (app.x.io) is a CNAME and A alias is looking for A recored
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
This is a really oddball solution but, if you have the stomach for it:
- create an S3 bucket website with 0 content and a rule to redirect requests to app.x.com
- create a route 53 entry for app.x.io and add the S3 bucket as the target.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
Configure your bucket as a website by setting up redirect locations where you can redirect requests for an object to another object.
Route traffic using Route 53 to a website that is hosted in an Amazon S3 bucket.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Note that this solution is the most cost effective (compared to running a webserver on EC2/ECS or using an ALB).
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Before you create the bucket, keep these points in mind (since this is the only way it will work)
Value/Route traffic to
Choose Alias to S3 website endpoint, then choose the Region that the endpoint is from.
Choose the bucket that has the same name that you specified for Record name.
The list includes a bucket only if the bucket meets the following requirements:
• The name of the bucket is the same as the name of the record that you’re creating.
• The bucket is configured as a website endpoint.
• The bucket was created by the current AWS account.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
This one it the most important:
• The name of the bucket is the same as the name of the record that you’re creating.
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hey all, can we have same size of cpu and memory in ecs fargate. ex: cpu=2048 and memory = 2048 ?
![Tommy avatar](https://avatars.slack-edge.com/2022-08-16/3932880992839_ca082bd1ed39856f6acc_72.gif)
CPU value Memory value (MiB) 256 (.25 vCPU) 512 (0.5GB), 1024 (1GB), 2048 (2GB) 512 (.5 vCPU) 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB) 1024 (1 vCPU) 2048 (2GB), 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), 8192 (8GB) 2048 (2 vCPU) Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB) 4096 (4 vCPU) Between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB)
![Tommy avatar](https://avatars.slack-edge.com/2022-08-16/3932880992839_ca082bd1ed39856f6acc_72.gif)
that are the allowed combinations (copied from the documentation)
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Thank you
![Tommy avatar](https://avatars.slack-edge.com/2022-08-16/3932880992839_ca082bd1ed39856f6acc_72.gif)
so, in your case: 2048 (2 vCPU) Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)
![Jonas Steinberg avatar](https://avatars.slack-edge.com/2022-05-24/3585152935089_6942cc6f1d84195e388b_72.jpg)
Curious what tags people think are critical? Here’s a list of the ones I think are generally useful, but would sure love to learn more:
• environment: [dev, qa, staging, prod, whatever]
• version control: [github, gitlab, whatever]
• cicd: [circle, github, gitlab, whatever]
• needs-to-stay-on-24hours: [true, false]
• various-can-cannot-be-public: true, false]
• chargeback_id: 123456789
• department: [finance, it, eng, whatever]
• repo: some-github-repo
• product_owner: [[email protected]](mailto:[email protected])
still thinking
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
We have tags that specify:
• Owner (business unit, service name)
• Source (source repo and path)
• Environment
![Zoltan K avatar](https://avatars.slack-edge.com/2022-03-01/3162051615815_26a5d2c4e672b787b067_72.jpg)
we called CostCenter what you have as chargeback I guess. I would use camelCase or similar naming for all tags but not mixed dash or underscore. we had additional info on classification e.g data classification for s3 bucket also service tier could be a good addition imo. plus I see you have product owner but I would add product as well just for grouping, tech contact also missing… e.g. LauchedBy / OwnerTeam etc
2022-09-07
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
can anyone help me to ..assign ecs fargate public ip to target group, now private ip is assigned on target group.
![deniz gökçin avatar](https://avatars.slack-edge.com/2022-04-01/3315876182103_97282c9e64652354b035_72.png)
Hello I am having problems with Cloudmap + ecs service discovery. I am not able to ping or dig a container from another container(using ecs exec) in the same ecs fargate cluster(awsvpc mode). Anyone had a similar problem? Looking forward for replies. Thanks!
2022-09-08
![Eric Berg avatar](https://avatars.slack-edge.com/2022-02-23/3149638965779_b5a77c77548365fff07f_72.jpg)
When my AWS managed node groups (created with terraform-aws-modules/eks/aws//modules/eks-managed-node-group
) change using Terraform (or related launch configs, security groups, etc.), and the MNG’s ASG is recycled, I have a min/max/desired or 1/2/1, and during the recycling, it spins up up to 7 additional EC2 instances, before settling down on a single one.
Anybody else see this and/or know how to manage it?
![Eric Berg avatar](https://avatars.slack-edge.com/2022-02-23/3149638965779_b5a77c77548365fff07f_72.jpg)
This is expected behavior, and it’s based on the number of subnets. For example, we’re deployed in us-east-2, so there are 3 subnets, and our MNG is set to 1/1/1, so it spins up 2 new nodes in each AZ, before settling on one.
The Amazon EKS managed worker node upgrade strategy has four different phases described in the following sections.
2022-09-09
2022-09-10
![Taner avatar](https://secure.gravatar.com/avatar/42ef4b2d3b49722064eb505a7fb4437c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Hello all, I am having trouble with terraform
. Basically the problem is somewhat related to unreadable vpc_id
although I can see it gets read on the state file. Anybody has similar error before?
![Alexandr Bortnik avatar](https://avatars.slack-edge.com/2022-09-10/4066123136404_a2c12c1d47591a691d73_72.png)
Hello!
I would like to clarify about cloudposse/eks-node-group/aws, so is it possible to disable random_pet ?
2022-09-11
![idan levi avatar](https://avatars.slack-edge.com/2021-10-18/2629280056609_a23e173158a977252a76_72.png)
Hey all Small question about Route53, I’m using Kinsta as my domain host and Route53 as my DNS mgmt. i need to renew my SSL Certificate in my domain. I didn’t understand to the end what is the process to do it with the TXT record on Route53, someone is able to few questions?
![venkata.mutyala avatar](https://avatars.slack-edge.com/2022-01-10/2935964026964_e3525ee61170d7dc3198_72.png)
Hi you likely just need to add the text record to route53
![venkata.mutyala avatar](https://avatars.slack-edge.com/2022-01-10/2935964026964_e3525ee61170d7dc3198_72.png)
Basically go into route53 and create the record they tell you with the value they provided you with
![venkata.mutyala avatar](https://avatars.slack-edge.com/2022-01-10/2935964026964_e3525ee61170d7dc3198_72.png)
^ not sure if that helps or not. The txt records allows them to verify that they can give you a cert for the domain. Otherwise you could request a cert for any domain and easily get a cert
2022-09-12
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hi all, Our database has been attacked by sql injection, we are using aurora mysql and cpu utiliztion almost 100%, how can i stop this any suggestions ?
![akhan4u avatar](https://secure.gravatar.com/avatar/8f1b85160628385ed2ae16794b68cc58.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
Maybe list all active connections and verify if its the same IP for attack and block them with a security_group rule
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
doesn’t sound like IP restrictions would help if the attack is SQL injection, you’ll want to kill the processes that are eating the CPU then patch the application(s) ASAP. If this is going to take “too long” you might choose to make your application connection to the DB read-only and/or potentially take an outage. but these are all considerations for the business team.
The MySQL database hangs, due to some queries. How can I find the processes and kill them?
![jedineeper avatar](https://secure.gravatar.com/avatar/51ef9324b2eb6d67fc7be5ac6803102d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
anyone got an advice on how I could better present a service in EKS as an origin for a cloudfront distribution? I’m currently just going through my ingress controller to a domain name that the distribution reads, but that means I have an intermediate domain name for the ingress as well as a public origin that I’d rather secure down to just cloudfront.
![Denis avatar](https://avatars.slack-edge.com/2022-07-05/3755698025589_2dee8d81d277563f5d20_72.jpg)
I don’t think I can help you on this front but I am genuinely curious about your use case here. What kind of an EKS service is it that you need the CloudFront to deliver? I’ve only used CF for static websites and presenting static large media files, so that’s why I’m asking.
![jedineeper avatar](https://secure.gravatar.com/avatar/51ef9324b2eb6d67fc7be5ac6803102d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
running a nodejs app, it has some static elements but a bunch of dynamic stuff as well. using cache-headers per endpoint to dictate to CF when stuff should be cached or not but it gives me a single endpoint for all the content.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Hey folks — Quick AWS Route53 question I have while migrating a client’s DNS architecture:
Is it possible to have two Route53 Hosted Zones control the same domain (e.g. *.example.com) across separate accounts? In that I have some records for www.example.com and *.example.com on Hosted Zone #1 and then I have similar records for *.example.com on Hosted Zone #2 as well?
I am hoping so if they both point their NS records at the correct, authoritative nameservers, but I figured I’d check here before I tested this out.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
If they are public hosted zones, then yes it’s easy. In the zone hosting example.com, create ns records for subdomain.example.com and you’re golden
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you can have [example.com](http://example.com)
set up in Account A and [subdomain.example.com](http://subdomain.example.com)
in Account B — you would just setup the NS from Account A to point the Nameservers for Account B
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
If they are private hosted zones, you need to do magic with route53 resolver rules, since private zones do not honor ns records
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah no — Sorry misunderstanding. I’ve done a hosted zone delegation like example.com in one account and then *.subdomain.example.com in another account.
What I’m trying to do is:
Account One (Legacy) — Existing Hosted Zone for example.com Account Two (New) — New Hosted Zone for example.com
I want records that are created in both Hosted Zones to work. And then I’ll be creating delegated (e.g. *.subdomain.example.com) hosted zones in other accounts.
I don’t think the account boundaries actually matter, but it’s just to illustrate the point: This is because I’m working with a client who has all of their resources in one account right now and we’re building out a proper account hierarchy for them now.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
I’m re-reading my initial question and I see how I made that confusing, my bad.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
No, I don’t think you can do that? I’m trying to think how the ns records would look… You might be able to create the zones and records, but at some point you have to transfer the public ns records so public name servers resolve from the new zone… It’s basically a zone transfer
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah this is from the AWS Route53 FAQs:
Q. Can I create multiple hosted zones for the same domain name?
Yes. Creating multiple hosted zones allows you to verify your DNS setting in a “test” environment, and then replicate those settings on a “production” hosted zone. For example, hosted zone Z1234 might be your test version of example.com, hosted on name servers ns-1, ns-2, ns-3, ns-4, ns-5, ns-6. Similarly, hosted zone Z5678 might be your production version of example.com, hosted on ns-7, ns-8, ns-9, ns-10, ns-11 and ns-12. Since each hosted zone has a virtual set of name servers associated with that zone, Route 53 will answer DNS queries for example.com differently depending on which name server you send the DNS query to.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
But that doesn’t sound like what I would want…
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
what’s the goal of having both zones handling queries? not doubting you, just making sure I’m not recommending something that breaks the goal
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
I don’t want to touch the client’s existing Hosted Zone in their legacy all-in-one account. I’d rather leave that alone as it is and then manage a new hosted zone for all new records and delegated zones.
![ghostface avatar](https://avatars.slack-edge.com/2022-07-13/3796132684516_877916c98e328c30312b_72.jpg)
Hi all,
I have a pod in EKS configured with a ServiceAccount which configures a role for the pod to use. so AWS_ROLE_ARN=arn:aws:sts::000000000:assumed-role/podrole
aws sts get-caller-identity
{
"UserId": "0000E:botocore-session-0000000",
"Account": "000000",
"Arn": "arn:aws:sts::000000000:assumed-role/podrole/botocore-session-222222222"
}
i want to allow this role to assume another role in a different account via a profile in ~/.aws/config
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
credential_source = Environment
this is an example from the docs here. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
i was hoping credential source would pick up the AWS_ROLE_ARN
env vars set by the service account.
aws sts get-caller-identity --profile marketingadmin
Error when retrieving credentials from Environment: No credentials found in credential_source referenced in profile marketingadmin
does anyone have a work around?
2022-09-13
![deniz gökçin avatar](https://avatars.slack-edge.com/2022-04-01/3315876182103_97282c9e64652354b035_72.png)
Hi all! A quick aws security question. Is there anyone who is using aws security hub and aws config with aws organizations? I am not able to see the resources from member accounts and I have “Config.1 AWS Config should be enabled” error. Do I need to enable aws config in each member account manually?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you can setup a delegated administrator account from your org settings and within that account you can configure security hub to automatically enroll all member accounts
![deniz gökçin avatar](https://avatars.slack-edge.com/2022-04-01/3315876182103_97282c9e64652354b035_72.png)
@Darren Cunningham from security hubs side, everything looks fine. I can see the accounts in my organization. I believe my problem is with aws config. I am not sure on how to enable it in member accounts. Does delegated administrator account handle enabling aws config?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
ah sorry, IIRC AWS config has “delegated admin” but the rollout of enabling AWS Config in all accounts/regions is not something that’s integrated into the product but there is a CF StackSet that’s provided: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html#config-how-to-enable
Learn about the requirements to enable and configure AWS Config before you enable Security Hub.
![deniz gökçin avatar](https://avatars.slack-edge.com/2022-04-01/3315876182103_97282c9e64652354b035_72.png)
I only have 3 accounts per environment(qa, prod, staging, management) what do you think is the difference between enabling the app config manually vs deploying the stackset?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
well AWS Config also needs to be deployed per region so it’s accounts x regions which doesn’t sound fun to do manually
![deniz gökçin avatar](https://avatars.slack-edge.com/2022-04-01/3315876182103_97282c9e64652354b035_72.png)
thank you after enabling config in the regions that my resources are in and after 24 hours, I was able to see the security scores in the management account’s security hub dashboard.
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
I am trying to get the aws-ebs-csi-driver helm chart working on a EKS 1.23
cluster.
The message I am getting from PVC events
failed to provision volume with StorageClass "gp2": error generating accessibility requirements: no topology key found on CSINode
The CSI topology feature docs say that:
• The PluginCapability
must support VOLUME_ACCESSIBILITY_CONSTRAINTS
.
• The plugin must fill in accessible_topology
in NodeGetInfoResponse
. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
• During CreateVolume
, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements
.
I am not sure how to configure these points.
![Adnan avatar](https://secure.gravatar.com/avatar/86fbcb1983990cec4ffd9e7f6b009669.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0009-72.png)
I looked at the worker nodes (ec2) launch template / user data. The kubelet root path was not the standard /var/lib/kubelet
. Instead it was a different one. I fixed the missing CSINode driver information by updating the volumes host paths with the correct kubelet root path.
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
hello. what is the limit of the subaccounts ? If I would like to run customer cluster in separate subaccount is that possible? Or i have a limit ?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
there’s a soft limit of 10 accounts but that can be increased with a service request - largest org I’ve seen was ~220 accounts but I’m sure there are larger ones
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
thanks
2022-09-14
![Soren Jensen avatar](https://avatars.slack-edge.com/2022-03-29/3335297940336_31e10a3485a2bd8c35af_72.png)
One thing to be aware of is it takes a lot more effort to delete an account than creating one. So depending on how long engagement you expect from your users it might not be worth the hassle.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
it’s a lot easier now that they introduced https://docs.aws.amazon.com/cli/latest/reference/organizations/close-account.html
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
but it still has its limits
2022-09-15
![Bogdan avatar](https://secure.gravatar.com/avatar/f95a0ce1c97af150589253bb0a5c7393.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
cross-posting from hangops since I’m really looking for a solution:
does anyone know if there’s an automatic way to block pulling/consuming a Docker image from AWS ECR if said image has been discovered to have vulnerabilities? By automatic I’m thinking of even updating IAM policies with a Deny statement…
![Maciek Strömich avatar](https://secure.gravatar.com/avatar/98de12365b633b063e208220100d4594.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0002-72.png)
you mean something like this? https://github.com/aws-samples/aws-securityhub-remediations/tree/main/aws-ecr-continuouscompliance
![cool-doge](/assets/images/custom_emojis/cool-doge.gif)
![Bogdan avatar](https://secure.gravatar.com/avatar/f95a0ce1c97af150589253bb0a5c7393.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
good find @Maciek Strömich - it’s what I was looking for
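For a feel of what that kind of automation does, here is a rough sketch (not the linked sample’s actual code) of the core step: given an ECR image-scan EventBridge event, produce a Deny statement you could merge into the repository policy when critical findings exist. The event fields follow the “ECR Image Scan” event shape, simplified for illustration; account/region/repo values are placeholders.

```python
# Hypothetical remediation helper: build a Deny statement for an ECR
# repository policy when an image-scan event reports CRITICAL findings.
def deny_pull_statement(event):
    detail = event["detail"]
    counts = detail.get("finding-severity-counts", {})
    if counts.get("CRITICAL", 0) == 0:
        return None  # no critical findings -> leave the policy alone
    repo_arn = (
        f"arn:aws:ecr:{event['region']}:{event['account']}"
        f":repository/{detail['repository-name']}"
    )
    return {
        "Sid": "DenyPullOfVulnerableImages",
        "Effect": "Deny",
        "Principal": "*",
        # The two actions a docker pull needs against a private repo
        "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
        "Resource": repo_arn,
    }

event = {
    "account": "111122223333",
    "region": "eu-west-1",
    "detail": {
        "repository-name": "my-api",
        "image-digest": "sha256:0123456789abcdef",
        "finding-severity-counts": {"CRITICAL": 2, "HIGH": 7},
    },
}
print(deny_pull_statement(event)["Effect"])  # Deny
```

In the real sample the heavy lifting (Security Hub findings, policy attachment) is done by the AWS-managed pieces; this only shows the policy-construction idea.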
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Hello all, I am testing AWS Organizations with SSO and an external IdP. Is it possible that SAML is the only option and there’s no OIDC?
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
https://docs.aws.amazon.com/singlesignon/latest/userguide/other-idps.html and is this requirement still valid? SAML 1.1?
Learn about how other external identity providers work with IAM Identity Center.
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
• IAM Identity Center requires a SAML nameID format of email address (that is, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress).
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
solved.
2022-09-16
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
2022-09-17
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
Hello all, can I store Docker images in S3 instead of ECR in order to optimize cost?
For example: if I use ECR with VPC endpoints (ecr.dkr, ecr.api), pricing is per VPC endpoint per AZ ($/hour), which is costly. But if I store Docker images in S3 with the gateway VPC endpoint for S3, which is free, and use the S3 image path inside the task definition, the cost might be less.
What is the best practice? What would be the disadvantages of storing Docker images in S3 instead of ECR? Is storing Docker images in S3 even a correct approach?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you probably could rig up a solution to publish images to S3 and pull them from S3, but the cost of all that complexity (and likely marooning yourself from the integrations with image scanning, EKS, Fargate, etc.) just isn’t worth it.
![this](/assets/images/custom_emojis/this.png)
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
I am using ECS Fargate, and I think I can integrate it with S3 and point to S3 inside the task definition instead of ECR?
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
Because there is a lot of cost with ECR private endpoints. ECR uses S3 internally, so why not use S3 instead of ECR with the free S3 gateway endpoint?
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
Could you please elaborate on why it isn’t worth it?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I don’t have all the data points about your situation, so I could be wrong. But in general (IMO), the more off-script you go, the more complexity you have. Complexity has operational costs: it’s more difficult to onboard other team members to a home-grown solution, it takes longer to modify your solution when it breaks or needs upgrading as services change, and you end up on the outside looking in when AWS improves its integrations.
![jsreed avatar](https://avatars.slack-edge.com/2022-12-06/4491361948977_169d2199777bd480b3dd_72.png)
Agree… increasing complexity to save a buck is a bad idea. At a minimum, use an EC2 instance with S3-backed storage and create a private registry via Docker. Again, costs may be a wash in that scenario.
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
Thanks guys
2022-09-18
2022-09-19
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hi everyone, when I restore an AWS Aurora instance, do I have to create the reader instance manually, or will it create reader instances automatically?
![Karim Benjelloun avatar](https://avatars.slack-edge.com/2020-02-26/970268829476_48bd73425f3094c51c8f_72.png)
Hello, any alternatives for running a managed private CA? I feel the AWS pricing is quite expensive ($400 per month + $0.75 per certificate).
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
Could you self-sign a cert? Any certs could then just be imported into ACM.
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
The drawback with that is that you’d have to renew your certs manually I believe
![Karim Benjelloun avatar](https://avatars.slack-edge.com/2020-02-26/970268829476_48bd73425f3094c51c8f_72.png)
We need to be able to easily create/revoke SSL certificates, since these need to be deployed on IoT devices
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
any reason you need to use a private ca instead of amazon’s ca ?
![Karim Benjelloun avatar](https://avatars.slack-edge.com/2020-02-26/970268829476_48bd73425f3094c51c8f_72.png)
We need to create and deploy Private certificates, we will use these to connect to an MQTT broker
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Hashicorp vault can work as a Private CA, I think. Not sure how much cheaper it would be, especially if you need HA
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
How does the broker know the cert is valid?
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
You can revoke public ACM certificates. It requires a support request so the viability of this depends on how many certs you revoke
![Alanis Swanepoel avatar](https://avatars.slack-edge.com/2022-06-24/3739166585152_acef2e16a544a0e63cbd_72.png)
Private CA - look at easyrsa (this is used by openVPN for their certificates) https://github.com/OpenVPN/easy-rsa
easy-rsa - Simple shell based CA utility
2022-09-20
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
hello, can I create a four-eyes solution with AWS resources for AWS switch-role? The idea is to give users read permission and grant the admin role via switch-role, but only with approval.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
AWS doesn’t natively support step up authorization for multiparty, you would need a dedicated solution for that. Or find a provider that offers off the shelf support
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Do you know any?
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
I think I would need to know more about your specific use case to make a suggestion. Would you be able to add more color to it?
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
We would just like to add another layer of security. As the base step, everybody would get read-only access to the AWS console and/or programmatic access, and a few people (admins) could get admin access if there is any issue in the system, but for that access they would need approval. Using IAM roles.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Usually I’ve found if you need this, something organizationally has gone wrong. Like more than ~8 people using one AWS account. The ROI on segregating AWS accounts at the team boundary is sooo high compared to implementing something like this. Also it won’t work with the console, you’d need something custom outside and it would only support API interactions.
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
What do you mean, ~8 people using one AWS account? You mean 8 admins in the account? We will implement an organization and separate the deployment and infra parts into separate OUs and subaccounts. This four-eyes idea is a big dream of one of the managers, and that’s why I am trying to find something to implement.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Yeah, that is correct currently. My goal in the end is to create an account per customer. OK, for the dev part maybe we will have more than 8 users, but per customer I hope it will be 1 per account.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
I’m not sure what “customer” is here
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
How does “8 people per account” work with EKS?
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
We don’t use EKS. We create clusters with kOps using spot instances. The clusters are not too big; we haven’t had any issue with that so far. Customer = companies that bought our product. We have an isolation-by-design requirement.
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
Don’t use EKS. I’ll let you know when I encounter a problem at scale that requires its usage
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
And what is your advice to fulfill the manager’s request? :)
![Warren Parad avatar](https://avatars.slack-edge.com/2022-05-16/3514444274407_77d40ebceaf5a1dde6da_72.jpg)
![Zoltan K avatar](https://avatars.slack-edge.com/2022-03-01/3162051615815_26a5d2c4e672b787b067_72.jpg)
Let me add my 2 cents to the conversation above. I would grant STS AssumeRole access to the users, so when you add a user to a specific role you can get approval for that… But when a user uses the role, all you can do is log the activities… approval at that granularity isn’t possible, and it would make the approver’s life a nightmare… logging should be more than enough…
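The AssumeRole-plus-logging approach Zoltan describes could look roughly like the following trust policy on the admin role (the account ID and user name are placeholders). It restricts who may assume the role and requires MFA, with CloudTrail supplying the audit trail rather than a real approval step:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/alice"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" }
      }
    }
  ]
}
```

Adding or removing a user from this trust policy is where the human approval (e.g. a reviewed pull request) would live; the role use itself is only logged.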
2022-09-21
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
Looking for a bit of inspiration. I want to walk my AWS accounts on a regular basis (say hourly), catalogue all EC2 instances that meet a certain set of tag conditions, and display details in a ‘status’ type way, e.g. filter all EC2 where tag1=false and tag2!=bob, then print {tag3, tag4, tag5} in a nice dashboard-type table. I thought this would be easy to do with Datadog and tags, but because it’s using just tags or conditional tag searches, it’s bad.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
AWS has several products that can do this
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
AWS Systems Manager Inventory is specifically for EC2
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
Hmm, I’d not looked at it that way,
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
that’s useful to think about
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
AWS Config may also be appropriate
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
config can get me the data, but it’s not great at displaying it
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Why do you want to display it? A wall dashboard?
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
something like that yes
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
see the status of a specific deployed fleet
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
It might be reasonable to get data off the AWS event bus and push it to your own system then (like datadog)
![MJD avatar](https://secure.gravatar.com/avatar/e647d68d5c33e928693ba6884aa184a2.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0011-72.png)
I can ‘get’ the data with ease; visualising it in a human format that’s simple to read is where I’m looking for inspiration.
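The tag filter MJD describes is small enough to sketch: below it runs over the `Instances` shape returned by EC2 DescribeInstances, fed with inline sample data instead of boto3, since the interesting part is the tag logic. The tag names (tag1..tag5) are the placeholders from the message above.

```python
# Build a {Key: Value} dict from the EC2 Tags list shape.
def tags_as_dict(instance):
    return {t["Key"]: t["Value"] for t in instance.get("Tags", [])}

def fleet_status(instances):
    """Return (tag3, tag4, tag5) rows for instances where tag1=false and tag2!=bob."""
    rows = []
    for inst in instances:
        tags = tags_as_dict(inst)
        if tags.get("tag1") == "false" and tags.get("tag2") != "bob":
            rows.append((tags.get("tag3"), tags.get("tag4"), tags.get("tag5")))
    return rows

sample = [
    {"Tags": [{"Key": "tag1", "Value": "false"}, {"Key": "tag2", "Value": "alice"},
              {"Key": "tag3", "Value": "web"}, {"Key": "tag4", "Value": "v1.2"},
              {"Key": "tag5", "Value": "healthy"}]},
    {"Tags": [{"Key": "tag1", "Value": "false"}, {"Key": "tag2", "Value": "bob"}]},
]
print(fleet_status(sample))  # [('web', 'v1.2', 'healthy')]
```

Rendering the rows (HTML table, CloudWatch dashboard widget, Datadog table) is then a separate presentation choice.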
![Joe Niland avatar](https://secure.gravatar.com/avatar/b90c8e752dd648ef229096c60ba2408f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
cli or web interface?
![kirupakaran avatar](https://secure.gravatar.com/avatar/859bdb912a6edf491ba74ea2f2e06ff7.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hi everyone, how can I automate AWS Aurora backups, where partial data should be exported into S3?
![Vinko Vrsalovic avatar](https://avatars.slack-edge.com/2022-08-28/4001596195987_59c9e6f18f4287092f55_72.png)
Can I ask architecture questions? I want to deploy a dotnet 6 application that is backed by PostgreSQL. The application exposes a REST API and also has an internally scheduled process that runs batch processing. I’m torn between splitting up the batch processing from the REST API, using Lambda+API Gateway for the API and a simple ECS container for the batch processing. OR, having containers for both things. I’m thinking about provisioned Aurora for PostgreSQL (serverless v2 seems really pricey for now)
I’m also torn between ECS and EKS, I feel that EKS might be overkill for now.
Any other options I’m missing?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Might be an easier question to ask during #office-hours
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
ECS fargate is probably the lowest complexity approach. Use the same compute type for all your apps for simplicity
![sohaibahmed98 avatar](https://secure.gravatar.com/avatar/7b37243707b021807fb94c79561a22b0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0012-72.png)
I think EKS will increase cost and complexity. ECS Fargate would be best option.
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Resurrecting an old topic. With aws-okta no longer maintained and no longer installable via Homebrew, what are folks using to grant CLI access to AWS via Okta? We use Okta SSO as a SAML provider for our AWS org.
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Scouting around here and in other Slack orgs, so far I’ve gathered (in this order of preference):
• https://github.com/godaddy/aws-okta-processor
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
I use the first one and like it a lot. It has the best model for understanding that there are two tokens/credentials with different expirations (one for okta, one or more per aws role) and managing them separately
![cool-doge](/assets/images/custom_emojis/cool-doge.gif)
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
The other option I’ve used is AWS SSO, with okta as the external IDP for AWS SSO, and SCIM syncing all users and groups… Then you can use anything that understands AWS SSO
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
When doing that, I like granted
a lot, https://github.com/common-fate/granted
The easiest way to access your cloud.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Or Leapp, but Leapp has problems with GovCloud and other non-AWS partitions
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I’m looking into AWS SSO, thanks
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Looks like granted is geared towards browser access?
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I’m looking for a pure CLI tool (if I can login to the browser via CLI that’s a bonus but not the main goal)
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Not necessarily, with granted, assume
will export the creds to your env, and assume -c
will open the console in a browser container-tab, and assume exec
will just run the command with the credential
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
They’re also working on a credential_process
version that won’t muck with the env and will support a refreshable credential
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Nice
![Rohit S avatar](https://secure.gravatar.com/avatar/acee8163c2a0420d9f970885390ae4cf.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
![Larry Gadallah avatar](https://avatars.slack-edge.com/2022-09-20/4123318864609_307ef54175bde913bff3_72.jpg)
Another vote for saml2aws
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I tried out aws-okta-processor
. So far so good. I just don’t see a way to switch between roles. I have to run rm -rf ~/.aws/boto/cache/
every time and have to do another eval $(aws-okta-processor authenticate --environment)
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
Oh I use it with credential_process, to avoid polluting my env and get a refreshable credential for free. So I have a different aws-cli profile for every role
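The credential_process setup loren describes might look something like this in ~/.aws/config (the profile name, Okta organization, user, and role ARN are placeholders; check the aws-okta-processor README for the exact flag names):

```ini
[profile dev-admin]
credential_process = aws-okta-processor authenticate --organization example.okta.com --user jdoe --role arn:aws:iam::111122223333:role/admin --key dev-admin
region = us-east-1
```

Then `aws s3 ls --profile dev-admin` (or any SDK call with that profile) triggers the processor, and the SDK refreshes the credential on expiry without anything being exported into your shell environment.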
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
I think for your use case, there is a cli option to disable the cache
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Cheers, gonna keep exploring
![Andrea Cavagna avatar](https://avatars.slack-edge.com/2021-06-03/2117246507255_286fcae8e21f30cbdc32_72.jpg)
https://docs.leapp.cloud/0.14.3/configuring-session/configure-aws-iam-role-federated/
you can add your okta federated role in Leapp
Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally
![Andrea Cavagna avatar](https://avatars.slack-edge.com/2021-06-03/2117246507255_286fcae8e21f30cbdc32_72.jpg)
for AWS SSO it integrates automatically,
what are the issues with Leapp and cloudgov @loren ? I would love to solve those issues.
Let me know if this docs are solving your problem @yegorski
Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
![Andrea Cavagna avatar](https://avatars.slack-edge.com/2021-06-03/2117246507255_286fcae8e21f30cbdc32_72.jpg)
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Leapp definitely the “wow factor” looks great
![Andrea Cavagna avatar](https://avatars.slack-edge.com/2021-06-03/2117246507255_286fcae8e21f30cbdc32_72.jpg)
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I see, I need a “valid SSO portal URL” so to use this tool I need to set up AWS SSO. This requires changing how I currently have our 6 AWS accounts connected, via federated login
![Andrea Cavagna avatar](https://avatars.slack-edge.com/2021-06-03/2117246507255_286fcae8e21f30cbdc32_72.jpg)
What do you mean? Your accounts are connected via SAML with Okta?
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Yeah right all accounts are connected with Okta SAML, going through the main AWS org account.
![Isaac avatar](https://avatars.slack-edge.com/2022-04-23/3431177368626_901ee4e2884e7d1f9fea_72.jpg)
Old topic but I’ve used https://github.com/dowjones/tokendito in the past.
Generate temporary AWS credentials via Okta.
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
Thanks! Yep, that’s on my radar
![yegorski avatar](https://secure.gravatar.com/avatar/dff3fe554c0be542962fa7d83b0d29bc.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
We decided to stick with aws-okta
2022-09-22
![fotag avatar](https://secure.gravatar.com/avatar/1f2ed855ad529031eed484898c52b68f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Does anyone know why (technically) you can’t delete/modify an RDS instance that’s at stopped
state?
![JoseF avatar](https://avatars.slack-edge.com/2023-11-17/6215230659202_ac23db21c0c0c05010a4_72.jpg)
Does it have deletion protection activated in its config? Otherwise that should not stop you from modifying it.
![fotag avatar](https://secure.gravatar.com/avatar/1f2ed855ad529031eed484898c52b68f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
thanks, I’ll check it
![fotag avatar](https://secure.gravatar.com/avatar/1f2ed855ad529031eed484898c52b68f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Nope, even with deletion protection removed, when you try to delete an instance it asks you to start the cluster first.
![Vinko Vrsalovic avatar](https://avatars.slack-edge.com/2022-08-28/4001596195987_59c9e6f18f4287092f55_72.png)
I’ve seen that, for both serverless v2 and provisioned RDS instances
![Vinko Vrsalovic avatar](https://avatars.slack-edge.com/2022-08-28/4001596195987_59c9e6f18f4287092f55_72.png)
I can’t tell you why; maybe it’s to take the final snapshot? I do have the final snapshot turned on and haven’t tried without it.
![Azar avatar](https://avatars.slack-edge.com/2022-10-07/4184307388294_c6fb315e7099e3915263_72.png)
https://www.reddit.com/r/kubernetes/comments/xlfcs2/what_should_make_me_consider_moving_from_ecs_to/
Has lots of good insights.
2022-09-23
![Aritra Banerjee avatar](https://avatars.slack-edge.com/2021-09-03/2449518581030_e532268f938f4f1e9a80_72.png)
Hi, does AWS Database Migration Service work for RDS-to-RDS transfers? We have a new site going live and we want to sync the prod database with a new database; after everything is verified, we will switch RDS from the old one to the new one.
![Paula avatar](https://avatars.slack-edge.com/2022-09-13/4070142320726_24f91e7b54e97b142967_72.jpg)
Hi! I’m not an expert; I used the service once or twice, but for other purposes. I think DMS should work for that use case (that is the original use case, I guess). You can test it by setting RDS #1 as the source endpoint and RDS #2 as the target endpoint. Be careful if you are using Secrets Manager for the passwords: make a different secret for each RDS, or you can accidentally replicate the data into the original RDS.
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
@Aritra Banerjee: DMS indeed supports RDS like any other source or destination. When you choose RDS as a source, you can now select it from a drop-down list instead of having to manually enter host/port/service details, so that’s a bit nicer. In the docs, you’ll notice that both the source and target lists include RDS.
• https://medium.com/team-pratilipi/how-to-migrate-rds-to-rds-via-dms-b8f9b86f23c
• https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html
• https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Targets.html
![Aritra Banerjee avatar](https://avatars.slack-edge.com/2021-09-03/2449518581030_e532268f938f4f1e9a80_72.png)
wow, great thank you
2022-09-24
2022-09-27
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
Is there a way to force iam_policy_document
to output the principals as a list even if there’s a single element?
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
principals {
  identifiers = ["arn:aws:sts::${local.account_id}:assumed-role/task-role/*"]
  type        = "AWS"
}
gets spit out to
"Principal": { "AWS": "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" }
but OpenSearch wants that as
"Principal": { "AWS": [ "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" ] }
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
What do you mean, open search “wants” the latter format? The two forms are functionally identical.
Perhaps you could specify the statement as raw JSON rather than using the Terraform data source
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
Seems to be an AWS quirk, https://github.com/hashicorp/terraform/issues/6438
Terraform Version
Terraform v0.6.15
Affected Resource(s)
• aws_elasticsearch_domain
Terraform Configuration Files
provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_user" "es" {
  name = "srv_user1"
}

resource "aws_iam_access_key" "es" {
  user = "${aws_iam_user.es.name}"
}

resource "aws_elasticsearch_domain" "es" {
  domain_name = "es1"

  advanced_options {
    "rest.action.multi.allow_explicit_index" = true
  }

  snapshot_options {
    "automated_snapshot_start_hour" = 23
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "es:*",
      "Principal": {
        "AWS": "${aws_iam_user.es.arn}"
      }
    }
  ]
}
CONFIG
}
Debug Output
https://gist.github.com/jritsema/8d4060e703c9a287753e1e0db5c41afd
Panic Output
none
Expected Behavior
An Elasticsearch domain should be created with a policy that grants access to the newly created user.
Actual Behavior
Throws the following error
Error applying plan:
1 error(s) occurred:
* aws_elasticsearch_domain.es: InvalidTypeException: Error setting policy: [ {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "es:*",
"Principal": {
"AWS": "arn:aws:iam::xxxxxxxx:user/srv_user1"
}
}
]
}
]
status code: 409, request id: 5ce1b757-1060-11e6-800a-c363f7f5dcbd
Steps to Reproduce
Please list the steps required to reproduce the issue
terraform apply
Important Factoids
none
References
• GH-4485
Notes
• if I run terraform apply
twice, it works the second time
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
If you have a statement with a wildcard and add a second statement AWS will barf if you don’t listify the principals in the second statement
![Brent Garber avatar](https://avatars.slack-edge.com/2021-08-04/2372755651008_f0024ce2395959ee12de_72.jpg)
Even trying to add it through the AWS Console will fail until you add the []s
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
try using just jsonencode instead of iam_policy_document?
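A minimal sketch of that suggestion, with hypothetical resource names: `jsonencode` gives you full control over the emitted JSON, so the principal in the second statement can be forced into a list (the `[]`s mentioned above) rather than relying on how the `aws_iam_policy_document` data source normalizes it.

```hcl
# Hypothetical example: building the bucket policy with jsonencode
# instead of the aws_iam_policy_document data source, so the shape
# of each Principal is exactly what we write.
resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.example.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "WildcardStatement"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.example.arn}/*"
      },
      {
        Sid    = "SpecificRoleStatement"
        Effect = "Allow"
        # Principal explicitly listified; this is the shape the API
        # reportedly insists on once a wildcard statement is present.
        Principal = { AWS = [aws_iam_role.example.arn] }
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.example.arn}/*"
      }
    ]
  })
}
```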
2022-09-28
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
hello,
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "MyOrgOnly",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::thebucketofmydreams",
"arn:aws:s3:::thebucketofmydreams/*"
],
"Condition": {
"ForAnyValue:StringLike": {
"aws:PrincipalOrgPaths": ["o-funny/r-stuff/ou-path"]
}
}
}
]
}
what is the issue with this? My goal is to give a subaccount in the organization, under an OU, access to a resource that is in another account in the same organization
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
difficult to answer without knowing what the problem is
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I have an account under the infra OU and an account under the dev OU. I have an S3 bucket in infra that I would like to access from the account under the dev OU. But only one bucket and only from that account. I get a 403 when I try to download a file from that bucket
![Joe Niland avatar](https://secure.gravatar.com/avatar/b90c8e752dd648ef229096c60ba2408f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
When you’re not using a wildcard, pretty sure you should be using the “ForAnyValue:StringEquals” operator
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
got the same error when I set it to GetObject and try to download a file
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
if I add a */ after r-stuff… it works… I limited the access to GetObject and the condition now looks like this:
o-funny/r-stuff/*/ou-path/*
can somebody explain why I need the * in the condition? The second one I think I know, but the first?
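For later readers: `aws:PrincipalOrgPaths` is matched against the account’s full path from the org root, including every intermediate OU, so the first `*` stands in for any parent OUs sitting between the root and `ou-path`, and the trailing `*` matches the account (and any child OUs) under it. A sketch of the working condition, using the placeholder IDs from the message above:

```json
{
  "Condition": {
    "ForAnyValue:StringLike": {
      "aws:PrincipalOrgPaths": ["o-funny/r-stuff/*/ou-path/*"]
    }
  }
}
```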
2022-09-29
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
hello, another day, another question. I have a VPC in account A and a private hosted zone in account B. I would like to associate them, but don’t want to use creds from A. I created a role in A that can be called from B, but how could I call it? I need to automate this
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
solved
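For anyone finding this later, the usual cross-account flow for associating a VPC in account A with a private hosted zone in account B looks roughly like this (zone ID, VPC ID, and region are placeholders):

```shell
# In account B (owner of the private hosted zone): authorize the
# association for the VPC that lives in account A.
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0000000EXAMPLE \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123

# In account A (owner of the VPC), e.g. with credentials from an
# assumed role: perform the association.
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0000000EXAMPLE \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123

# Optional cleanup in account B once the association exists.
aws route53 delete-vpc-association-authorization \
  --hosted-zone-id Z0000000EXAMPLE \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123
```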
2022-09-30
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Hello! I have a dedicated connection with Direct Connect. According to the engineer who set up Direct Connect on their end, I should be able to telnet to a host on port 53. He told me that I need to set the primary and backup DNS to x.x.x.1 and x.x.x.2 (I guess this is done by changing the DHCP option sets in the VPC, but I am not sure). Is that the right approach to set DNS per the engineer’s request? If so, how can I reach the instance via RDP on the private subnet? I think an RD Gateway could help, but I am a bit lost; changing the DHCP options makes the instance unreachable via VPC endpoints and SSM sessions
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
port 53 is the DNS port. Why are you telnetting to a host on port 53?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
If so how can I reach the instance via RDP on the private subnet?
This depends on where you are connecting from. Are you connecting from another Windows instance in one of your private subnets?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
If yes, then simply go into your host’s Server Manager > the respective network interface settings > IPv4 > Advanced > DNS
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
set your DNS entries there.
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Yes that’s right it’s a windows vm on a private subnet
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
change your DNS on that Windows server. If you change AWS DHCP options, then you will have wider issues
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
The engineer sent me a screenshot of what I should see when doing telnet (53), he claims I should be able to connect
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Inside the vm? Alright thank you so much!
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Once I change the dns on the windows server, what would be the easiest way to rdp into it?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
Simply launch RDP from your machine and connect to the other machine
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
I am a bit confused, the vm is inside a private subnet and it only has a private ip, can I still access it just with rdp without any vpn or bastion host?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
ok, tell me this…
From where are you trying to access the VM? From your laptop? Or from another Windows host in your VPC?
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
From my laptop
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
Are you connecting your laptop to a VPN?
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Not at the moment; I used to use SSM sessions and RDP into it, but changing the DNS inside the VM makes the host unreachable
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
In all, your laptop and the VM you are trying to connect to must be on networks that have a route between each other. Currently, how is this routing established?
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Via SSM; it opens a port and allows me to access the VM on that given port on localhost
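That flow is SSM port forwarding; for RDP the session typically looks something like this (the instance ID and ports here are placeholders):

```shell
# Forward local port 13389 to RDP (3389) on the instance via SSM,
# then point an RDP client at localhost:13389.
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["13389"]}'
```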
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
if you are able to access the VM via SSM what is the issue that you want resolved?
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
you can also use FLEET MANAGER
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
to directly RDP from AWS console
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
I think I get you
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
you log into that VM using SSM, and on the VM, launch Server Manager > the respective network interface settings > IPv4 > Advanced > DNS, and set the DNS on the VM
![Sudhakar Isireddy avatar](https://secure.gravatar.com/avatar/78a5e45398667f240ac1adebc7782ff4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0007-72.png)
after changing DNS on the VM as described above, you can still connect to it via SSM
![vicentemanzano6 avatar](https://secure.gravatar.com/avatar/d1d21bea59f1c9c1d1eaa1a9f8e7e80f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0014-72.png)
Fleet manager? That sounds good I will definitely give it a go, thank you so much!
![Balazs Varga avatar](https://secure.gravatar.com/avatar/944e59f1543dc43935bda4d7b9be7f85.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0019-72.png)
I did not find any info about modifying a transit gateway. My question is: will there be any outage if I modify the TGW to enable auto-accept of cross-account shared attachments?
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
So I’m looking at being prepared to upgrade an AWS EKS cluster to 1.23+, which requires the EBS CSI driver. Currently using the cloudposse/eks-cluster/aws module and looking to see if anyone else has already attempted this and, if so, what changes are needed
![venkata.mutyala avatar](https://avatars.slack-edge.com/2022-01-10/2935964026964_e3525ee61170d7dc3198_72.png)
Hi did you find a solution for this?
![Jeremy (UnderGrid Network Services) avatar](https://avatars.slack-edge.com/2021-12-29/2893240357986_43abb0cb567d0eb2a80a_72.png)
Not one I particularly like… I manually upgraded the cluster and node group through the console and then updated the Terraform version to match. I found that if I changed the version in the Terraform, the plan would fail.
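For reference, the EBS CSI driver itself can be installed as an EKS managed add-on independently of the module; a hedged sketch, where the cluster name and role ARN are placeholders and the service-account role is assumed to be an IRSA role with the AWS-managed AmazonEBSCSIDriverPolicy attached:

```shell
# Install the EBS CSI driver as an EKS managed add-on before (or as
# part of) the 1.23 upgrade.
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/EbsCsiDriverRole

# Check the add-on status afterwards.
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver
```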