#aws (2022-09)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-09-01

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

This is going to make a lot of people happy: https://aws.amazon.com/about-aws/whats-new/2022/09/aws-iam-identity-center-apis-manage-users-groups-scale/
AWS is launching additional APIs to create, read, update and delete users and groups in AWS IAM Identity Center (successor to AWS Single Sign-On)

cool-doge1
1
loren avatar

first customer managed policies and permission boundaries, now user and group management! hurrah! now if they’d just separate it from the org and make it a standalone service, i’d be ecstatic!

1

2022-09-02

Jan-Arve Nygård avatar
Jan-Arve Nygård

Anyone else using Account Factory for Terraform and having issues with the CodeBuild job that creates the customization pipeline layer for Lambda looping and being rebuilt on every terraform plan and apply?

Brent Garber avatar
Brent Garber

So right now we have a bunch of S3 buckets, and each bucket has its own Lambda function and corresponding IAM roles/policies to be sure that said function can 100% only access that bucket. Is there a way to consolidate down to a single policy for all but still enforce that least-access principle? Playing around with conditions on TagKeys and ResourceKeys, but can’t seem to find the proper DWIW.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It would be possible but it sounds like a bad idea

Alex Jurkiewicz avatar
Alex Jurkiewicz

Since buckets have a global namespace, there’s no guarantee you will always get the bucket name you want.

But more importantly, complex IAM policies are a special circle of hell all by themselves. Why would you change something that works for something that’s clever?

Brent Garber avatar
Brent Garber

Because we’re hitting the hard caps

Brent Garber avatar
Brent Garber

X policies * y customers is approaching 5k, so we’re trying to figure how to cut that down while keeping least access

Alex Jurkiewicz avatar
Alex Jurkiewicz

Makes sense. If you ask AWS support, they will write policies like this for you

Alex Jurkiewicz avatar
Alex Jurkiewicz

Conditions and abac are hard
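
For illustration, a minimal Terraform sketch of the consolidation idea being discussed: a single policy that scopes get/put with an IAM policy variable, assuming the Lambda roles carry a customer principal tag and the buckets follow a naming convention. The tag key and bucket pattern below are hypothetical, not from this thread.

# One reusable policy instead of one per bucket: each Lambda role carries a
# "customer" principal tag and can only touch the matching customer's bucket.
# Tag key and bucket naming convention are illustrative placeholders.
data "aws_iam_policy_document" "per_customer_s3" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:PutObject"]

    # $${...} escapes Terraform interpolation so IAM receives ${aws:PrincipalTag/customer}
    resources = ["arn:aws:s3:::customer-$${aws:PrincipalTag/customer}-uploads/*"]
  }
}

resource "aws_iam_policy" "per_customer_s3" {
  name   = "per-customer-s3-access"
  policy = data.aws_iam_policy_document.per_customer_s3.json
}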

Warren Parad avatar
Warren Parad

Could I suggest instead an AWS credentials vending machine, which Lambda uses to get credentials that are scoped directly to the relevant bucket via a role that has the customer account embedded in it?

It might also help for me to understand what actions you are taking with the bucket in question to give a recommendation

Brent Garber avatar
Brent Garber

It’s really just mostly get/put operations. We want to make sure, from a policy perspective, that regardless of what code gets uploaded to a Lambda, the trigger can only operate on the bucket that it was triggered from.

Warren Parad avatar
Warren Parad

I’d probably create an intermediary that can do the validation and generate a presigned get/post to pass to the lambda to trigger. Then it doesn’t even need any credentials

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

anyone using the terraform-aws-eks-cluster and terraform-aws-eks-node-group modules setting the ENABLE_POD_ENI for the aws-node to tell the CNI to utilize pod security groups?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not use it (looks like a new feature), but it looks like it requires two steps to enable this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The following command adds the policy AmazonEKSVPCResourceController to a cluster role.

aws iam attach-role-policy \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
    --role-name ${ROLE_NAME}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "node_role_policy_arns" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or could be added here as another policy attachment (requires module modifications) https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/iam.tf#L39

resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

step #2 to execute:

kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Yeah, I saw there was an additional IAM policy needed on the role, which I didn’t see as hard to accomplish; as you said, it could be an additional policy attached to the role, not needing to be done in the module per se. I was, however, not seeing anything apparent for setting the necessary env variable to ‘true’, as I can see node groups deployed via the module have it set to ‘false’, but that seems like just default values.

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

This was more an exploratory inquiry but I have been asked to deploy out a Windows node group to our EKS cluster and preferably via TF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you mean you want to set ENABLE_POD_ENI via TF and not calling kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true ?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@Andriy Knysh (Cloud Posse) Yes, I was curious if the TF module already supported a way to set this, or if it’s otherwise possible to set via TF, since if we go with it we’d like to deploy it with TF rather than execute additional CLI commands. Right now, without it, you end up with node-level security groups, which are fine if you trust all pods running in the cluster on those nodes. I’m just looking into the LOE to enable pod-level security groups with the existing deployment method, which could reduce the effective blast radius.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

k8s resources can be provisioned using terraform kubernetes provider, but I’m not sure what can be used to set env
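
For reference, one way to set that env var from Terraform without shelling out to kubectl is the kubernetes_env resource in the hashicorp/kubernetes provider (v2.10+). A minimal sketch, assuming the provider is already configured against the cluster:

# Patches ENABLE_POD_ENI onto the existing aws-node DaemonSet, equivalent to
# `kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true`.
resource "kubernetes_env" "aws_node" {
  api_version = "apps/v1"
  kind        = "DaemonSet"

  metadata {
    name      = "aws-node"
    namespace = "kube-system"
  }

  container = "aws-node"

  env {
    name  = "ENABLE_POD_ENI"
    value = "true"
  }
}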

2022-09-03

2022-09-04

Niv Weiss avatar
Niv Weiss

We are uploading our product to AWS marketplace. Where do I need to provide this one license secret key? Thanks!

Nyshawn Burton avatar
Nyshawn Burton

Not entirely sure but you will need to provide the license secret key in the AWS marketplace under the product listing.

2022-09-05

kirupakaran avatar
kirupakaran

Hi everyone, is anyone familiar with sitemap.xml? My problem is that nginx takes some time to load it through the proxy_pass.

2022-09-06

idan levi avatar
idan levi

Hey all! I’m using Route53 as my DNS provider and nginx-ingress-controller as the ingress in my k8s env. I want to redirect between 2 ingresses, for example, all requests that go to app.x.io should redirect to app.x.com. I tried to create a CNAME alias but it doesn’t work. Does someone have an idea?

Tommy avatar

try A alias instead of CNAME

idan levi avatar
idan levi

I cannot, because the original record (app.x.io) is a CNAME and an A alias is looking for an A record

managedkaos avatar
managedkaos

This is a really oddball solution but, if you have the stomach for it:

  1. create an S3 bucket website with 0 content and a rule to redirect requests to app.x.com
  2. create a route 53 entry for app.x.io and add the S3 bucket as the target.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html

(Optional) Configuring a webpage redirect - Amazon Simple Storage Service

Configure your bucket as a website by setting up redirect locations where you can redirect requests for an object to another object.

Routing traffic to a website that is hosted in an Amazon S3 bucket - Amazon Route 53

Route traffic using Route 53 to a website that is hosted in an Amazon S3 bucket.

managedkaos avatar
managedkaos

Note that this solution is the most cost effective (compared to running a webserver on EC2/ECS or using an ALB).

managedkaos avatar
managedkaos

Before you create the bucket, keep these points in mind (since this is the only way it will work)
Value/Route traffic to
Choose Alias to S3 website endpoint, then choose the Region that the endpoint is from.
Choose the bucket that has the same name that you specified for Record name.
The list includes a bucket only if the bucket meets the following requirements:
• The name of the bucket is the same as the name of the record that you’re creating.
• The bucket is configured as a website endpoint.
• The bucket was created by the current AWS account.

managedkaos avatar
managedkaos

This one is the most important:
The name of the bucket is the same as the name of the record that you’re creating.
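
A minimal Terraform sketch of that setup (AWS provider v4 syntax; the zone ID is a placeholder, and the bucket name must exactly match the record name):

# Empty bucket whose only job is to redirect everything to app.x.com.
resource "aws_s3_bucket" "redirect" {
  bucket = "app.x.io"   # must match the Route 53 record name
}

resource "aws_s3_bucket_website_configuration" "redirect" {
  bucket = aws_s3_bucket.redirect.id

  redirect_all_requests_to {
    host_name = "app.x.com"
    protocol  = "https"
  }
}

# Alias record pointing app.x.io at the S3 website endpoint.
resource "aws_route53_record" "redirect" {
  zone_id = "Z0123456789EXAMPLE"   # x.io hosted zone (placeholder)
  name    = "app.x.io"
  type    = "A"

  alias {
    name                   = aws_s3_bucket_website_configuration.redirect.website_domain
    zone_id                = aws_s3_bucket.redirect.hosted_zone_id
    evaluate_target_health = false
  }
}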

kirupakaran avatar
kirupakaran

Hey all, can we have the same size of CPU and memory in ECS Fargate? e.g. cpu=2048 and memory=2048?

Tommy avatar

CPU value → Memory value (MiB)
• 256 (.25 vCPU) → 512 (0.5GB), 1024 (1GB), 2048 (2GB)
• 512 (.5 vCPU) → 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB)
• 1024 (1 vCPU) → 2048 (2GB), 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), 8192 (8GB)
• 2048 (2 vCPU) → between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)
• 4096 (4 vCPU) → between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB)

Tommy avatar

those are the allowed combinations (copied from the documentation)

kirupakaran avatar
kirupakaran

Thank you

Tommy avatar

so, in your case, with 2048 (2 vCPU) the memory must be between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB), so 2048 memory isn’t an option
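
In other words, 2048/2048 isn’t a valid Fargate combination; 4096 is the floor at 2 vCPU. A minimal Terraform task-definition sketch (family and image are placeholders):

resource "aws_ecs_task_definition" "example" {
  family                   = "example"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "2048"   # 2 vCPU
  memory                   = "4096"   # smallest memory Fargate allows at 2 vCPU

  container_definitions = jsonencode([{
    name      = "app"
    image     = "public.ecr.aws/nginx/nginx:latest"   # placeholder image
    essential = true
  }])
}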

1
Jonas Steinberg avatar
Jonas Steinberg

Curious what tags people think are critical? Here’s a list of the ones I think are generally useful, but would sure love to learn more:

• environment: [dev, qa, staging, prod, whatever]
• version control: [github, gitlab, whatever]
• cicd: [circle, github, gitlab, whatever]
• needs-to-stay-on-24hours: [true, false]
• various-can-cannot-be-public: [true, false]
• chargeback_id: 123456789
• department: [finance, it, eng, whatever]
• repo: some-github-repo
• product_owner: [email protected]

…still thinking

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

We have tags that specify:

• Owner (business unit, service name)

• Source (source repo and path)

• Environment
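
If the accounts are managed with Terraform, the AWS provider’s default_tags block (v3.38+) is one way to stamp a baseline set like this onto every taggable resource; the values below are placeholders:

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment  = "prod"
      Owner        = "platform-team"                # business unit / service name
      Source       = "github.com/example-org/infra" # source repo and path (placeholder)
      ChargebackId = "123456789"
    }
  }
}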

2022-09-07

kirupakaran avatar
kirupakaran

can anyone help me assign the ECS Fargate public IP to a target group? Right now the private IP is assigned to the target group.

deniz gökçin avatar
deniz gökçin

Hello, I am having problems with Cloud Map + ECS service discovery. I am not able to ping or dig a container from another container (using ecs exec) in the same ECS Fargate cluster (awsvpc mode). Has anyone had a similar problem? Looking forward to replies. Thanks!

2022-09-08

Eric Berg avatar
Eric Berg

When my AWS managed node groups (created with terraform-aws-modules/eks/aws//modules/eks-managed-node-group) change via Terraform (or related launch configs, security groups, etc.) and the MNG’s ASG is recycled, with a min/max/desired of 1/2/1, it spins up as many as 7 additional EC2 instances during the recycling before settling down on a single one.

Anybody else see this and/or know how to manage it?

Eric Berg avatar
Eric Berg

This is expected behavior, and it’s based on the number of subnets. For example, we’re deployed in us-east-2, so there are 3 subnets, and our MNG is set to 1/1/1, so it spins up 2 new nodes in each AZ, before settling on one.

Managed node update behavior - Amazon EKS

The Amazon EKS managed worker node upgrade strategy has four different phases described in the following sections.

2022-09-09

2022-09-10

Taner avatar

Hello all, I am having trouble with Terraform. Basically the problem is somewhat related to an unreadable vpc_id, although I can see it gets read in the state file. Has anybody had a similar error before?

Alexandr Bortnik avatar
Alexandr Bortnik

Hello!

I would like to ask about cloudposse/eks-node-group/aws: is it possible to disable random_pet?

2022-09-11

idan levi avatar
idan levi

Hey all, small question about Route53. I’m using Kinsta as my domain host and Route53 for my DNS management. I need to renew the SSL certificate for my domain. I didn’t fully understand the process for doing it with the TXT record on Route53; is someone able to answer a few questions?

venkata.mutyala avatar
venkata.mutyala

Hi, you likely just need to add the TXT record to Route53

venkata.mutyala avatar
venkata.mutyala

Basically go into route53 and create the record they tell you with the value they provided you with

venkata.mutyala avatar
venkata.mutyala

^ not sure if that helps or not. The TXT record allows them to verify that they can give you a cert for the domain. Otherwise you could request a cert for any domain and easily get a cert
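
For reference, a minimal Terraform sketch of that TXT record; the zone ID, record name, and value are placeholders for whatever the certificate issuer provides:

# TXT record for domain-control validation; name and value come from the issuer.
resource "aws_route53_record" "cert_validation" {
  zone_id = "Z0123456789EXAMPLE"              # your hosted zone (placeholder)
  name    = "_validation.example.com"         # record name provided by the issuer (placeholder)
  type    = "TXT"
  ttl     = 300
  records = ["validation-token-from-issuer"]  # value provided by the issuer (placeholder)
}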

2022-09-12

kirupakaran avatar
kirupakaran

Hi all, our database has been attacked by SQL injection. We are using Aurora MySQL and CPU utilization is almost at 100%. How can I stop this? Any suggestions?

akhan4u avatar
akhan4u

Maybe list all active connections, verify whether it’s the same IP behind the attack, and block it with a security group rule

Darren Cunningham avatar
Darren Cunningham

doesn’t sound like IP restrictions would help if the attack is SQL injection, you’ll want to kill the processes that are eating the CPU then patch the application(s) ASAP. If this is going to take “too long” you might choose to make your application connection to the DB read-only and/or potentially take an outage. but these are all considerations for the business team.

How to find MySQL process list and to kill those processes?

The MySQL database hangs, due to some queries. How can I find the processes and kill them?

jedineeper avatar
jedineeper

anyone got advice on how I could better present a service in EKS as an origin for a CloudFront distribution? I’m currently just going through my ingress controller to a domain name that the distribution reads, but that means I have an intermediate domain name for the ingress as well as a public origin that I’d rather secure down to just CloudFront.

Denis avatar

I don’t think I can help you on this front but I am genuinely curious about your use case here. What kind of an EKS service is it that you need the CloudFront to deliver? I’ve only used CF for static websites and presenting static large media files, so that’s why I’m asking.

jedineeper avatar
jedineeper

running a nodejs app, it has some static elements but a bunch of dynamic stuff as well. using cache-headers per endpoint to dictate to CF when stuff should be cached or not but it gives me a single endpoint for all the content.

Matt Gowie avatar
Matt Gowie

Hey folks — Quick AWS Route53 question I have while migrating a client’s DNS architecture:

Is it possible to have two Route53 Hosted Zones control the same domain (e.g. *.example.com) across separate accounts? In that I have some records for www.example.com and *.example.com on Hosted Zone #1 and then I have similar records for *.example.com on Hosted Zone #2 as well?

I am hoping so if they both point their NS records at the correct, authoritative nameservers, but I figured I’d check here before I tested this out.

loren avatar

If they are public hosted zones, then yes it’s easy. In the zone hosting example.com, create ns records for subdomain.example.com and you’re golden

Darren Cunningham avatar
Darren Cunningham

you can have example.com set up in Account A and subdomain.example.com in Account B; you would just set up an NS record in Account A pointing to the nameservers of the zone in Account B
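
A minimal Terraform sketch of that delegation (provider aliases for the two accounts omitted; the parent zone ID is a placeholder):

# Account B: hosted zone for the subdomain.
resource "aws_route53_zone" "subdomain" {
  name = "subdomain.example.com"
}

# Account A: NS record in the example.com zone delegating the subdomain
# to Account B's name servers.
resource "aws_route53_record" "subdomain_delegation" {
  zone_id = "Z0123456789PARENT"   # example.com zone in Account A (placeholder)
  name    = "subdomain.example.com"
  type    = "NS"
  ttl     = 172800
  records = aws_route53_zone.subdomain.name_servers
}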

1
loren avatar

If they are private hosted zones, you need to do magic with route53 resolver rules, since private zones do not honor ns records

Darren Cunningham avatar
Darren Cunningham

but yeah great callout @loren this assumes publicly hosted zones

1
Matt Gowie avatar
Matt Gowie

Ah no — Sorry misunderstanding. I’ve done a hosted zone delegation like example.com in one account and then *.subdomain.example.com in another account.

What I’m trying to do is:

Account One (Legacy): existing Hosted Zone for example.com
Account Two (New): new Hosted Zone for example.com

I want records that are created in both Hosted Zones to work. And then I’ll be creating delegated (e.g. *.subdomain.example.com) hosted zones in other accounts.

I don’t think the account boundaries actually matter, but it’s just to illustrate the point: This is because I’m working with a client who has all of their resources in one account right now and we’re building out a proper account hierarchy for them now.

Matt Gowie avatar
Matt Gowie

I’m re-reading my initial question and I see how I made that confusing, my bad.

loren avatar

No, I don’t think you can do that? I’m trying to think how the ns records would look… You might be able to create the zones and records, but at some point you have to transfer the public ns records so public name servers resolve from the new zone… It’s basically a zone transfer

Matt Gowie avatar
Matt Gowie

Ah this is from the AWS Route53 FAQs:
Q. Can I create multiple hosted zones for the same domain name?

Yes. Creating multiple hosted zones allows you to verify your DNS setting in a “test” environment, and then replicate those settings on a “production” hosted zone. For example, hosted zone Z1234 might be your test version of example.com, hosted on name servers ns-1, ns-2, ns-3, ns-4, ns-5, ns-6. Similarly, hosted zone Z5678 might be your production version of example.com, hosted on ns-7, ns-8, ns-9, ns-10, ns-11 and ns-12. Since each hosted zone has a virtual set of name servers associated with that zone, Route 53 will answer DNS queries for example.com differently depending on which name server you send the DNS query to.

Matt Gowie avatar
Matt Gowie

But that doesn’t sound like what I would want…

Darren Cunningham avatar
Darren Cunningham

what’s the goal of having both zones handling queries? not doubting you, just making sure I’m not recommending something that breaks the goal

Matt Gowie avatar
Matt Gowie

I don’t want to touch the client’s existing Hosted Zone in their legacy all-in-one account. I’d rather leave that alone as it is and then manage a new hosted zone for all new records and delegated zones.

atlantisdude avatar
atlantisdude

Hi all,

I have a pod in EKS configured with a ServiceAccount which configures a role for the pod to use. so AWS_ROLE_ARN=arn:aws:sts::000000000:assumed-role/podrole

 aws sts get-caller-identity
{
    "UserId": "0000E:botocore-session-0000000",
    "Account": "000000",
    "Arn": "arn:aws:sts::000000000:assumed-role/podrole/botocore-session-222222222"
}

i want to allow this role to assume another role in a different account via a profile in ~/.aws/config

[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
credential_source = Environment

this is an example from the docs here. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html

i was hoping credential source would pick up the AWS_ROLE_ARN env vars set by the service account.

aws sts get-caller-identity --profile marketingadmin
Error when retrieving credentials from Environment: No credentials found in credential_source referenced in profile marketingadmin

does anyone have a work around?

2022-09-13

deniz gökçin avatar
deniz gökçin

Hi all! A quick aws security question. Is there anyone who is using aws security hub and aws config with aws organizations? I am not able to see the resources from member accounts and I have “Config.1 AWS Config should be enabled” error. Do I need to enable aws config in each member account manually?

Darren Cunningham avatar
Darren Cunningham

you can setup a delegated administrator account from your org settings and within that account you can configure security hub to automatically enroll all member accounts

deniz gökçin avatar
deniz gökçin

@Darren Cunningham from security hubs side, everything looks fine. I can see the accounts in my organization. I believe my problem is with aws config. I am not sure on how to enable it in member accounts. Does delegated administrator account handle enabling aws config?

Darren Cunningham avatar
Darren Cunningham

ah sorry, IIRC AWS config has “delegated admin” but the rollout of enabling AWS Config in all accounts/regions is not something that’s integrated into the product but there is a CF StackSet that’s provided: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html#config-how-to-enable

Enabling and configuring AWS Config - AWS Security Hub

Learn about the requirements to enable and configure AWS Config before you enable Security Hub.

deniz gökçin avatar
deniz gökçin

I only have a few accounts, one per environment (qa, prod, staging, management). What do you think is the difference between enabling AWS Config manually vs deploying the StackSet?

Darren Cunningham avatar
Darren Cunningham

well AWS Config also needs to be deployed per region so it’s accounts x regions which doesn’t sound fun to do manually
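
For a sense of what the StackSet does in each account/region, here is a minimal Terraform sketch of enabling AWS Config in a single account and region; the role ARN and bucket name are placeholders:

# Records all supported resource types in this account/region.
resource "aws_config_configuration_recorder" "this" {
  name     = "default"
  role_arn = "arn:aws:iam::111111111111:role/aws-config-role"   # placeholder role with the AWS managed Config policy

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

resource "aws_config_delivery_channel" "this" {
  name           = "default"
  s3_bucket_name = "my-config-bucket"   # placeholder
  depends_on     = [aws_config_configuration_recorder.this]
}

resource "aws_config_configuration_recorder_status" "this" {
  name       = aws_config_configuration_recorder.this.name
  is_enabled = true
  depends_on = [aws_config_delivery_channel.this]
}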

deniz gökçin avatar
deniz gökçin

Thank you! After enabling Config in the regions my resources are in, and waiting 24 hours, I was able to see the security scores in the management account’s Security Hub dashboard.

1
Gabriel avatar
Gabriel

I am trying to get the aws-ebs-csi-driver helm chart working on a EKS 1.23 cluster.

The message I am getting from PVC events

failed to provision volume with StorageClass "gp2": error generating accessibility requirements: no topology key found on CSINode

The CSI topology feature docs say that:

• The PluginCapability must support VOLUME_ACCESSIBILITY_CONSTRAINTS.
• The plugin must fill in accessible_topology in NodeGetInfoResponse. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
• During CreateVolume, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements.

I am not sure how to configure these points.

Gabriel avatar
Gabriel

I looked at the worker nodes (ec2) launch template / user data. The kubelet root path was not the standard /var/lib/kubelet. Instead it was a different one. I fixed the missing CSINode driver information by updating the volumes host paths with the correct kubelet root path.

Balazs Varga avatar
Balazs Varga

Hello, what is the limit on sub-accounts? If I would like to run a customer cluster in a separate sub-account, is that possible? Or is there a limit?

Darren Cunningham avatar
Darren Cunningham

there’s a soft limit of 10 accounts but that can be increased with a service request - largest org I’ve seen was ~220 accounts but I’m sure there are larger ones

Balazs Varga avatar
Balazs Varga

thanks

2022-09-14

Soren Jensen avatar
Soren Jensen

One thing to be aware of is that it takes a lot more effort to delete an account than to create one. So depending on how long an engagement you expect from your users, it might not be worth the hassle.

Darren Cunningham avatar
Darren Cunningham

but still has its limits

2022-09-15

Bogdan avatar

cross-posting from hangops since I’m really looking for a solution:
does anyone know if there’s an automatic way to block pulling/consuming of a Docker image from AWS ECR if the said image has been discovered to have vulnerabilities? By automatic here I am thinking of even updating IAM policies with a DENY statement…

Bogdan avatar

good find @Maciek Strömich - it’s what I was looking for

Balazs Varga avatar
Balazs Varga

Hello all, I am testing AWS Organizations with SSO with an external IdP. Is it possible that SAML is the only option and there is no OIDC?

loren avatar

Correct, only saml

1
Balazs Varga avatar
Balazs Varga
Other identity providers - AWS IAM Identity Center (successor to AWS Single Sign-On)

Learn about how other external identity providers work with IAM Identity Center.

Balazs Varga avatar
Balazs Varga

• IAM Identity Center requires a SAML nameID format of email address (that is, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress).
