#aws (2022-09)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-09-01

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

This is going to make a lot of people happy: https://aws.amazon.com/about-aws/whats-new/2022/09/aws-iam-identity-center-apis-manage-users-groups-scale/
AWS is launching additional APIs to create, read, update and delete users and groups in AWS IAM Identity Center (successor to AWS Single Sign-On)

cool-doge1
1
loren avatar

first customer managed policies and permission boundaries, now user and group management! hurrah! now if they’d just separate it from the org and make it a standalone service, i’d be ecstatic!

1

2022-09-02

Jan-Arve Nygård avatar
Jan-Arve Nygård

Anyone else using Account Factory for Terraform and having issues with the CodeBuild job for creating the customization pipeline layer for Lambda looping and being built on every terraform plan and apply?

Brent Garber avatar
Brent Garber

So right now we have a bunch of S3 buckets, and each bucket has its own lambda function and corresponding IAM roles/policies to make sure that said function can 100% only access that bucket. Is there a way to consolidate down to a single policy for all of them while still enforcing that least-access principle? Playing around with condition keys (TagKeys and ResourceKeys), but can't seem to find the right combination.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It would be possible but it sounds like a bad idea

Alex Jurkiewicz avatar
Alex Jurkiewicz

Since buckets have a global namespace, there’s no guarantee you will always get the bucket name you want.

But more importantly, complex IAM policies are a special circle of hell all by themselves. Why would you change something that works for something that’s clever?

Brent Garber avatar
Brent Garber

Because we’re hitting the hard caps

Brent Garber avatar
Brent Garber

X policies * Y customers is approaching 5k, so we're trying to figure out how to cut that down while keeping least access

Alex Jurkiewicz avatar
Alex Jurkiewicz

Makes sense. If you ask AWS support, they will write policies like this for you

Alex Jurkiewicz avatar
Alex Jurkiewicz

Conditions and abac are hard

Warren Parad avatar
Warren Parad

Could I suggest instead an AWS credentials vending machine, which the lambda uses to get credentials that are scoped directly to the relevant bucket via a role that has the customer account embedded in it?

It might also help for me to understand what actions you are taking with the bucket in question to give a recommendation

Brent Garber avatar
Brent Garber

It's really just mostly get/put operations. I want to make sure that, regardless of what code gets uploaded to a lambda, from a policy perspective the trigger can only operate on the bucket that it was triggered from.
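For what it's worth, one way to collapse this into a single shared policy is an IAM policy variable in the Resource ARN, where each function's execution role carries a tag naming its bucket. A sketch only; the `bucket` tag key is a hypothetical name, and the tags on the roles would need to be managed alongside the buckets:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnlyOwnTaggedBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::${aws:PrincipalTag/bucket}/*"
    }
  ]
}
```

With this, one managed policy replaces N per-bucket policies: each Lambda role gets a `bucket` principal tag holding its bucket name, and the ARN resolves differently per role at evaluation time.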

Warren Parad avatar
Warren Parad

I’d probably create an intermediary that can do the validation and generate a presigned get/post to pass to the lambda to trigger. Then it doesn’t even need any credentials

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

anyone using the terraform-aws-eks-cluster and terraform-aws-eks-node-group modules setting the ENABLE_POD_ENI for the aws-node to tell the CNI to utilize pod security groups?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not use it (looks like a new feature), but it looks like it requires two steps to enable this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
The following command adds the policy AmazonEKSVPCResourceController to a cluster role.

aws iam attach-role-policy \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
    --role-name ${ROLE_NAME}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
variable "node_role_policy_arns" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or could be added here as another policy attachment (requires module modifications) https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/iam.tf#L39

resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

step #2 to execute:

kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Yeah, I saw there was an additional IAM policy needed on the role, which I didn't see as hard to accomplish; as you said, it could be an additional policy attached to the role, not needing to be done in the module per se. I was, however, not seeing anything apparent to set the necessary env variable to 'true'. I can see node groups deployed via the module have it set to 'false', but that seems like just default values

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

This was more an exploratory inquiry but I have been asked to deploy out a Windows node group to our EKS cluster and preferably via TF

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you mean you want to set ENABLE_POD_ENI via TF and not calling kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true ?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

@Andriy Knysh (Cloud Posse) Yes, I was curious if the TF module already supported a way to set this, or if it's otherwise possible to set using TF; if we go with it, we'd like to deploy with TF rather than execute additional CLI commands. Right now without it you end up with node-level security groups, which are fine if you trust all pods running in the cluster on those nodes. I'm just looking into the LOE to enable pod-level security groups with the existing deployment method, which could reduce the effective blast radius.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

k8s resources can be provisioned using terraform kubernetes provider, but I’m not sure what can be used to set env
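In the absence of first-class support in the module, the kubectl step can at least be wrapped in Terraform with a local-exec provisioner. A sketch, not a proper solution: it assumes kubectl on the machine running Terraform is already configured against the cluster, and local-exec is not tracked in state like a real resource:

```hcl
# Runs the same kubectl command from within a Terraform apply.
resource "null_resource" "enable_pod_eni" {
  triggers = {
    enable_pod_eni = "true"
  }

  provisioner "local-exec" {
    command = "kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true"
  }
}
```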

2022-09-03

2022-09-04

Niv Weiss avatar
Niv Weiss

We are uploading our product to AWS Marketplace. Where do I need to provide the license secret key? Thanks!

Nyshawn Burton avatar
Nyshawn Burton

Not entirely sure, but you will need to provide the license secret key in AWS Marketplace under the product listing.


2022-09-05

kirupakaran avatar
kirupakaran

Hi, is everyone familiar with sitemap.xml? My problem is that nginx takes some time to load it through the proxy pass.

2022-09-06

idan levi avatar
idan levi

Hey all! I'm using Route53 as my DNS provider and nginx-ingress-controller as the ingress in my k8s env. I want to redirect between 2 ingresses; for example, all requests that go to app.x.io should redirect to app.x.com. I tried to create a CNAME alias but it doesn't work. Does someone have an idea?

Tommy avatar

try A alias instead of CNAME

idan levi avatar
idan levi

Can't, because the original record (app.x.io) is a CNAME and an A alias is looking for an A record

managedkaos avatar
managedkaos

This is a really oddball solution but, if you have the stomach for it:

  1. create an S3 bucket website with 0 content and a rule to redirect requests to app.x.com
  2. create a route 53 entry for app.x.io and add the S3 bucket as the target.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html

(Optional) Configuring a webpage redirect - Amazon Simple Storage Service

Configure your bucket as a website by setting up redirect locations where you can redirect requests for an object to another object.

Routing traffic to a website that is hosted in an Amazon S3 bucket - Amazon Route 53

Route traffic using Route 53 to a website that is hosted in an Amazon S3 bucket.
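Step 1 above needs no page content at all; the whole bucket can be configured as a redirect. A hedged sketch of the website configuration, using the hostnames from this thread (note the bucket must be named app.x.io, per the requirement quoted below):

```json
{
  "RedirectAllRequestsTo": {
    "HostName": "app.x.com",
    "Protocol": "https"
  }
}
```

Applied with something like `aws s3api put-bucket-website --bucket app.x.io --website-configuration file://redirect.json`.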

managedkaos avatar
managedkaos

Note that this solution is the most cost effective (compared to running a webserver on EC2/ECS or using an ALB).

managedkaos avatar
managedkaos

Before you create the bucket, keep these points in mind (since this is the only way it will work)
Value/Route traffic to
Choose Alias to S3 website endpoint, then choose the Region that the endpoint is from.
Choose the bucket that has the same name that you specified for Record name.
The list includes a bucket only if the bucket meets the following requirements:
• The name of the bucket is the same as the name of the record that you’re creating.
• The bucket is configured as a website endpoint.
• The bucket was created by the current AWS account.

managedkaos avatar
managedkaos

This one is the most important:
The name of the bucket is the same as the name of the record that you’re creating.

kirupakaran avatar
kirupakaran

Hey all, can we have the same size of CPU and memory in ECS Fargate? e.g. cpu=2048 and memory=2048?

Tommy avatar

CPU value: allowed memory values (MiB)
• 256 (.25 vCPU): 512 (0.5GB), 1024 (1GB), 2048 (2GB)
• 512 (.5 vCPU): 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB)
• 1024 (1 vCPU): 2048 (2GB), 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), 8192 (8GB)
• 2048 (2 vCPU): between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)
• 4096 (4 vCPU): between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB)

Tommy avatar

that are the allowed combinations (copied from the documentation)

kirupakaran avatar
kirupakaran

Thank you

Tommy avatar

so, in your case with cpu=2048 (2 vCPU): memory must be between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)

1
Jonas Steinberg avatar
Jonas Steinberg

Curious what tags people think are critical? Here’s a list of the ones I think are generally useful, but would sure love to learn more:

• environment: [dev, qa, staging, prod, whatever]
• version control: [github, gitlab, whatever]
• cicd: [circle, github, gitlab, whatever]
• needs-to-stay-on-24hours: [true, false]
• various-can-cannot-be-public: [true, false]
• chargeback_id: 123456789
• department: [finance, it, eng, whatever]
• repo: some-github-repo
• product_owner: [email protected]

…still thinking

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

We have tags that specify:

• Owner (business unit, service name)

• Source (source repo and path)

• Environment

Zoltan K avatar
Zoltan K

We called CostCenter what you have as chargeback, I guess. I would use camelCase or similar naming for all tags, but not mixed dash and underscore. We had additional info on classification, e.g. data classification for S3 buckets; service tier could also be a good addition IMO. Plus I see you have product owner, but I would add product as well just for grouping; a tech contact is also missing… e.g. LaunchedBy / OwnerTeam etc.
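If the tags end up managed with Terraform, the AWS provider's default_tags block is one way to enforce a consistent baseline set across every taggable resource. A sketch with hypothetical values:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates;
  # per-resource tags can still add to or override these.
  default_tags {
    tags = {
      Environment = "prod"
      CostCenter  = "123456789"
      OwnerTeam   = "platform-eng"
      Repo        = "github.com/example-org/infra"
    }
  }
}
```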

1

2022-09-07

kirupakaran avatar
kirupakaran

Can anyone help me assign the ECS Fargate public IP to the target group? Right now the private IP is assigned to the target group.

deniz gökçin avatar
deniz gökçin

Hello, I am having problems with Cloud Map + ECS service discovery. I am not able to ping or dig a container from another container (using ECS exec) in the same ECS Fargate cluster (awsvpc mode). Has anyone had a similar problem? Looking forward to replies. Thanks!

2022-09-08

Eric Berg avatar
Eric Berg

When my AWS managed node groups (created with terraform-aws-modules/eks/aws//modules/eks-managed-node-group) change via Terraform (or related launch configs, security groups, etc.) and the MNG's ASG is recycled, I have a min/max/desired of 1/2/1, and during the recycling it spins up as many as 7 additional EC2 instances before settling down on a single one.

Anybody else see this and/or know how to manage it?

Eric Berg avatar
Eric Berg

This is expected behavior, and it’s based on the number of subnets. For example, we’re deployed in us-east-2, so there are 3 subnets, and our MNG is set to 1/1/1, so it spins up 2 new nodes in each AZ, before settling on one.

Managed node update behavior - Amazon EKS

The Amazon EKS managed worker node upgrade strategy has four different phases described in the following sections.

2022-09-09

2022-09-10

Taner avatar

Hello all, I am having trouble with Terraform. Basically the problem is somewhat related to an unreadable vpc_id, although I can see it gets read in the state file. Has anybody had a similar error before?

Alexandr Bortnik avatar
Alexandr Bortnik

Hello!

I would like to ask about cloudposse/eks-node-group/aws: is it possible to disable random_pet?

2022-09-11

idan levi avatar
idan levi

Hey all, small question about Route53. I'm using Kinsta as my domain host and Route53 as my DNS management. I need to renew the SSL certificate on my domain. I didn't fully understand the process for doing it with the TXT record on Route53; can someone answer a few questions?

venkata.mutyala avatar
venkata.mutyala

Hi, you likely just need to add the TXT record to Route53

venkata.mutyala avatar
venkata.mutyala

Basically go into route53 and create the record they tell you with the value they provided you with

venkata.mutyala avatar
venkata.mutyala

^ not sure if that helps or not. The TXT record allows them to verify that they can give you a cert for the domain; otherwise you could request a cert for any domain and easily get one
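If you'd rather script the record than click through the console, it can be created with a change batch; the zone ID, record name, and token value below are placeholders for whatever your certificate provider gives you:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "_acme-challenge.example.com",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "\"validation-token-from-your-provider\"" }
        ]
      }
    }
  ]
}
```

Applied with `aws route53 change-resource-record-sets --hosted-zone-id Z0000000000 --change-batch file://txt.json`. Note TXT values must be wrapped in escaped quotes.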

2022-09-12

kirupakaran avatar
kirupakaran

Hi all, our database has been attacked by SQL injection. We are using Aurora MySQL and CPU utilization is almost 100%. How can I stop this? Any suggestions?

akhan4u avatar
akhan4u

Maybe list all active connections, verify whether it's the same IP doing the attack, and block it with a security group rule

Darren Cunningham avatar
Darren Cunningham

doesn't sound like IP restrictions would help if the attack is SQL injection. You'll want to kill the processes that are eating the CPU, then patch the application(s) ASAP. If that is going to take "too long", you might choose to make your application's connection to the DB read-only and/or potentially take an outage. But these are all considerations for the business team.

How to find MySQL process list and to kill those processes?

The MySQL database hangs, due to some queries. How can I find the processes and kill them?
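The gist of the linked answer, adapted for Aurora: the connection id 1234 below is hypothetical, and on RDS/Aurora MySQL the master user (which lacks SUPER) uses the RDS helper procedures rather than plain KILL on other users' threads:

```sql
-- List active connections and the statements they are running
SHOW FULL PROCESSLIST;

-- On stock MySQL you could then do: KILL 1234;
-- On RDS/Aurora MySQL, use the helper procedures instead:
CALL mysql.rds_kill(1234);          -- terminate the whole connection
CALL mysql.rds_kill_query(1234);    -- or just the running query
```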

jedineeper avatar
jedineeper

Anyone got advice on how I could better present a service in EKS as an origin for a CloudFront distribution? I'm currently just going through my ingress controller to a domain name that the distribution reads, but that means I have an intermediate domain name for the ingress as well as a public origin that I'd rather lock down to just CloudFront.

Denis avatar

I don’t think I can help you on this front but I am genuinely curious about your use case here. What kind of an EKS service is it that you need the CloudFront to deliver? I’ve only used CF for static websites and presenting static large media files, so that’s why I’m asking.

jedineeper avatar
jedineeper

running a nodejs app, it has some static elements but a bunch of dynamic stuff as well. using cache-headers per endpoint to dictate to CF when stuff should be cached or not but it gives me a single endpoint for all the content.

Matt Gowie avatar
Matt Gowie

Hey folks — Quick AWS Route53 question I have while migrating a client’s DNS architecture:

Is it possible to have two Route53 Hosted Zones control the same domain (e.g. *.example.com) across separate accounts? In that I have some records for www.example.com and *.example.com on Hosted Zone #1 and then I have similar records for *.example.com on Hosted Zone #2 as well?

I am hoping so if they both point their NS records at the correct, authoritative nameservers, but I figured I’d check here before I tested this out.

loren avatar

If they are public hosted zones, then yes it’s easy. In the zone hosting example.com, create ns records for subdomain.example.com and you’re golden

Darren Cunningham avatar
Darren Cunningham

you can have [example.com](http://example.com) set up in Account A and [subdomain.example.com](http://subdomain.example.com) in Account B — you would just setup the NS from Account A to point the Nameservers for Account B
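In Terraform, the delegation Darren describes is just an NS record in Account A's parent zone pointing at the name servers of Account B's zone. A sketch; the resource names and the aws.account_a provider alias are hypothetical, and the two resources would live in different provider/account contexts:

```hcl
# Account B: the delegated subdomain zone
resource "aws_route53_zone" "subdomain" {
  name = "subdomain.example.com"
}

# Account A: NS record in the parent zone delegating to Account B's name servers
resource "aws_route53_record" "subdomain_ns" {
  provider = aws.account_a
  zone_id  = data.aws_route53_zone.parent.zone_id
  name     = "subdomain.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.subdomain.name_servers
}
```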

1
loren avatar

If they are private hosted zones, you need to do magic with route53 resolver rules, since private zones do not honor ns records

Darren Cunningham avatar
Darren Cunningham

but yeah great callout @loren this assumes publicly hosted zones

1
Matt Gowie avatar
Matt Gowie

Ah no — Sorry misunderstanding. I’ve done a hosted zone delegation like example.com in one account and then *.subdomain.example.com in another account.

What I’m trying to do is:

Account One (Legacy): existing Hosted Zone for example.com
Account Two (New): new Hosted Zone for example.com

I want records that are created in both Hosted Zones to work. And then I’ll be creating delegated (e.g. *.subdomain.example.com) hosted zones in other accounts.

I don’t think the account boundaries actually matter, but it’s just to illustrate the point: This is because I’m working with a client who has all of their resources in one account right now and we’re building out a proper account hierarchy for them now.

Matt Gowie avatar
Matt Gowie

I’m re-reading my initial question and I see how I made that confusing, my bad.

loren avatar

No, I don’t think you can do that? I’m trying to think how the ns records would look… You might be able to create the zones and records, but at some point you have to transfer the public ns records so public name servers resolve from the new zone… It’s basically a zone transfer

Matt Gowie avatar
Matt Gowie

Ah this is from the AWS Route53 FAQs:
Q. Can I create multiple hosted zones for the same domain name?

Yes. Creating multiple hosted zones allows you to verify your DNS setting in a “test” environment, and then replicate those settings on a “production” hosted zone. For example, hosted zone Z1234 might be your test version of example.com, hosted on name servers ns-1, ns-2, ns-3, ns-4, ns-5, ns-6. Similarly, hosted zone Z5678 might be your production version of example.com, hosted on ns-7, ns-8, ns-9, ns-10, ns-11 and ns-12. Since each hosted zone has a virtual set of name servers associated with that zone, Route 53 will answer DNS queries for example.com differently depending on which name server you send the DNS query to.

Matt Gowie avatar
Matt Gowie

But that doesn’t sound like what I would want…

Darren Cunningham avatar
Darren Cunningham

what’s the goal of having both zones handling queries? not doubting you, just making sure I’m not recommending something that breaks the goal

Matt Gowie avatar
Matt Gowie

I don’t want to touch the client’s existing Hosted Zone in their legacy all-in-one account. I’d rather leave that alone as it is and then manage a new hosted zone for all new records and delegated zones.

ghostface avatar
ghostface

Hi all,

I have a pod in EKS configured with a ServiceAccount which configures a role for the pod to use. so AWS_ROLE_ARN=arn:aws:sts::000000000:assumed-role/podrole

 aws sts get-caller-identity
{
    "UserId": "0000E:botocore-session-0000000",
    "Account": "000000",
    "Arn": "arn:aws:sts::000000000:assumed-role/podrole/botocore-session-222222222"
}

i want to allow this role to assume another role in a different account via a profile in ~/.aws/config

[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
credential_source = Environment

this is an example from the docs here. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html

i was hoping credential source would pick up the AWS_ROLE_ARN env vars set by the service account.

aws sts get-caller-identity --profile marketingadmin
Error when retrieving credentials from Environment: No credentials found in credential_source referenced in profile marketingadmin

does anyone have a work around?
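One workaround commonly suggested for IRSA: credential_source = Environment expects static AWS_ACCESS_KEY_ID-style variables, but the service account injects a web identity token instead, so chain through an explicit web-identity profile. The token path below is the default IRSA mount and the ARNs mirror the example above; treat it as an untested sketch:

```ini
# ~/.aws/config
[profile pod]
role_arn = arn:aws:iam::000000000:role/podrole
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token

[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
source_profile = pod
```

Then `aws sts get-caller-identity --profile marketingadmin` should assume podrole via the token and chain into marketingadminrole.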

2022-09-13

deniz gökçin avatar
deniz gökçin

Hi all! A quick AWS security question. Is anyone using AWS Security Hub and AWS Config with AWS Organizations? I am not able to see the resources from member accounts and I have the "Config.1 AWS Config should be enabled" error. Do I need to enable AWS Config in each member account manually?

Darren Cunningham avatar
Darren Cunningham

you can setup a delegated administrator account from your org settings and within that account you can configure security hub to automatically enroll all member accounts

deniz gökçin avatar
deniz gökçin

@Darren Cunningham from Security Hub's side, everything looks fine; I can see the accounts in my organization. I believe my problem is with AWS Config. I am not sure how to enable it in member accounts. Does the delegated administrator account handle enabling AWS Config?

Darren Cunningham avatar
Darren Cunningham

ah sorry, IIRC AWS config has “delegated admin” but the rollout of enabling AWS Config in all accounts/regions is not something that’s integrated into the product but there is a CF StackSet that’s provided: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html#config-how-to-enable

Enabling and configuring AWS Config - AWS Security Hub

Learn about the requirements to enable and configure AWS Config before you enable Security Hub.

deniz gökçin avatar
deniz gökçin

I only have 3 accounts per environment (qa, prod, staging, management). What do you think is the difference between enabling AWS Config manually vs deploying the StackSet?

Darren Cunningham avatar
Darren Cunningham

well AWS Config also needs to be deployed per region so it’s accounts x regions which doesn’t sound fun to do manually

deniz gökçin avatar
deniz gökçin

Thank you! After enabling Config in the regions my resources are in, and after 24 hours, I was able to see the security scores in the management account's Security Hub dashboard.

1
Adnan avatar

I am trying to get the aws-ebs-csi-driver helm chart working on an EKS 1.23 cluster.

The message I am getting from PVC events

failed to provision volume with StorageClass "gp2": error generating accessibility requirements: no topology key found on CSINode

The CSI topology feature docs say that:

• The PluginCapability must support VOLUME_ACCESSIBILITY_CONSTRAINTS.
• The plugin must fill in accessible_topology in NodeGetInfoResponse. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
• During CreateVolume, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements.

I am not sure how to configure these points.

Adnan avatar

I looked at the worker nodes' (EC2) launch template / user data. The kubelet root path was not the standard /var/lib/kubelet; instead it was a different one. I fixed the missing CSINode driver information by updating the volumes' host paths with the correct kubelet root path.

Balazs Varga avatar
Balazs Varga

Hello, what is the limit on sub-accounts? If I would like to run a customer cluster in a separate sub-account, is that possible? Or is there a limit?

Darren Cunningham avatar
Darren Cunningham

there's a soft limit of 10 accounts, but that can be increased with a service request. The largest org I've seen was ~220 accounts, but I'm sure there are larger ones

Balazs Varga avatar
Balazs Varga

thanks

2022-09-14

Soren Jensen avatar
Soren Jensen

One thing to be aware of is that it takes a lot more effort to delete an account than to create one. So depending on how long an engagement you expect from your users, it might not be worth the hassle.

Darren Cunningham avatar
Darren Cunningham

but still has its limits

2022-09-15

Bogdan avatar

cross-posting from hangops since I’m really looking for a solution:
does anyone know if there’s an automatic way to block pulling/consuming of a Docker image from AWS ECR if the said image has been discovered to have vulnerabilities? By automatic here I am thinking of even updating IAM policies with a DENY statement…
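There is no built-in IAM condition keyed on scan findings, but one building block for automating a DENY is the EventBridge event ECR emits when an image scan completes. A rule with a pattern like the sketch below could trigger a Lambda that inspects the finding counts and updates the repository policy (the handler is not shown; fields follow the ECR event schema):

```json
{
  "source": ["aws.ecr"],
  "detail-type": ["ECR Image Scan"],
  "detail": {
    "scan-status": ["COMPLETE"]
  }
}
```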

Bogdan avatar

good find @Maciek Strömich - it’s what I was looking for

Balazs Varga avatar
Balazs Varga

Hello all, I am testing AWS Organizations with SSO with an external IdP. Is it possible that SAML is the only option, and there's no OIDC?

loren avatar

Correct, only saml

1
Balazs Varga avatar
Balazs Varga
Other identity providers - AWS IAM Identity Center (successor to AWS Single Sign-On)

Learn about how other external identity providers work with IAM Identity Center.

Balazs Varga avatar
Balazs Varga

• IAM Identity Center requires a SAML nameID format of email address (that is, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress).

Balazs Varga avatar
Balazs Varga

solved.

2022-09-16

Balazs Varga avatar
Balazs Varga

Can anyone recommend a good nodejs module for the SCIM API for auto-provisioning?

1

2022-09-17

sohaibahmed98 avatar
sohaibahmed98

Hello all, can I store Docker images in S3 instead of ECR in order to optimize cost?

For example: if I use ECR with VPC endpoints (ecr.dkr, ecr.api), then pricing is per VPC endpoint per AZ ($/hour), which is costly. But if I store Docker images in S3 with the gateway VPC endpoint for S3, which is free, and use the S3 image path inside the task definition, the cost might be less.

What is the best practice? What would be the disadvantages of storing Docker images in S3 instead of ECR? Is storing Docker images in S3 a correct approach?

Darren Cunningham avatar
Darren Cunningham

you probably could rig up a solution to publish images to S3 and pull them via s3, but the cost of all that complexity (and likely marooning yourself from the integrations with image scanning, eks, fargate, etc) just isn’t worth it.

this1
sohaibahmed98 avatar
sohaibahmed98

I am using ECS Fargate and I think I can integrate it with S3 and point to S3 inside the task definition instead of ECR?

sohaibahmed98 avatar
sohaibahmed98

because there is a lot of cost with ECR private endpoints. ECR uses S3 internally, so why not use S3 directly with the free S3 gateway endpoint?

sohaibahmed98 avatar
sohaibahmed98

could you please elaborate on why it isn't worth it?

Darren Cunningham avatar
Darren Cunningham

I don't have all the data points about your situation, so I could be wrong. But in general (IMO), the more off-script you go, the more complexity you have. Complexity has operational costs (it's more difficult to onboard other team members to a home-grown solution), it takes longer to modify your solution when it breaks or needs upgrading as services change, and you end up on the outside looking in when AWS improves the integrations.

jsreed avatar

Agree… increasing complexity to save a buck is a bad idea. At a minimum, use an EC2 instance with S3-backed storage and create a private registry via Docker. Again, costs may be a wash in that scenario

sohaibahmed98 avatar
sohaibahmed98

Thanks guys

2022-09-18

2022-09-19

kirupakaran avatar
kirupakaran

Hi everyone, when I restore an AWS Aurora instance, do I have to create the reader instance manually, or will it create the reader instances automatically?

Karim Benjelloun avatar
Karim Benjelloun

Hello, any alternatives for running a managed private CA? I feel AWS pricing is quite expensive ($400 per month + $0.75 per certificate)

RB avatar

Could you self-sign a cert, and then any certs could just be imported into ACM?

RB avatar

The drawback with that is that you’d have to renew your certs manually I believe

Karim Benjelloun avatar
Karim Benjelloun

We need to be able to easily create/revoke SSL certificates, since these need to be deployed on IoT devices

RB avatar

any reason you need to use a private ca instead of amazon’s ca ?

Karim Benjelloun avatar
Karim Benjelloun

We need to create and deploy Private certificates, we will use these to connect to an MQTT broker

loren avatar

Hashicorp vault can work as a Private CA, I think. Not sure how much cheaper it would be, especially if you need HA

Warren Parad avatar
Warren Parad

How does the broker know the cert is valid?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can revoke public ACM certificates. It requires a support request so the viability of this depends on how many certs you revoke

Alanis Swanepoel avatar
Alanis Swanepoel

Private CA - look at easyrsa (this is used by openVPN for their certificates) https://github.com/OpenVPN/easy-rsa

OpenVPN/easy-rsa

easy-rsa - Simple shell based CA utility

2022-09-20

Balazs Varga avatar
Balazs Varga

hello, can I create a four-eyes solution with AWS resources for AWS switch-role? The idea is to give users read permission, and grant the admin role via switch-role but only with approval

Warren Parad avatar
Warren Parad

AWS doesn’t natively support step up authorization for multiparty, you would need a dedicated solution for that. Or find a provider that offers off the shelf support

1
Balazs Varga avatar
Balazs Varga

Do you know any?

Warren Parad avatar
Warren Parad

I think I would need to know more about your specific use case to make a suggestion. Would you be able to add more color to it?

Balazs Varga avatar
Balazs Varga

We just would like to add another layer of security. As a baseline, everybody would get read-only access to the AWS console and/or programmatic access, and a few people (admins) could get admin access if there is any issue in the system, but for that access they would need approval. Using IAM roles.

Warren Parad avatar
Warren Parad

Usually I’ve found if you need this, something organizationally has gone wrong. Like more than ~8 people using one AWS account. The ROI on segregating AWS accounts at the team boundary is sooo high compared to implementing something like this. Also it won’t work with the console, you’d need something custom outside and it would only support API interactions.

Balazs Varga avatar
Balazs Varga

What do you mean by ~8 people using one AWS account? You mean 8 admins in the account? We will implement Organizations and separate the deployment and infra parts into separate OUs and sub-accounts. This four-eyes thing is a big dream of one of the managers, and that's why I am trying to find something to implement it with.

Warren Parad avatar
Warren Parad

No I mean 8 people per account

1
Balazs Varga avatar
Balazs Varga

yeah, that is correct currently. My idea and goal in the end is to create an account per customer. OK, for the dev part maybe we will have more than 8 users, but per customer I hope it will be 1 per account

Warren Parad avatar
Warren Parad

I’m not sure what “customer” is here

Alex Jurkiewicz avatar
Alex Jurkiewicz

How does “8 people per account” work with EKS?

Balazs Varga avatar
Balazs Varga

we don't use EKS. We create clusters with kops and use spot instances. The clusters are not too big, and we haven't had any issue with that so far. customer = companies that bought our product; we have an isolation-by-design requirement.

Warren Parad avatar
Warren Parad

Don’t use EKS. I’ll let you know when I encounter a problem at scale that requires its usage

1
Balazs Varga avatar
Balazs Varga

And what is your advice to fulfill the manager's request? :)

Warren Parad avatar
Warren Parad

I tell them to read Turn The Ship Around

1
Zoltan K avatar
Zoltan K

Let me add my 2 cents to the conversation above. I would grant STS AssumeRole access to the users, so when you add a user to a specific role you can get approval for that. But once a user uses the role, all you can do is log the activities; no approval at that granularity is possible, and it would make the approver's life a nightmare… logging should be more than enough…

1

2022-09-21

MJD avatar

Looking for a bit of inspiration. I want to walk my AWS accounts on a regular basis (say hourly), catalogue all EC2 instances that meet a certain set of tag conditions, and display details in a 'status' type way, e.g.: filter all EC2 where tag1=false, tag2!=bob; print {tag3, tag4, tag5} in a nice dashboard-type table. I thought this would be easy to do with Datadog and tags, but because it's using just tags or conditional tag searches, it's bad
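The filtering logic itself is simple enough to sketch locally; the tag names mirror MJD's hypothetical example, and in practice the instance list would come from an EC2 describe-instances call rather than a literal:

```python
def matches(tags, require=None, exclude=None):
    """True if every `require` tag equals its value and no `exclude` tag does."""
    require, exclude = require or {}, exclude or {}
    if any(tags.get(k) != v for k, v in require.items()):
        return False
    if any(tags.get(k) == v for k, v in exclude.items()):
        return False
    return True

instances = [
    {"id": "i-1", "tags": {"tag1": "false", "tag2": "alice", "tag3": "a"}},
    {"id": "i-2", "tags": {"tag1": "true", "tag2": "bob", "tag3": "b"}},
    {"id": "i-3", "tags": {"tag1": "false", "tag2": "bob", "tag3": "c"}},
]

# filter all EC2 where tag1=false, tag2!=bob; print tag3
hits = [(i["id"], i["tags"]["tag3"]) for i in instances
        if matches(i["tags"], require={"tag1": "false"}, exclude={"tag2": "bob"})]
print(hits)  # [('i-1', 'a')]
```

The rendering into a dashboard is then a separate concern, which is where the suggestions below (Systems Manager Inventory, AWS Config, or pushing to an external system) come in.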

Alex Jurkiewicz avatar
Alex Jurkiewicz

AWS has several products that can do this

Alex Jurkiewicz avatar
Alex Jurkiewicz

AWS Systems Manager Inventory is specifically for EC2

MJD avatar

Hmm, I’d not looked at it that way,

MJD avatar

that’s useful to think about

Alex Jurkiewicz avatar
Alex Jurkiewicz

AWS Config may also be appropriate

MJD avatar

config can get me the data, but it’s not great at displaying it

Alex Jurkiewicz avatar
Alex Jurkiewicz

Why do you want to display it? A wall dashboard?

MJD avatar

something like that yes

MJD avatar

see the status of a specific deployed fleet

Alex Jurkiewicz avatar
Alex Jurkiewicz

It might be reasonable to get data off the AWS event bus and push it to your own system then (like datadog)

MJD avatar

I can ‘get’ the data with ease; visualising it in a human-readable format is where I’m looking for inspiration

Joe Niland avatar
Joe Niland

cli or web interface?

kirupakaran avatar
kirupakaran

Hi everyone, how can I automate AWS Aurora backups so that partial data is exported into S3?
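
One route, as a sketch (the cluster id, bucket, role, KMS alias, and table names below are all placeholders): Aurora snapshots can be exported to S3 in Parquet with `start-export-task`, and `--export-only` limits the export to specific databases/schemas/tables, which covers the “partial data” part. Scheduling it (e.g. EventBridge + Lambda) gives the automation.

```shell
# Grab the newest automated snapshot of the cluster...
SNAPSHOT_ARN=$(aws rds describe-db-cluster-snapshots \
  --db-cluster-identifier my-aurora-cluster \
  --snapshot-type automated \
  --query 'sort_by(DBClusterSnapshots,&SnapshotCreateTime)[-1].DBClusterSnapshotArn' \
  --output text)

# ...and export only the tables you care about to S3 (Parquet format).
aws rds start-export-task \
  --export-task-identifier "partial-export-$(date +%Y%m%d)" \
  --source-arn "$SNAPSHOT_ARN" \
  --s3-bucket-name my-export-bucket \
  --iam-role-arn arn:aws:iam::111111111111:role/rds-s3-export \
  --kms-key-id alias/rds-export \
  --export-only mydb.myschema.mytable
```

The IAM role needs write access to the bucket, and the KMS key is required for the export.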

Vinko Vrsalovic avatar
Vinko Vrsalovic

Can I ask architecture questions? I want to deploy a dotnet 6 application that is backed by PostgreSQL. The application exposes a REST API and also has an internally scheduled process that runs batch processing. I’m torn between splitting up the batch processing from the REST API, using Lambda+API Gateway for the API and a simple ECS container for the batch processing. OR, having containers for both things. I’m thinking about provisioned Aurora for PostgreSQL (serverless v2 seems really pricey for now)

I’m also torn between ECS and EKS, I feel that EKS might be overkill for now.

Any other options I’m missing?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Might be an easier question to ask during #office-hours

Alex Jurkiewicz avatar
Alex Jurkiewicz

ECS fargate is probably the lowest complexity approach. Use the same compute type for all your apps for simplicity

sohaibahmed98 avatar
sohaibahmed98

I think EKS will increase cost and complexity. ECS Fargate would be the best option.

yegorski avatar
yegorski

Resurrecting an old topic. With aws-okta no longer maintained and no longer installable via Homebrew, what are folks using to grant CLI access to AWS via Okta? We use Okta SSO as a SAML provider for our AWS org.

yegorski avatar
yegorski

Scouting around here and in other slack orgs, so far I’ve gathered (in this order of preference):

https://github.com/godaddy/aws-okta-processor

https://github.com/Nike-Inc/gimme-aws-creds

https://github.com/Versent/saml2aws

loren avatar

I use the first one and like it a lot. It has the best model for understanding that there are two tokens/credentials with different expirations (one for okta, one or more per aws role) and managing them separately

1
cool-doge1
loren avatar

The other option I’ve used is AWS SSO, with okta as the external IDP for AWS SSO, and SCIM syncing all users and groups… Then you can use anything that understands AWS SSO

loren avatar

When doing that, I like granted a lot, https://github.com/common-fate/granted

common-fate/granted

The easiest way to access your cloud.

loren avatar

Or Leapp, but Leapp has problems with govcloud and other non-aws partitions

yegorski avatar
yegorski

I’m looking into AWS SSO, thanks

yegorski avatar
yegorski

Looks like granted is geared towards browser access?

yegorski avatar
yegorski

I’m looking for a pure CLI tool (if I can login to the browser via CLI that’s a bonus but not the main goal)

loren avatar

Not necessarily, with granted, assume will export the creds to your env, and assume -c will open the console in a browser container-tab, and assume exec will just run the command with the credential

loren avatar

They’re also working on a credential_process version that won’t muck with the env and will support a refreshable credential

yegorski avatar
yegorski

Nice

Rohit S avatar
Rohit S

Versent/saml2aws is neat and supports Okta pretty well.

1
Larry Gadallah avatar
Larry Gadallah

Another vote for saml2aws

yegorski avatar
yegorski

I tried out aws-okta-processor. So far so good. I just don’t see a way to switch between roles. I have to run rm -rf ~/.aws/boto/cache/ every time and have to do another eval $(aws-okta-processor authenticate --environment)

loren avatar

Oh I use it with credential_process, to avoid polluting my env and get a refreshable credential for free. So I have a different aws-cli profile for every role
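
For reference, a sketch of what that looks like in `~/.aws/config` (the profile names, Okta org/app URLs, and role ARNs are all placeholders):

```ini
# One profile per role; aws-okta-processor caches the Okta session,
# so switching profiles doesn't re-prompt for credentials.
[profile dev-admin]
credential_process = aws-okta-processor authenticate --organization example.okta.com --user myuser --application https://example.okta.com/home/amazon_aws/app-id/123 --role arn:aws:iam::111111111111:role/dev-admin --key dev-admin

[profile prod-readonly]
credential_process = aws-okta-processor authenticate --organization example.okta.com --user myuser --application https://example.okta.com/home/amazon_aws/app-id/123 --role arn:aws:iam::222222222222:role/readonly --key prod-readonly
```

Then something like `aws s3 ls --profile dev-admin` fetches and caches a credential per role, and the CLI refreshes it on expiry.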

loren avatar

I think for your use case, there is a cli option to disable the cache

loren avatar

–no-aws-cache

1
yegorski avatar
yegorski

Cheers, gonna keep exploring

Andrea Cavagna avatar
Andrea Cavagna
Configure AWS IAM Role Federated - Leapp - Docs

Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally

Andrea Cavagna avatar
Andrea Cavagna

for AWS SSO it integrates automatically,

what are the issues with Leapp and govcloud @loren ? I would love to solve those issues.

Let me know if these docs solve your problem @yegorski

Configure AWS Single Sign-On integration - Leapp - Docs

Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally

Andrea Cavagna avatar
Andrea Cavagna

Thank you so much, I will look to solve those bugs asap

1
yegorski avatar
yegorski

Leapp definitely has the “wow factor”, it looks great

Andrea Cavagna avatar
Andrea Cavagna

let me know if you have any question on this, or need any help

1
yegorski avatar
yegorski

I see, I need a “valid SSO portal URL”, so to use this tool I need to set up AWS SSO. That requires changing how I currently have our 6 AWS accounts connected via federated login

Andrea Cavagna avatar
Andrea Cavagna

What do you mean? Your accounts are connected via SAML with Okta?

yegorski avatar
yegorski

Yeah right all accounts are connected with Okta SAML, going through the main AWS org account.

Isaac avatar

Old topic but I’ve used https://github.com/dowjones/tokendito in the past.

dowjones/tokendito

Generate temporary AWS credentials via Okta.

yegorski avatar
yegorski

Thanks! Yep, that’s on my radar

yegorski avatar
yegorski

We decided to stick with aws-okta

2022-09-22

fotag avatar

Does anyone know why (technically) you can’t delete/modify an RDS instance that’s in a stopped state?

JoseF avatar

Does it have deletion protection activated in its config? Though that alone should not stop you from modifying it.

fotag avatar

thanks, I’ll check it

fotag avatar

Nope, deletion protection has even been removed; when you try to delete an instance it asks you to start the cluster first

Vinko Vrsalovic avatar
Vinko Vrsalovic

I’ve seen that, for both serverless v2 and provisioned RDS instances

Vinko Vrsalovic avatar
Vinko Vrsalovic

I can’t tell you why; maybe it needs to take the final snapshot? I do have the final snapshot turned on and haven’t tried it without

2022-09-23

Aritra Banerjee avatar
Aritra Banerjee

Hi, does AWS Database Migration Service work for RDS-to-RDS transfers? We have a new site going live and we want to sync the prod database with a new database; after everything is verified we will switch RDS from the old one to the new one

Paula avatar

Hi! I’m not an expert, I used the service once or twice but for other purposes; I think DMS should work for that use case (that’s the original use case, I guess). You can test it by setting RDS N°1 as the source endpoint and RDS N°2 as the target endpoint. Be careful if you are using Secrets Manager for the passwords: make a different secret for each RDS, or you can accidentally replicate the data into the original RDS

mikesew avatar
mikesew

@Aritra Banerjee: DMS does indeed support RDS like any other source or destination. When you choose ‘rds’ as a source, you can select it from a drop-down list instead of having to manually enter host/port/service details, so that’s a bit nicer. In the docs, you’ll notice that both the sources and targets lists include RDS.

https://medium.com/team-pratilipi/how-to-migrate-rds-to-rds-via-dms-b8f9b86f23c

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Targets.html

Aritra Banerjee avatar
Aritra Banerjee

wow, great thank you

2022-09-24

2022-09-27

Brent Garber avatar
Brent Garber

Is there a way to force iam_policy_document to output the principals as a list even if there’s a single element?

Brent Garber avatar
Brent Garber
    principals {
      identifiers = ["arn:aws:sts::${local.account_id}:assumed-role/task-role/*"]
      type        = "AWS"
    }

gets spit out to

 "Principal": { "AWS": "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" } 

but OpenSearch wants that as

"Principal": { "AWS": [ "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" ] }
Alex Jurkiewicz avatar
Alex Jurkiewicz

What do you mean, open search “wants” the latter format? The two forms are functionally identical.

Perhaps you could specify the statement as raw JSON rather than using the Terraform data source

Brent Garber avatar
Brent Garber

Terraform Version

Terraform v0.6.15

Affected Resource(s)

• aws_elasticsearch_domain

Terraform Configuration Files

provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_user" "es" {
  name = "srv_user1"
}

resource "aws_iam_access_key" "es" {
  user = "${aws_iam_user.es.name}"
}

resource "aws_elasticsearch_domain" "es" {

  domain_name = "es1"

  advanced_options {
    "rest.action.multi.allow_explicit_index" = true
  }

  snapshot_options {
    "automated_snapshot_start_hour" = 23
  }

  access_policies = <<CONFIG
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "es:*",
        "Principal": {
          "AWS": "${aws_iam_user.es.arn}"
        }
      }
    ]
  }
CONFIG
}

Debug Output

https://gist.github.com/jritsema/8d4060e703c9a287753e1e0db5c41afd

Panic Output

none

Expected Behavior

An Elasticsearch domain should be created with a policy that grants access to the newly created user.

Actual Behavior

Throws the following error

Error applying plan:

1 error(s) occurred:

* aws_elasticsearch_domain.es: InvalidTypeException: Error setting policy: [  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "es:*",
        "Principal": {
          "AWS": "arn:aws:iam::xxxxxxxx:user/srv_user1"
        }
      }
    ]
  }
]
    status code: 409, request id: 5ce1b757-1060-11e6-800a-c363f7f5dcbd

Steps to Reproduce

Please list the steps required to reproduce the issue

  1. terraform apply

Important Factoids

none

References

GH-4485

Notes

• if I run terraform apply twice, it works the second time

Brent Garber avatar
Brent Garber

If you have a statement with a wildcard and add a second statement AWS will barf if you don’t listify the principals in the second statement

Brent Garber avatar
Brent Garber

Even trying to add it through the AWS Console will fail until you add the []s

loren avatar

try using just jsonencode instead of iam_policy_document?
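
A sketch of that approach (resource names are illustrative): a Terraform list renders as a JSON array even with one element, so the principal keeps its brackets instead of being collapsed to a bare string.

```terraform
data "aws_caller_identity" "current" {}

resource "aws_elasticsearch_domain" "es" {
  domain_name = "es1"

  access_policies = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "es:*"
      Principal = {
        # a one-element Terraform list still encodes as ["..."]
        AWS = ["arn:aws:sts::${data.aws_caller_identity.current.account_id}:assumed-role/task-role/*"]
      }
      Resource = "*"
    }]
  })
}
```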

2022-09-28

Balazs Varga avatar
Balazs Varga

hello,

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MyOrgOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::thebucketofmydreams",
        "arn:aws:s3:::thebucketofmydreams/*"
      ],
      "Condition": {
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": ["o-funny/r-stuff/ou-path"]
        }
      }
    }
  ]
}

what is the issue with this? My goal is to give a subaccount in the organization, under an OU, access to a resource that is in another account in the same organization

Alex Jurkiewicz avatar
Alex Jurkiewicz

difficult to answer without knowing what the problem is

Balazs Varga avatar
Balazs Varga

I have an account under the infra OU and an account under the dev OU. I have an S3 bucket in infra that I would like to access from the account under the dev OU. But only one bucket and only from that account. I get a 403 when I try to download a file from that bucket

Joe Niland avatar
Joe Niland

When you’re not using a wildcard, pretty sure you should be using the “ForAnyValue:StringEquals” operator

Balazs Varga avatar
Balazs Varga

got the same error when I set it to GetObject and try to download a file

Balazs Varga avatar
Balazs Varga

if I add a */ after r-stuff… it works… I limited the access to GetObject, and the condition now looks like this:

 o-funny/r-stuff/*/ou-path/*

can somebody explain why I need the * in the condition? The second one I think I know, but the 1st?
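
For what it’s worth: the value AWS evaluates for aws:PrincipalOrgPaths is the account’s full path from the org root down through every intermediate OU, ending in a trailing slash, e.g. o-funny/r-stuff/ou-parent/ou-path/. So the first * is needed whenever the target OU is not directly under the root: it matches the intermediate OU ids in the path. The trailing * matches whatever follows the OU segment, zero or more characters, so it covers both accounts directly in the OU and anything in child OUs. A matching condition (the org/root/OU ids here are the placeholders from this thread):

```json
"Condition": {
  "ForAnyValue:StringLike": {
    "aws:PrincipalOrgPaths": ["o-funny/r-stuff/*/ou-path/*"]
  }
}
```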

2022-09-29

Balazs Varga avatar
Balazs Varga

hello, another day, another question: I have a VPC in account A and a private hosted zone in account B. I would like to associate them, but I don’t want to use creds from A. I created a role in A that can be assumed from B, but how do I call it? I need to automate this

Balazs Varga avatar
Balazs Varga

solved

2022-09-30

vicentemanzano6 avatar
vicentemanzano6

Hello! I have a dedicated connection with Direct Connect. According to the engineer who set up Direct Connect on their end, I should be able to telnet to a host on port 53. He told me that I need to set the primary and backup DNS to x.x.x.1 and x.x.x.2 (I guess this is done by changing the DHCP option sets in the VPC, but I am not sure). Is that the right approach to set DNS per the engineer’s request? If so, how can I reach the instance via RDP on the private subnet? I think an RD Gateway could help, but I am a bit lost; changing DHCP makes the instance unreachable via VPC endpoints and SSM sessions

Sudhakar Isireddy avatar
Sudhakar Isireddy

port 53 is the DNS port. Why are you telnetting to a host on port 53?

Sudhakar Isireddy avatar
Sudhakar Isireddy

If so how can I reach the instance via RDP on the private subnet?
This depends on where are you connecting from? Are you connecting from another Windows instance in one of your private subnets?

Sudhakar Isireddy avatar
Sudhakar Isireddy

If yes, then simply go into your host’s Server Manager > IPv4 > the respective network interface settings > Advanced > DNS

Sudhakar Isireddy avatar
Sudhakar Isireddy

set your DNS entries there.

vicentemanzano6 avatar
vicentemanzano6

Yes that’s right it’s a windows vm on a private subnet

Sudhakar Isireddy avatar
Sudhakar Isireddy

change your DNS on that Windows server. If you change AWS DHCP options, then you will have wider issues

vicentemanzano6 avatar
vicentemanzano6

The engineer sent me a screenshot of what I should see when doing telnet (53), he claims I should be able to connect

vicentemanzano6 avatar
vicentemanzano6

Inside the vm? Alright thank you so much!

vicentemanzano6 avatar
vicentemanzano6

Once I change the dns on the windows server, what would be the easiest way to rdp into it?

Sudhakar Isireddy avatar
Sudhakar Isireddy

Simply launch RDP from your machine and connect to the other machine

vicentemanzano6 avatar
vicentemanzano6

I am a bit confused, the vm is inside a private subnet and it only has a private ip, can I still access it just with rdp without any vpn or bastion host?

Sudhakar Isireddy avatar
Sudhakar Isireddy

ok, tell me this…

From where are you trying to access the VM? From your laptop? Or from another Windows host in your VPC?

vicentemanzano6 avatar
vicentemanzano6

From my laptop

Sudhakar Isireddy avatar
Sudhakar Isireddy

Are you connecting your laptop to a VPN?

vicentemanzano6 avatar
vicentemanzano6

Not at the moment, I used to use SSM Sessions and RDP into it but changing the dns inside the vm makes the host unreachable

Sudhakar Isireddy avatar
Sudhakar Isireddy

In all, your laptop and the VM you are trying to connect to must be in networks that have a route between each other. How is this routing currently established?

vicentemanzano6 avatar
vicentemanzano6

Via SSM: it opens a port and allows me to access the VM on that port on localhost
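
That flow can be sketched with SSM’s port-forwarding document (the instance id is a placeholder; the instance needs the SSM agent and an instance profile with SSM permissions):

```shell
# Forward local port 13389 to RDP (3389) on the private instance.
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["13389"]}'
# then point the RDP client at localhost:13389
```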

Sudhakar Isireddy avatar
Sudhakar Isireddy

if you are able to access the VM via SSM what is the issue that you want resolved?

Sudhakar Isireddy avatar
Sudhakar Isireddy

you can also use FLEET MANAGER

Sudhakar Isireddy avatar
Sudhakar Isireddy

to directly RDP from AWS console

Sudhakar Isireddy avatar
Sudhakar Isireddy

I think I get you

Sudhakar Isireddy avatar
Sudhakar Isireddy

you log into that VM using SSM, and on the VM, launch Server Manager > IPv4 > the respective network interface settings > Advanced > DNS and set the DNS on the VM

Sudhakar Isireddy avatar
Sudhakar Isireddy

after changing DNS on the VM as described above, you can still connect to it via SSM

vicentemanzano6 avatar
vicentemanzano6

Fleet manager? That sounds good I will definitely give it a go, thank you so much!

1
Balazs Varga avatar
Balazs Varga

I did not find any info about modifying a transit gateway. My question is: will there be any outage if I modify the TGW to enable auto-accept of cross-account shared attachments?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

So I’m looking at being prepared to upgrade an AWS EKS cluster to 1.23+, which requires the EBS CSI driver. Currently using the cloudposse/eks-cluster/aws module; looking to see if anyone else has already attempted this and, if so, what changes are needed

venkata.mutyala avatar
venkata.mutyala

Hi did you find a solution for this?

Jeremy (UnderGrid Network Services) avatar
Jeremy (UnderGrid Network Services)

Not one I particularly like… I manually upgraded the cluster and node group through the console and then updated the version in Terraform to match. I found that if I changed the version in the Terraform first, the plan would fail.

1