#aws (2022-11)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-11-01
i’m trying to create an alarm that is sort of like a heartbeat for cloudtrail logs. right now i’m trying to force it into an alarm state but can’t seem to get it to work
my filter_pattern is { ($.eventVersion = "1.08") }
i’ve also tried it with a blank filter pattern
when i test the filter pattern in the aws console, it works fine
but when i create the metric filter, it does not work
fwiw, i’m using the https://github.com/cloudposse/terraform-aws-cloudtrail-cloudwatch-alarms repo
Terraform module for creating alarms for tracking important changes and occurrences from cloudtrail.
and i’ve updated the custom_alerts.yaml
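for reference, here’s roughly the shape of what needs to exist for the heartbeat to work — a minimal sketch with the aws CLI, with hypothetical log group / metric / alarm names. The --treat-missing-data breaching flag is what gives you heartbeat semantics: the alarm fires when no matching events arrive at all.
# Hypothetical names throughout; adjust to your log group and namespace.
aws logs put-metric-filter \
  --log-group-name my-cloudtrail-logs \
  --filter-name cloudtrail-heartbeat \
  --filter-pattern '{ ($.eventVersion = "1.08") }' \
  --metric-transformations \
      metricName=CloudTrailHeartbeat,metricNamespace=Custom/CloudTrail,metricValue=1
# Alarm goes into ALARM when no matching events are seen in a 5-minute period.
aws cloudwatch put-metric-alarm \
  --alarm-name cloudtrail-heartbeat \
  --namespace Custom/CloudTrail \
  --metric-name CloudTrailHeartbeat \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --treat-missing-data breaching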
2022-11-03
How to construct a trust policy for allowing role assumption from multiple / all clusters in one account?
This is the docs example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
This is coupled to one particular OIDC provider i.e. one cluster.
Is there a way to make it cluster-independent?
I don’t think I would recommend doing this at all. The cluster nodes should have a direct IAM role already assigned
Not possible at the moment but might change in the future: https://github.com/aws/containers-roadmap/issues/1408
Community Note
• Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request
• Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request
IAM Roles for Service Accounts (IRSA) enables you to associate an IAM role with a Kubernetes service account, and follow the principle of least privilege by giving pods only the AWS API permissions they need, without sharing permissions to all pods running on the same node. This feature works well with a smaller number of clusters, but becomes more difficult to manage as the number of EKS clusters grows, notably:
• Needing to create an IAM OIDC provider for each cluster
• Updating the trust policy of every IAM role needed by that cluster with the cluster’s OIDC provider URL that maps to a particular namespace and service account name
• Given the 2048 character limit of IAM trust policies, only around 5 clusters can be trusted per role
• After 5 clusters, you need to duplicate IAM roles to be used by additional clusters
• Coordination between devs and admins - Development teams that own IAM roles often need to re-run Terraform apply or CloudFormation scripts to update trust policies for new clusters
Given these pain points, EKS is considering a change to the way IRSA works, moving credential vending to the EKS control plane (similar to how ECS and Lambda work). With this change, a trust policy would only need to be updated once to trust a service principal like [eks-pods.amazonaws.com](http://eks-pods.amazonaws.com), then you would call an EKS API to provide the IAM role to service account mapping, e.g.
aws eks associate-role \
--cluster-name $CLUSTER_NAME \
--role-arn $ROLE_ARN \
--namespace $KUBERNETES_NAMESPACE \
--service-account $SERVICE_ACCOUNT
We are looking for your feedback on this proposal, and to hear any additional pain points encountered with IRSA today that would not be solved by such a solution.
Are you currently working around this issue?
Creating and managing duplicate roles to be used across multiple clusters
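Until something like that ships, the usual workaround is one statement per cluster in the role’s trust policy — a hedged sketch below with placeholder account ID, OIDC IDs, and role name, subject to the 2048-character limit mentioned in the issue:
# Hypothetical ARNs/OIDC IDs; add one Federated statement per cluster.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/CLUSTER_A_ID" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": { "StringEquals": { "oidc.eks.region-code.amazonaws.com/id/CLUSTER_A_ID:sub": "system:serviceaccount:default:my-service-account" } }
    },
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/CLUSTER_B_ID" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": { "StringEquals": { "oidc.eks.region-code.amazonaws.com/id/CLUSTER_B_ID:sub": "system:serviceaccount:default:my-service-account" } }
    }
  ]
}
EOF
aws iam update-assume-role-policy --role-name my-irsa-role --policy-document file://trust-policy.json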
2022-11-05
2022-11-10
What’s the AWS best practice to move to once you start running into SG rule limits?
https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups — it’s a soft limit so you can put in a quota increase
Request increases to the following default Amazon VPC component quotas as needed.
Btw may I know what is the current quota limit for SG Rules in AWS?
I think the main issue is due to the (lack of) process for offboarding things, at some point we’ll hit that 1k limit of cidrs and are wondering what’s the step after that
SGs are very limited, so I would caution against using them in any extended capacity. Unless you’re doing things like self-referencing SGs, you’ll quickly hit the limit. Increasing the limit is also not generally advised, as there have been cases where it increased latency to the host/resource. In our environment, we use SGs for ingress (albeit in limited capacity) and basically allow any for egress, with a next-gen firewall doing any DNS/URL-based filtering. YMMV
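If you do decide to bump the limit, Service Quotas can do it from the CLI — a sketch; look up the exact quota code for “Inbound or outbound rules per security group” first, since I’m quoting the code from memory:
# Find the quota code for rules per security group (VPC service).
aws service-quotas list-service-quotas --service-code vpc \
  --query "Quotas[?contains(QuotaName, 'rules per security group')].[QuotaCode,QuotaName,Value]"
# Then request the increase (L-0EA8095F is the code as I recall it -- verify above).
aws service-quotas request-service-quota-increase \
  --service-code vpc --quota-code L-0EA8095F --desired-value 120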
2022-11-11
Hi, I am creating an Ingress with kubectl using a YAML file. The Ingress gets created in the cluster, but no ALB gets created in AWS. Does anyone have an idea how I can get an ALB created in AWS when I create an Ingress in an EKS cluster? Thanks
Can you share your ingress manifest?
is the [kubernetes.io/ingress.class](http://kubernetes.io/ingress.class): alb annotation present?
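For comparison, a minimal manifest that should provision an ALB — a hedged sketch with hypothetical names, assuming the aws-load-balancer-controller is installed and your subnets are tagged for discovery:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # hypothetical backing service
                port:
                  number: 80
EOF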
I’ve been pondering a thought lately but I’m not sure it’s a good one… When setting up a new AWS account set for a product team (prod, staging, sandbox, etc…), our current go-to is for the Terraform state to reside in a bucket that is part of the account it controls. Given that the Terraform state can contain sensitive information and all that, would it not be sensible to set up a separate account per account set, where the product team has limited permissions to read and write state in a designated bucket, and which is otherwise separated from the actual product account? Good idea or complete overkill? Looking forward to your thoughts :)
Please accept my apology in advance if this should have gone to #terraform
I vote overkill. I also use a same-account bucket for state and a DynamoDB state lock. I don’t know the type of users that are allowed to access your AWS accounts, but I go by the rule of thumb: if I give a person permission to see the actual infra, I can allow them to see the state file for it. Those are just my $0.02.
We usually put the tfstate bucket in the root account and have some clients that would prefer per account tfstate buckets. We’ve done both. It does seem like overkill but our model using atmos allows us to do that automatically since the backend is generated
I almost always have state in a different account. But then again, I’m generally managing multi-account designs and CI/CD pipelines, so it just makes sense that way. If I were just deploying an app to a single account, I’d probably keep state in the same account
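For what it’s worth, the cross-account variant is mostly just backend configuration — a sketch assuming a hypothetical dedicated state account, bucket, role, and lock table:
# Hypothetical names throughout; state lives in a separate "tfstate" account.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "acme-tfstate"                                    # in the state account
    key            = "product-a/prod/terraform.tfstate"
    region         = "us-east-1"
    role_arn       = "arn:aws:iam::111111111111:role/tfstate-writer"   # cross-account access
    dynamodb_table = "tfstate-locks"
    encrypt        = true
  }
}
EOF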
Thanks for the input, guys. Food for thought
2022-11-12
2022-11-14
Hi, I’m trying to figure out how to create a dashboard showing the most-used API Gateway keys over time. I can’t find these metrics in CloudWatch, and I’m looking through the API reference for a solution. Anyone know the best way to approach this?
Edit: I guess I’ll be using the /usageplans/{usageplanId}/usage endpoint for all usage plans without specifying a key
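In CLI terms that would be something like this — a sketch with a hypothetical usage plan ID; omitting --key-id returns usage for every key in the plan:
aws apigateway get-usage \
  --usage-plan-id abc123 \
  --start-date 2022-11-01 \
  --end-date 2022-11-14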
2022-11-15
Not sure if there is a better channel for sharing this, but AWS SAM CLI now supports Terraform (not just CloudFormation): https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform
This post is written by Suresh Poopandi, Senior Solutions Architect and Seb Kasprzak, Senior Solutions Architect. Today, AWS is announcing the public preview of AWS Serverless Application Model CLI (AWS SAM CLI) support for local development, testing, and debugging of serverless applications defined using HashiCorp Terraform configuration. AWS SAM and Terraform are open-source frameworks for […]
2022-11-16
hi folks, first of all sorry if this is not the best place to ask this. I’ve been working with AWS SSM and AWS Config to design a FIM (File Integrity Monitoring) solution on AWS. I am collecting AWS::SSM::FileData without any problems and seeing ManagedInventories on the AWS Config side.
I failed to create an AWS Config rule to detect changes on FileData resources. I suspect I might have to develop a custom Lambda rule, but wanted to ask here first to see if anyone has dealt with this before. Any feedback or thoughts appreciated.
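I suspect the custom Lambda route is the one, yes — a hedged sketch of registering a rule triggered by configuration changes on AWS::SSM::FileData items. The rule name and Lambda ARN are hypothetical, the evaluation logic still has to live in the function, and Config needs lambda:InvokeFunction permission on it:
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "fim-filedata-change",
  "Scope": { "ComplianceResourceTypes": ["AWS::SSM::FileData"] },
  "Source": {
    "Owner": "CUSTOM_LAMBDA",
    "SourceIdentifier": "arn:aws:lambda:us-east-1:111111111111:function:fim-evaluator",
    "SourceDetails": [{
      "EventSource": "aws.config",
      "MessageType": "ConfigurationItemChangeNotification"
    }]
  }
}'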
2022-11-17
Hi everyone! Sorry for being absent from this community in the last few weeks!
Today I’m here to share an automation I’ve added to improve my work with AWS:
Ability to open multiple AWS consoles concurrently in the same browser, directly from a central point
https://twitter.com/a_cava94/status/1593235007828422656
I hope this is something that can help you and I hope to hear as much feedback as possible
@Erik Osterman (Cloud Posse) see you at the re:invent?
this looks like granted.dev
@Alex Jurkiewicz probably going to skip re:Invent this year… have a great time!
I never go
It manages both programmatic and console access, and it has a Chrome-based extension too @Alex Jurkiewicz. It’s a browser extension like Granted, but fully integrated with the app and CLI
Link to download?
his dev.to write up has all the deets (link in the Tweet thread too)
2022-11-18
when you design a whole new structure, what apps/tools do you use to draw it?
https://www.cloudcraft.co/ — feature-rich paid option
https://excalidraw.com/ — quick and free (there is an AWS library you can add)
Has anyone ever used transit gateway to allow access to a vendor bucket in another region? I’m trying to build something that does this over privatelink … and finding it a little tricky to implement … VPC peering is undesirable here, as a matter of sec policy. … I’m looking for an example … preferably for a TF implementation, but any IAC would be helpful. I’m hoping for something that I can run in a lab environment to browse through how all the pieces fit together.
Would be easier for the vendor to put it behind an ssl secured/Authed cloud front distro that’s accessed via https
We don’t have any control on how we can access the vendor bucket, in this case.
what vendor? just curious…forcing customers to open their networks to them just for downloads is pretty shitty, invites security issues and complexity
are you running a multi account setup?
If so, I would spin up a new account, peer the VPC to theirs, download what you need and then nuke the account… keeps your network isolated
Clarifications:
Transit gateway is between VPCs in the same account (ours), and from there, we’ve been given access to their S3 bucket via credentials. ( Not our preference, but will work well enough for now. )
… And the infra I’m building is actually like ‘toolkit infra’ for internal teams
… it’s infra for abstracting away from other departments the particulars of how to implement certain aws managed services within our company
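If it helps anyone browsing later: the PrivateLink half of this is an S3 interface endpoint in the hub VPC, reachable from spoke VPCs over the transit gateway — a hedged sketch with placeholder IDs:
# Hypothetical IDs; creates an S3 interface endpoint in the hub VPC.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0hub00000000000000 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.s3 \
  --subnet-ids subnet-0aaa0000 subnet-0bbb0000 \
  --security-group-ids sg-0ccc0000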
Hi everyone! Amazon has opened up their online training and certifications for free. You can find the course list here: Enhance your skills with this opportunity.
2022-11-21
At Amazon Web Services (AWS), security is our top priority, and configuring multi-factor authentication (MFA) on accounts is an important step in securing your organization. Now, you can add multiple MFA devices to AWS account root users and AWS Identity and Access Management (IAM) users in your AWS accounts. This helps you to raise the […]
Hi guys, I am trying to launch my startup but don’t have much money to cover hosting all my services on AWS. Does AWS provide free credits? AWS Activate Build your startup on AWS
Have you tried looking into https://aws.amazon.com/startups/startup-programs/
Please let me know if it’s beneficial. AWS Activate Build your startup on AWS
Activate is great if you can get it; they recently opened it up to the world to apply (it used to be that you had to have a code from a partner), so I’m sure they’re swamped with requests. However, the credits are infamous for expiring. IMO it’s best to try to implement your solution within the free-tier limits as long as you can, and then apply for Activate when you’ve grown to the point where you have enough monthly usage to really benefit from the credits.
Hmm, maybe. I am also not sure about it, but I applied.
Thanks
I’m curious what other fully remote teams do to secure their root AWS account? Every piece of advice I’ve seen about keeping it secure is to have an MFA device and store it in a safe in the office or something similar. What if you don’t have an office?
With the news of multiple MFA devices being supported for IAM accounts, I’m thinking maybe you do a zoom with whoever should have access to the root account and have them each associate an MFA device with it?
Is there something better for teams that don’t have a physical location?
You may want to throw this question into #office-hours
Sadly I can’t make office hours as it’s at the same time as a weekly team meeting we have.
I’ve worked with 3 fully remote orgs now. All of them disabled the hardware MFA rule in their security monitoring (Security Hub, Config, etc.), documented the explanation for compliance, and went with a virtual MFA that was restricted to key stakeholders and rotated if one of them left the org.
now that you can have multiple MFAs, I would recommend against that and just assign them a token and send them a key. they leave, delete that token. much easier.
So one person gets yubikeys (or similar), sets them up for everyone who needs one and mails them out? Seems simple enough.
though with the recent update where you can grant SSO roles access to update account AWS support enrollments, we really don’t have much of a need to give out root access anymore.
Assuming you’re using SSO (which yeah, should probably be the default, but small team + other priorities…)
But either way it seems it would be a good idea to have at least two people with root account access with MFA enabled for security + redundancy.
two people and 1 in a safe somewhere
you don’t want to have to go through the MFA removal process with AWS if you don’t have to…though I honestly wish that process was more painful than it is…but that’s a different topic
Before they supported multiple MFA devices, we had 1 admin with the root MFA device, and all other details in 1Password. Should another admin require root while admin 1 is unavailable, we go through AWS Business Support.
I can also highly recommend 1Password as an MFA device; only the hardware requirement broke that setup.
2022-11-22
Hi folks
Sorry if this is not the right forum for my question. I have doubts between this and #kubernetes channel.
I am using https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/ in order to create internet-facing NLB through a service (UDP) in Kubernetes.
I am using following annotations
service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-xxx,eipalloc-yyy,eipalloc-zzz"
service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxx,subnet-yyy,subnet-zzz"
I wonder whether it is possible to reuse the same EIP allocations for my other Kubernetes services with the same requirement.
If I try to reuse them, I get: Failed deploy model due to ResourceInUse: The allocation IDs are not available for use
EIP (public IP) allocation happens per NLB and cannot be shared across multiple NLBs.
If you wish to have one common NLB shared by multiple Ingress manifests (and also save some money), perhaps consider ingress-nginx or traefik.
Hi Chris, thanks a lot for your input. I will give it a try using an ingress resource.
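A sketch of what Chris is describing — one controller-owned NLB holding your EIPs, shared by everything routed behind it. Values are hypothetical, and note that UDP services additionally need the controller’s --udp-services-configmap mapping on top of this:
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb-ip" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"="internet-facing" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-eip-allocations"="eipalloc-xxx\,eipalloc-yyy\,eipalloc-zzz"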
2022-11-23
2022-11-24
Hey, we’re using eks fargate and monitoring it using cloudwatch in the meantime.
- On which metrics do you monitor?
- Are you using any other observability tools other than cloudwatch that works well with eks fargate nodes?
2022-11-25
2022-11-27
Hi everyone, I’m working on a VPN tunnel to a third party and I was given an IP to NAT with which is outside my network CIDR for the VPC I’m connecting.
What I’ve tried:
• I created an AWS site-to-site VPN using a transit gateway, connecting a VPC I created with a CIDR covering the NAT IP that was shared, and tried to use that to direct traffic, but I realised the AWS VPN doesn’t apply the NAT, so I’m unable to route traffic through it as I had thought.
• I also thought about creating NAT instances in the shared VPC and sharing that with the other VPC via VPC peering, but that didn’t work either.
• I’m currently considering going with a software VPN on an EC2 instance (something like strongSwan) and doing the NAT on there. Any ideas on how to go about this? Or anything I might be missing? Any guidance will be much appreciated. Thank you.
AFAIK, AWS VPN doesn’t provide a managed option to apply NAT to VPN traffic. As an ex-AWS network engineer, I would also discourage using NAT in AWS, as debugging it is going to cause an endless amount of hassle.
For a deeper understanding of traffic flow within AWS, I would suggest taking a look at this: https://www.youtube.com/watch?v=Zd5hsL-JNY4 It still mostly holds true even today.
Thank you, I will take a look. While away I configured strongSwan on an EC2 instance, and I’m able to get the tunnel connection and status up, but I’m unable to ping any instances via the VPN connection
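For the NAT-on-the-instance part, the pieces I’d check — a hedged sketch with placeholder CIDRs, NAT IP, and instance ID: IP forwarding, an SNAT rule for the agreed NAT IP, and the EC2 source/dest check being disabled:
# On the strongSwan instance: enable forwarding and SNAT to the agreed NAT IP.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -d 172.31.200.0/24 \
  -j SNAT --to-source 198.51.100.10   # placeholder CIDRs and NAT IP
# EC2 drops forwarded packets unless the source/dest check is off.
aws ec2 modify-instance-attribute --instance-id i-0abc123def456 --no-source-dest-check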
2022-11-28
Will anyone from the SweetOps community be in Las Vegas next week at AWS re:Invent? Spacelift will have a few folks there and there are a few ways to connect with us if you want to meet formally or casually. https://events.spacelift.io/aws-reinvent-2022
can you have a aws VPC with a CIDR like 10.0.0.0/16 for private subnets and a 192.168.x.x/16 for the public subnets?
like a vpc with two CIDRs but one per type of subnet
that would be 2 vpc’s
no, on the same vpc
you can have up to 5 different CIDRs per VPC
I’ve never done it, but since you can have multiple CIDRs and you select the CIDR when creating a subnet it seems like you should be able to
me neither, and I wonder what problems I could have
Now there’s an experienced aws user. “How will this thing break?”
lol
there are some interesting restrictions around secondary cidrs… i don’t think it will let you associate two different rfc 1918 classful ranges with the same vpc… https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html#add-cidr-block-restrictions
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.
but you can use 100.64.0.0/10 if you want
it does
on the same range
not a 10.10 and a 192.168
that’s what i said, yeah
“different rfc 1918 classful ranges”
at least, that’s what i meant.
so the question as phrased i am pretty sure is not allowed…
can you have a aws VPC with a CIDR like 10.0.0.0/16 for private subnets and a 192.168.x.x/16 for the public subnets?
correct
but is there anything that might not work if you have two CIDRs, one for public and one for private, in the same range (10.x.x.x/x)?
anything that might not work as expected?
nothing i know of offhand
long as the route tables have the necessary entries, i’d expect it to work fine
ok, cool
I did this in the past and had no issues that I can remember. These days, I routinely do one /16 for EKS EC2 nodes and another /16 for k8s pods without any issues
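For anyone trying it, the mechanics are just this — a sketch with placeholder IDs:
# Attach a secondary CIDR, then carve subnets from whichever block you like.
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc0000 --cidr-block 10.1.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc0000 --cidr-block 10.1.1.0/24 \
  --availability-zone us-east-1a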
2022-11-29
2022-11-30
Stretching out to my communities in hopes someone may have insight.
Anyone familiar with Windows nodes on EKS? I’ve thrown a gitlab-runner on a Windows node. It struggles to clone the repo, failing to resolve the host.
12:23:40.247417 exec-cmd.c:237 trace: resolved executable dir:
C:/Program Files/git/mingw64/bin
12:23:52.415550 http.c:703 == Info: Could not resolve host:
gitlab.com
12:23:52.415550 http.c:703 == Info: Closing connection 0
fatal: unable to access
'https://gitlab.com/[MASKED]/sandbox/test-windows.git/': Could not resolve
host: gitlab.com
maybe run an nslookup from the container and see what DNS it is using and why that DNS is not resolving the host?
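Something along these lines, assuming you can exec into the runner pod (hypothetical pod name, and assuming the Windows image ships the usual tools):
# Check what DNS server the pod is using and whether it resolves gitlab.com.
kubectl exec -it gitlab-runner-abc123 -- nslookup gitlab.com
kubectl exec -it gitlab-runner-abc123 -- ipconfig /all   # Windows pod: shows configured DNS servers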
Hi all
I have a question about rightsizing ECS tasks. I have a few task definitions where each contains an app and an nginx sidecar. I noticed that, although the task CPU and memory are the smallest values possible (256 CPU units, 512 MB memory), the waste of CPU and memory is around 99 percent. What adjustments can I make to reduce my costs, since I feel like I am paying a lot for resources that I am not utilizing?
Thanks
It looks like you’re not the only one: https://github.com/aws/containers-roadmap/issues/79
Tell us about your request
Allow lower CPU and Memory resource allocation for Fargate tasks.
Which service(s) is this request for?
Fargate
Tell us about the problem you’re trying to solve. What are you trying to do, and why is it hard?
Fargate is very appealing because it removes the requirement to run Container Instances. However, we have a lot of microservices whose CPU and Memory requirements are lower than the minimums allowed on Fargate. Fargate is already more expensive than running on-demand instances of families like m5 and c5 with equivalent vCPU and Memory configurations. For tasks that require less than 0.25 vCPU and 0.5GB Memory, the cost is even higher compared to running those tasks binpacked with others on an ECS Container Instance.
Are you currently working around this issue?
We’re using ECS with the EC2 launch type.
One of the comments:
Now, to reduce costs, I have to convert my work from Fargate to a T3 nano instance or similar.
Thanks! I really enjoy fargate but the more micro my services are the more I feel like I am wasting money
Lambda w/ API Gateway might be a better fit if you’re breaking things up that small
Or maybe even a lambda with a function url
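Function URLs are a one-liner to try — a sketch with a hypothetical function name; AWS_IAM auth is the safer default if it’s not meant to be public:
aws lambda create-function-url-config \
  --function-name my-micro-service \
  --auth-type AWS_IAM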
Hello All,
We have to clean up an AWS Directory Service directory that was created in 2017 by a former employee, but there is no documentation on where it is used or whether it was just created for testing. I don’t see any EC2 instances in the VPC where the Active Directory exists. It is configured as Directory type: Simple AD, Directory size: Small.
I don’t know the password and there is no proper documentation. I checked the available CLI commands for listing users in this AD (https://docs.aws.amazon.com/cli/latest/reference/ds/index.html) and couldn’t find any.
Is there any way to know which users are part of this AD, and how to tell whether this AD is in use? Thanks in advance
I would block access and wait for someone to complain. If no one makes any noise it’s probably not in use.
If you know the username of the admin who setup the AD domain, you can actually reset that users credentials in the domain from the UI. From there you can login to the jump box that has RSAT tools installed and connect to the domain as that user. This will give you visibility into the domain objects (machines, users, etc)
UI = AWS Console
Thank you @Chandler Forrest. Unfortunately, I don’t have the AD password
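If it helps, two things you can do without the password — a sketch: describe the directory for its basics, and check CloudTrail for recent Directory Service API activity as a rough is-it-in-use signal:
# Basic facts about the directory (name, VPC, DNS addresses, etc).
aws ds describe-directories
# Any recent management-plane activity against Directory Service?
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=ds.amazonaws.com \
  --max-results 20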