#aws (2022-09)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-09-01
This is going to make a lot of people happy: https://aws.amazon.com/about-aws/whats-new/2022/09/aws-iam-identity-center-apis-manage-users-groups-scale/
AWS is launching additional APIs to create, read, update and delete users and groups in AWS IAM Identity Center (successor to AWS Single Sign-On)
first customer managed policies and permission boundaries, now user and group management! hurrah! now if they’d just separate it from the org and make it a standalone service, i’d be ecstatic!
2022-09-02
Anyone else using Account Factory for Terraform and having issues with the CodeBuild job for creating the customization pipeline layer for Lambda looping and being built on every terraform plan and apply?
So right now we have a bunch of S3 buckets, and each bucket has its own Lambda function and corresponding IAM roles/policies to be sure that said function can 100% only access that bucket. Is there a way to consolidate down to a single policy for all while still enforcing the least-access principle? Playing around with conditionals TagKeys and ResourceKeys, but can’t seem to find the proper DWIW.
It would be possible but it sounds like a bad idea
Since buckets have a global namespace, there’s no guarantee you will always get the bucket name you want.
But more importantly, complex IAM policies are a special circle of hell all by themselves. Why would you change something that works for something that’s clever?
Because we’re hitting the hard caps
X policies * y customers is approaching 5k, so we’re trying to figure how to cut that down while keeping least access
Makes sense. If you ask AWS support, they will write policies like this for you
Conditions and abac are hard
Could I suggest instead an AWS credentials vending machine, which the Lambda uses to get credentials scoped directly to the relevant bucket via a role that has the customer account embedded in it?
It might also help for me to understand what actions you are taking with the bucket in question to give a recommendation
It’s really just mostly get/put operations. I want to make sure, from a policy perspective, that regardless of what code gets uploaded to a Lambda, the trigger can only operate on the bucket it was triggered from
I’d probably create an intermediary that can do the validation and generate a presigned get/post to pass to the lambda to trigger. Then it doesn’t even need any credentials
anyone using the terraform-aws-eks-cluster and terraform-aws-eks-node-group modules setting ENABLE_POD_ENI on the aws-node daemonset to tell the CNI to utilize pod security groups?
@Andriy Knysh (Cloud Posse)
we did not use it (looks like a new feature), but it looks like it requires two steps to enable this:
Amazon EKS Workshop
The following command adds the policy AmazonEKSVPCResourceController to a cluster role.
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
--role-name ${ROLE_NAME}
which can be done here w/o modifying the module https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/variables.tf#L98
variable "node_role_policy_arns" {
or could be added here as another policy attachment (requires module modifications) https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/iam.tf#L39
resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
step #2 is to execute
kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
Yeah, I saw there was an additional IAM policy needed on the role, which I didn’t see as hard to accomplish; as you said, it could be an additional policy attached to the role, not necessarily done in the module per se. I was, however, not seeing anything apparent to set the necessary env variable to ‘true’. I can see node groups deployed via the module have it set to ‘false’, but that seems like just default values
This was more an exploratory inquiry but I have been asked to deploy out a Windows node group to our EKS cluster and preferably via TF
you mean you want to set ENABLE_POD_ENI via TF and not by calling kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true?
@Andriy Knysh (Cloud Posse) Yes, I was curious if the TF module already supported a way to set this, or if it’s otherwise possible to set via TF; if we went with using it, we would like to deploy with TF, not execute additional CLI commands. Right now, without it, you end up with node-level security groups, which are fine if you trust all pods running on those nodes. I’m just looking into the level of effort to enable pod-level security groups with our existing deployment method, which could reduce the effective blast radius.
k8s resources can be provisioned using terraform kubernetes provider, but I’m not sure what can be used to set env
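One way to keep that step in Terraform without modifying the module is a null_resource that shells out to kubectl once the cluster is up. This is only a sketch: it assumes kubectl is on the path and the kubeconfig already points at the cluster, and newer versions of the hashicorp/kubernetes provider also offer a kubernetes_env resource for managing env vars on an existing daemonset declaratively.

```hcl
# Sketch: set ENABLE_POD_ENI on the aws-node daemonset after cluster creation.
# Assumes kubectl is configured for this cluster; re-runs if the value changes.
resource "null_resource" "enable_pod_eni" {
  triggers = {
    enable_pod_eni = "true"
  }

  provisioner "local-exec" {
    command = "kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true"
  }
}
```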
2022-09-03
2022-09-04
We are uploading our product to AWS Marketplace. Where do I need to provide this one license secret key?
Thanks!
Not entirely sure but you will need to provide the license secret key in the AWS marketplace under the product listing.
2022-09-05
Hi everyone, is anyone familiar with sitemap.xml? My problem is that nginx takes some time to load it via the proxy pass.
2022-09-06
Hey all! I’m using Route 53 as my DNS provider and nginx-ingress-controller as the ingress in my k8s env. I want to redirect between 2 ingresses; for example, all requests that go to app.x.io will redirect to app.x.com. I tried to create a CNAME alias but it doesn’t work. Does someone have an idea?
try A alias instead of CNAME
Can’t, because the original record (app.x.io) is a CNAME and an A alias is looking for an A record
This is a really oddball solution but, if you have the stomach for it:
- create an S3 bucket website with 0 content and a rule to redirect requests to app.x.com
- create a route 53 entry for app.x.io and add the S3 bucket as the target.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
Configure your bucket as a website by setting up redirect locations where you can redirect requests for an object to another object.
Route traffic using Route 53 to a website that is hosted in an Amazon S3 bucket.
Note that this solution is the most cost effective (compared to running a webserver on EC2/ECS or using an ALB).
Before you create the bucket, keep these points in mind (since this is the only way it will work)
Value/Route traffic to
Choose Alias to S3 website endpoint, then choose the Region that the endpoint is from.
Choose the bucket that has the same name that you specified for Record name.
The list includes a bucket only if the bucket meets the following requirements:
• The name of the bucket is the same as the name of the record that you’re creating.
• The bucket is configured as a website endpoint.
• The bucket was created by the current AWS account.
This one is the most important:
• The name of the bucket is the same as the name of the record that you’re creating.
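Under those constraints the whole setup is only a few Terraform resources; a sketch using the thread’s hostnames (the zone id is a placeholder, and the bucket name must exactly match the record name):

```hcl
# Bucket name must exactly match the record name (app.x.io).
resource "aws_s3_bucket" "redirect" {
  bucket = "app.x.io"
}

# Empty website bucket whose only job is to redirect every request.
resource "aws_s3_bucket_website_configuration" "redirect" {
  bucket = aws_s3_bucket.redirect.id

  redirect_all_requests_to {
    host_name = "app.x.com"
    protocol  = "https"
  }
}

# Alias record pointing app.x.io at the S3 website endpoint.
resource "aws_route53_record" "redirect" {
  zone_id = "Z0000000EXAMPLE" # hosted zone for x.io
  name    = "app.x.io"
  type    = "A"

  alias {
    name                   = aws_s3_bucket_website_configuration.redirect.website_domain
    zone_id                = aws_s3_bucket.redirect.hosted_zone_id
    evaluate_target_health = false
  }
}
```

Note that S3 website endpoints themselves only serve HTTP, so the incoming hop to app.x.io is plain HTTP even though the redirect target uses HTTPS.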
Hey all, can we have the same size of CPU and memory in ECS Fargate? e.g. cpu=2048 and memory=2048?
CPU value → allowed memory values (MiB):
• 256 (.25 vCPU): 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
• 512 (.5 vCPU): 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
• 1024 (1 vCPU): 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
• 2048 (2 vCPU): between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
• 4096 (4 vCPU): between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
that are the allowed combinations (copied from the documentation)
Thank you
so, in your case: 2048 (2 vCPU) Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)
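In Terraform terms, a valid pairing for that case looks like the sketch below (names and image are illustrative); cpu = 2048 with memory = 2048 would be rejected at task registration:

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 2048 # 2 vCPU
  memory                   = 4096 # 4 GB: the smallest memory allowed with 2 vCPU

  container_definitions = jsonencode([{
    name      = "app"
    image     = "nginx:stable"
    essential = true
  }])
}
```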
Curious what tags people think are critical? Here’s a list of the ones I think are generally useful, but would sure love to learn more:
• environment: [dev, qa, staging, prod, whatever]
• version control: [github, gitlab, whatever]
• cicd: [circle, github, gitlab, whatever]
• needs-to-stay-on-24hours: [true, false]
• various-can-cannot-be-public: [true, false]
• chargeback_id: 123456789
• department: [finance, it, eng, whatever]
• repo: some-github-repo
• product_owner: [[email protected]](mailto:[email protected])
still thinking
We have tags that specify:
• Owner (business unit, service name)
• Source (source repo and path)
• Environment
we called CostCenter what you have as chargeback, I guess. I would use camelCase or similar naming for all tags, not a mix of dashes and underscores. We had additional info on classification, e.g. data classification for S3 buckets; service tier could also be a good addition IMO. Plus, I see you have product owner, but I would add product as well just for grouping; a tech contact is also missing, e.g. LaunchedBy / OwnerTeam etc
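If the infrastructure is managed with Terraform (as elsewhere in this channel), the AWS provider’s default_tags block is a low-effort way to enforce a consistent baseline set; all keys and values here are illustrative:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates;
  # per-resource tags can still add or override keys.
  default_tags {
    tags = {
      Environment = "prod"
      CostCenter  = "123456789"
      OwnerTeam   = "platform"
      Source      = "github.com/example-org/infra"
      DataClass   = "internal"
    }
  }
}
```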
2022-09-07
can anyone help me assign the ECS Fargate public IP to the target group? Right now the private IP is assigned to the target group.
Hello I am having problems with Cloudmap + ecs service discovery. I am not able to ping or dig a container from another container(using ecs exec) in the same ecs fargate cluster(awsvpc mode). Anyone had a similar problem? Looking forward for replies. Thanks!
2022-09-08
When my AWS managed node groups (created with terraform-aws-modules/eks/aws//modules/eks-managed-node-group) change using Terraform (or related launch configs, security groups, etc.), and the MNG’s ASG is recycled, I have a min/max/desired of 1/2/1, and during the recycling it spins up as many as 7 additional EC2 instances before settling down on a single one.
Anybody else see this and/or know how to manage it?
This is expected behavior, and it’s based on the number of subnets. For example, we’re deployed in us-east-2, so there are 3 subnets, and our MNG is set to 1/1/1, so it spins up 2 new nodes in each AZ, before settling on one.
The Amazon EKS managed worker node upgrade strategy has four different phases described in the following sections.
2022-09-09
2022-09-10
Hello all, I am having trouble with Terraform. Basically the problem is somewhat related to an unreadable vpc_id, although I can see it gets read in the state file. Has anybody had a similar error before?
Hello!
I would like to clarify about cloudposse/eks-node-group/aws, so is it possible to disable random_pet ?
2022-09-11
Hey all, small question about Route 53. I’m using Kinsta as my domain host and Route 53 for DNS management. I need to renew the SSL certificate on my domain. I didn’t fully understand the process for doing it with the TXT record on Route 53; can someone answer a few questions?
Hi, you likely just need to add the TXT record to Route 53
Basically go into route53 and create the record they tell you with the value they provided you with
^ not sure if that helps or not. The TXT record allows them to verify that they can give you a cert for the domain. Otherwise you could request a cert for any domain and easily get a cert
2022-09-12
Hi all, our database has been attacked by SQL injection. We are using Aurora MySQL and CPU utilization is almost 100%. How can I stop this? Any suggestions?
Maybe list all active connections, verify if it’s the same IP for the attack, and block it with a security group rule
doesn’t sound like IP restrictions would help if the attack is SQL injection, you’ll want to kill the processes that are eating the CPU then patch the application(s) ASAP. If this is going to take “too long” you might choose to make your application connection to the DB read-only and/or potentially take an outage. but these are all considerations for the business team.
The MySQL database hangs due to some queries. How can I find the processes and kill them?
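A sketch of how that usually looks on Aurora MySQL; the RDS-specific kill procedures are needed because the plain KILL statement is not granted to RDS users, and the process id below is a placeholder:

```sql
-- Find the busiest non-idle connections, longest-running first.
SELECT id, user, host, db, time, state, LEFT(info, 120) AS query
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;

-- Kill the whole connection (RDS/Aurora wrapper around KILL):
CALL mysql.rds_kill(12345);

-- Or kill only the running statement, keeping the connection open:
CALL mysql.rds_kill_query(12345);
```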
anyone got an advice on how I could better present a service in EKS as an origin for a cloudfront distribution? I’m currently just going through my ingress controller to a domain name that the distribution reads, but that means I have an intermediate domain name for the ingress as well as a public origin that I’d rather secure down to just cloudfront.
I don’t think I can help you on this front but I am genuinely curious about your use case here. What kind of an EKS service is it that you need the CloudFront to deliver? I’ve only used CF for static websites and presenting static large media files, so that’s why I’m asking.
running a nodejs app, it has some static elements but a bunch of dynamic stuff as well. using cache-headers per endpoint to dictate to CF when stuff should be cached or not but it gives me a single endpoint for all the content.
Hey folks — Quick AWS Route53 question I have while migrating a client’s DNS architecture:
Is it possible to have two Route53 Hosted Zones control the same domain (e.g. *.example.com) across separate accounts? In that I have some records for www.example.com and *.example.com on Hosted Zone #1 and then I have similar records for *.example.com on Hosted Zone #2 as well?
I am hoping so if they both point their NS records at the correct, authoritative nameservers, but I figured I’d check here before I tested this out.
If they are public hosted zones, then yes it’s easy. In the zone hosting example.com, create ns records for subdomain.example.com and you’re golden
you can have example.com set up in Account A and subdomain.example.com in Account B; you would just set up the NS records in Account A to point at the nameservers for Account B
If they are private hosted zones, you need to do magic with route53 resolver rules, since private zones do not honor ns records
Ah no — Sorry misunderstanding. I’ve done a hosted zone delegation like example.com in one account and then *.subdomain.example.com in another account.
What I’m trying to do is:
Account One (Legacy) — Existing Hosted Zone for example.com Account Two (New) — New Hosted Zone for example.com
I want records that are created in both Hosted Zones to work. And then I’ll be creating delegated (e.g. *.subdomain.example.com) hosted zones in other accounts.
I don’t think the account boundaries actually matter, but it’s just to illustrate the point: This is because I’m working with a client who has all of their resources in one account right now and we’re building out a proper account hierarchy for them now.
I’m re-reading my initial question and I see how I made that confusing, my bad.
No, I don’t think you can do that? I’m trying to think how the ns records would look… You might be able to create the zones and records, but at some point you have to transfer the public ns records so public name servers resolve from the new zone… It’s basically a zone transfer
Ah this is from the AWS Route53 FAQs:
Q. Can I create multiple hosted zones for the same domain name?
Yes. Creating multiple hosted zones allows you to verify your DNS setting in a “test” environment, and then replicate those settings on a “production” hosted zone. For example, hosted zone Z1234 might be your test version of example.com, hosted on name servers ns-1, ns-2, ns-3, ns-4, ns-5, ns-6. Similarly, hosted zone Z5678 might be your production version of example.com, hosted on ns-7, ns-8, ns-9, ns-10, ns-11 and ns-12. Since each hosted zone has a virtual set of name servers associated with that zone, Route 53 will answer DNS queries for example.com differently depending on which name server you send the DNS query to.
But that doesn’t sound like what I would want…
what’s the goal of having both zones handling queries? not doubting you, just making sure I’m not recommending something that breaks the goal
I don’t want to touch the client’s existing Hosted Zone in their legacy all-in-one account. I’d rather leave that alone as it is and then manage a new hosted zone for all new records and delegated zones.
Hi all,
I have a pod in EKS configured with a ServiceAccount which configures a role for the pod to use. so AWS_ROLE_ARN=arn:aws:sts::000000000:assumed-role/podrole
aws sts get-caller-identity
{
"UserId": "0000E:botocore-session-0000000",
"Account": "000000",
"Arn": "arn:aws:sts::000000000:assumed-role/podrole/botocore-session-222222222"
}
i want to allow this role to assume another role in a different account via a profile in ~/.aws/config
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
credential_source = Environment
this is an example from the docs here. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
i was hoping credential_source would pick up the AWS_ROLE_ARN env vars set by the service account.
aws sts get-caller-identity --profile marketingadmin
Error when retrieving credentials from Environment: No credentials found in credential_source referenced in profile marketingadmin
does anyone have a work around?
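One workaround worth trying: credential_source = Environment only looks for static AWS_ACCESS_KEY_ID-style variables, not the AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN pair that IRSA injects. You can instead describe the pod role itself as a web-identity profile and chain from it. A sketch with placeholder ARNs (the token path is the standard IRSA mount):

```ini
# ~/.aws/config
[profile pod]
role_arn = arn:aws:iam::000000000000:role/podrole
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token

[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
source_profile = pod
```

Then `aws sts get-caller-identity --profile marketingadmin` should assume the cross-account role via the pod role, with no env-var juggling.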
2022-09-13
Hi all! A quick aws security question. Is there anyone who is using aws security hub and aws config with aws organizations? I am not able to see the resources from member accounts and I have “Config.1 AWS Config should be enabled” error. Do I need to enable aws config in each member account manually?
you can setup a delegated administrator account from your org settings and within that account you can configure security hub to automatically enroll all member accounts
@Darren Cunningham from security hubs side, everything looks fine. I can see the accounts in my organization. I believe my problem is with aws config. I am not sure on how to enable it in member accounts. Does delegated administrator account handle enabling aws config?
ah sorry, IIRC AWS config has “delegated admin” but the rollout of enabling AWS Config in all accounts/regions is not something that’s integrated into the product but there is a CF StackSet that’s provided: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html#config-how-to-enable
Learn about the requirements to enable and configure AWS Config before you enable Security Hub.
I only have 3 accounts per environment (qa, prod, staging, management). What do you think is the difference between enabling AWS Config manually vs deploying the StackSet?
well AWS Config also needs to be deployed per region so it’s accounts x regions which doesn’t sound fun to do manually
thank you after enabling config in the regions that my resources are in and after 24 hours, I was able to see the security scores in the management account’s security hub dashboard.
I am trying to get the aws-ebs-csi-driver Helm chart working on an EKS 1.23 cluster.
The message I am getting from PVC events
failed to provision volume with StorageClass "gp2": error generating accessibility requirements: no topology key found on CSINode
The CSI topology feature docs say that:
• The PluginCapability must support VOLUME_ACCESSIBILITY_CONSTRAINTS.
• The plugin must fill in accessible_topology in NodeGetInfoResponse. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
• During CreateVolume, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements.
I am not sure how to configure these points.
I looked at the worker nodes (ec2) launch template / user data. The kubelet root path was not the standard /var/lib/kubelet
. Instead it was a different one. I fixed the missing CSINode driver information by updating the volumes host paths with the correct kubelet root path.
hello. What is the limit on subaccounts? If I wanted to run a customer cluster in a separate subaccount, is that possible? Or is there a limit?
there’s a soft limit of 10 accounts but that can be increased with a service request - largest org I’ve seen was ~220 accounts but I’m sure there are larger ones
thanks
2022-09-14
One thing to be aware of is that it takes a lot more effort to delete an account than to create one. So depending on how long an engagement you expect from your users, it might not be worth the hassle.
it’s a lot easier now that they introduced https://docs.aws.amazon.com/cli/latest/reference/organizations/close-account.html
but it still has its limits
2022-09-15
cross-posting from hangops since I’m really looking for a solution:
does anyone know if there’s an automatic way to block pulling/consuming of a Docker image from AWS ECR if the said image has been discovered to have vulnerabilities? By automatic here I am thinking of even updating IAM policies with a DENY statement…
you mean something like this? https://github.com/aws-samples/aws-securityhub-remediations/tree/main/aws-ecr-continuouscompliance
good find @Maciek Strömich - it’s what I was looking for
Hello all, I am testing AWS Organizations SSO with an external IdP. Is it possible that SAML is the only option, and there is no OIDC?
https://docs.aws.amazon.com/singlesignon/latest/userguide/other-idps.html and is this requirement still valid? SAML 1.1?
Learn about how other external identity providers work with IAM Identity Center.
• IAM Identity Center requires a SAML nameID format of email address (that is, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
).
solved.
2022-09-16
2022-09-17
Hello all, can I store Docker images in S3 instead of ECR in order to optimize cost?
For example: if I use ECR with VPC endpoints (ecr.dkr, ecr.api), pricing is per VPC endpoint per AZ ($/hour), which is costly. But if I store Docker images in S3 with the gateway VPC endpoint for S3, which is free, and use the S3 image path inside the task definition, the cost might be less.
What is the best practice? What would be the disadvantages of storing Docker images in S3 instead of ECR? Is storing Docker images in S3 a correct approach?
you probably could rig up a solution to publish images to S3 and pull them via s3, but the cost of all that complexity (and likely marooning yourself from the integrations with image scanning, eks, fargate, etc) just isn’t worth it.
I am using ECS Fargate and I think I can integrate it with S3 and point to S3 inside the task definition instead of ECR?
because there is a lot of cost with ECR private endpoints. ECR uses S3 internally, so why not use S3 instead of ECR with the free S3 gateway endpoint?
could you please elaborate on why it isn’t worth it?
I don’t have all the data points about your situation, so I could be wrong. but, in general (IMO) the more off script you go, the more complexity you have. complexity has operational costs (more difficult to onboard other team members to home grown solution), takes longer to modify your solution in the event that it breaks or needs upgraded as service change and you end up finding yourself on the outside looking in when improvements to integrations are made by AWS.
Agree… increasing complexity to save a buck is a bad idea, at a minimum use an EC2 with S3 backed storage and create a private registry via docker. Again costs may be a wash in that scenario
Thanks guys
2022-09-18
2022-09-19
Hi everyone, when I restored an AWS Aurora instance, did I have to create the reader instance manually, or will it create reader instances automatically?
Hello, any alternatives for running a managed private CA? I feel AWS pricing is quite expensive ($400 per month + $0.75 per certificate)
Could you self sign a cert and then any certs could then just be imported into ACM ?
The drawback with that is that you’d have to renew your certs manually I believe
We need to be able to easily create/revoke SSL certificates, since these need to be deployed on IoT devices
any reason you need to use a private ca instead of amazon’s ca ?
We need to create and deploy Private certificates, we will use these to connect to an MQTT broker
Hashicorp vault can work as a Private CA, I think. Not sure how much cheaper it would be, especially if you need HA
How does the broker know the cert is valid?
You can revoke public ACM certificates. It requires a support request so the viability of this depends on how many certs you revoke
Private CA - look at easyrsa (this is used by openVPN for their certificates) https://github.com/OpenVPN/easy-rsa
easy-rsa - Simple shell based CA utility
2022-09-20
hello, can I create a 4eyes solution with aws resources for aws switch role ? idea is to give read permission to user and give the admin role with switch role but only with approval
AWS doesn’t natively support step up authorization for multiparty, you would need a dedicated solution for that. Or find a provider that offers off the shelf support
Do you know any?
I think I would need to know more about your specific use case to make a suggestion. Would you be able to add more color to it?
We just would like to add another layer of security. As a base step, everybody would get read-only access to the AWS console and/or programmatic access, and a few people (admins) could get admin access if there is any issue in the system, but for that access they would need to get approval. Using IAM roles
Usually I’ve found if you need this, something organizationally has gone wrong. Like more than ~8 people using one AWS account. The ROI on segregating AWS accounts at the team boundary is sooo high compared to implementing something like this. Also it won’t work with the console, you’d need something custom outside and it would only support API interactions.
what do you mean by ~8 people using one AWS account? You mean 8 admins in the account? We will implement an organization and separate the deployments and infra into separate OUs and subaccounts. This 4-eyes idea is a big dream of one of the managers, and that’s why I am trying to find something to implement.
yeah, that is correct currently. My idea and goal is, in the end, to create an account per customer. OK, for the dev part maybe we will have more than 8 users, but per customer I hope it will be 1 per account
I’m not sure what “customer” is here
How does “8 people per account” work with EKS?
we don’t use EKS. We create clusters with kops and use spot instances. The clusters are not too big, and we have not had any issue with that so far. customer = companies that bought our product. We have an isolation-by-design requirement.
Don’t use EKS. I’ll let you know when I encounter a problem at scale that requires its usage
And what is your advice to fulfill the manager’s request? :)
let me add my 2 cents to the conversation above. I would grant STS AssumeRole access to the users, so when you add a user to a specific role you can get approval for that. But when a user uses the role, all you can do is log the activities; no approval at that granularity is possible, and it would make the approver’s life a nightmare. Logging should be more than enough.
2022-09-21
looking for a bit of inspiration. I want to walk my AWS accounts on a regular basis (say, hourly), catalogue all EC2 instances that meet a certain set of tag conditions, and display details in a ‘status’ type way, e.g.: filter all EC2 where tag1=false, tag2!=bob; print {tag3, tag4, tag5} in a nice dashboard-type table. I thought this would be easy to do with Datadog and tags, but because it’s using just tags or conditional tag searches, it’s bad
AWS has several products that can do this
AWS Systems Manager Inventory is specifically for EC2
Hmm, I’d not looked at it that way,
that’s useful to think about
AWS Config may also be appropriate
config can get me the data, but it’s not great at displaying it
Why do you want to display it? A wall dashboard?
something like that yes
see the status of a specific deployed fleet
It might be reasonable to get data off the AWS event bus and push it to your own system then (like datadog)
I can ‘get’ the data with ease, visualising it in a human format that’s simple to read is where I’m looking for inspiriation
cli or web interface?
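For the CLI case, a minimal sketch of the walk-and-render idea; the tag names mirror the example above and are placeholders, and the boto3 fetch (commented at the bottom) assumes credentials for each account:

```python
def matching_instances(instances, require, exclude):
    """Keep instances whose tags equal everything in `require`
    and differ from everything in `exclude`."""
    out = []
    for inst in instances:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if all(tags.get(k) == v for k, v in require.items()) and \
           all(tags.get(k) != v for k, v in exclude.items()):
            out.append(tags)
    return out


def render_table(rows, columns):
    """Tiny fixed-width 'status dashboard' table; '-' marks missing tags."""
    lines = ["  ".join(c.ljust(12) for c in columns)]
    for tags in rows:
        lines.append("  ".join(str(tags.get(c, "-")).ljust(12) for c in columns))
    return "\n".join(lines)


# Against a real account (per-account credentials assumed):
#   import boto3
#   pages = boto3.client("ec2").get_paginator("describe_instances").paginate()
#   instances = [i for p in pages for r in p["Reservations"] for i in r["Instances"]]
#   print(render_table(matching_instances(instances, {"tag1": "false"}, {"tag2": "bob"}),
#                      ["tag3", "tag4", "tag5"]))
```

Run hourly (cron, EventBridge + Lambda, etc.) and the output can feed a wall dashboard or be pushed into Datadog as events.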
Hi everyone, how can I automate AWS Aurora backups so that partial data is exported into S3?
Can I ask architecture questions? I want to deploy a dotnet 6 application that is backed by PostgreSQL. The application exposes a REST API and also has an internally scheduled process that runs batch processing. I’m torn between splitting up the batch processing from the REST API, using Lambda+API Gateway for the API and a simple ECS container for the batch processing. OR, having containers for both things. I’m thinking about provisioned Aurora for PostgreSQL (serverless v2 seems really pricey for now)
I’m also torn between ECS and EKS, I feel that EKS might be overkill for now.
Any other options I’m missing?
Might be an easier question to ask during #office-hours
ECS fargate is probably the lowest complexity approach. Use the same compute type for all your apps for simplicity
I think EKS will increase cost and complexity. ECS Fargate would be best option.
Resurrecting an old topic. With aws-okta no longer maintained and no longer installable via Homebrew, what are folks using to grant CLI access to AWS via Okta? We use Okta SSO as a SAML provider for our AWS org.
Scouting around here an in other slack orgs, so far I’ve gathered (in this order of preference):
• https://github.com/godaddy/aws-okta-processor
I use the first one and like it a lot. It has the best model for understanding that there are two tokens/credentials with different expirations (one for okta, one or more per aws role) and managing them separately
The other option I’ve used is AWS SSO, with okta as the external IDP for AWS SSO, and SCIM syncing all users and groups… Then you can use anything that understands AWS SSO
When doing that, I like granted
a lot, https://github.com/common-fate/granted
The easiest way to access your cloud.
Or Leapp, but Leapp has problems with GovCloud and other non-standard AWS partitions
I’m looking into AWS SSO, thanks
Looks like granted is geared towards browser access?
I’m looking for a pure CLI tool (if I can login to the browser via CLI that’s a bonus but not the main goal)
Not necessarily. With granted, assume will export the creds to your env, assume -c will open the console in a browser container-tab, and assume exec will just run the command with the credential
They’re also working on a credential_process version that won’t muck with the env and will support a refreshable credential
Nice
Another vote for saml2aws
I tried out aws-okta-processor. So far so good. I just don’t see a way to switch between roles. I have to run rm -rf ~/.aws/boto/cache/ every time and do another eval $(aws-okta-processor authenticate --environment)
Oh I use it with credential_process, to avoid polluting my env and get a refreshable credential for free. So I have a different aws-cli profile for every role
I think for your use case, there is a cli option to disable the cache
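For reference, the per-role-profile setup mentioned above looks roughly like this; the exact aws-okta-processor flags should be checked against its README (by default the authenticate command emits JSON suitable for credential_process, so the --environment flag is not used here), and the org/user values are placeholders:

```ini
# ~/.aws/config -- one profile per role, each with its own cached credential
[profile engineering]
credential_process = aws-okta-processor authenticate --organization example.okta.com --user me@example.com
```

Switching roles then becomes `aws --profile engineering ...` vs `aws --profile admin ...`, with no cache-clearing or eval needed.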
Cheers, gonna keep exploring
https://docs.leapp.cloud/0.14.3/configuring-session/configure-aws-iam-role-federated/
you can add your okta federated role in Leapp
Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally
for AWS SSO it integrates automatically,
what are the issues with Leapp and GovCloud @loren? I would love to solve those issues.
Let me know if these docs solve your problem @yegorski
Leapp is a tool for developers to manage, secure, and access the cloud. Manage AWS and Azure credentials centrally
Leapp definitely has the “wow factor”, looks great
I see, I need a “valid SSO portal URL” so to use this tool I need to set up AWS SSO. This requires changing how I currently have our 6 AWS accounts connected, via federated login
What do you mean? Your accounts are connected via SAML with Okta?
Yeah right all accounts are connected with Okta SAML, going through the main AWS org account.
Old topic but I’ve used https://github.com/dowjones/tokendito in the past.
Generate temporary AWS credentials via Okta.
Thanks! Yep, that’s on my radar
We decided to stick with aws-okta
2022-09-22
Does anyone know why (technically) you can’t delete/modify an RDS instance that’s in the stopped state?
Does it have deletion protection activated in its config? Although that should not stop you from modifying it.
thanks, I’ll check it
Nope, even with deletion protection removed, when you try to delete an instance it asks you to start the cluster first
I’ve seen that, for both serverless v2 and provisioned RDS instances
I can’t tell you why, maybe it is to take the final snapshot? I do have the final snapshot turned on and haven’t tried it without
https://www.reddit.com/r/kubernetes/comments/xlfcs2/what_should_make_me_consider_moving_from_ecs_to/
Has lots of good insights.
2022-09-23
Hi, does AWS Database Migration Service work for RDS-to-RDS transfers? We have a new site going live and want to sync the prod database with a new database; after everything is verified we will switch from the old RDS to the new one
Hi! I’m not an expert, I’ve only used the service once or twice and for other purposes, but I think DMS should work for that use case (that’s the original use case, I guess). You can test it by setting RDS #1 as the source endpoint and RDS #2 as the target endpoint. Be careful if you are using Secrets Manager for the passwords: make a different secret for each RDS, or you can accidentally replicate the data back into the original RDS
@Aritra Banerjee: DMS indeed supports RDS like any other source or destination. When you choose RDS as a source, you can now select it from a drop-down list instead of having to manually enter host/port/service details, so that’s a bit nicer. In the docs, you’ll notice that both the source and target lists include RDS.
• https://medium.com/team-pratilipi/how-to-migrate-rds-to-rds-via-dms-b8f9b86f23c
• https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html
• https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Targets.html
wow, great thank you
2022-09-24
2022-09-27
Is there a way to force iam_policy_document
to output the principals as a list even if there’s a single element?
principals {
identifiers = ["arn:aws:sts::${local.account_id}:assumed-role/task-role/*"]
type = "AWS"
}
gets spit out to
"Principal": { "AWS": "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" }
but OpenSearch wants that as
"Principal": { "AWS": [ "arn:aws:sts::XXXXXXX:assumed-role/task-role/*" ] }
What do you mean, OpenSearch “wants” the latter format? The two forms are functionally identical.
Perhaps you could specify the statement as raw JSON rather than using the Terraform data source
Seems to be an AWS quirk, https://github.com/hashicorp/terraform/issues/6438
Terraform Version
Terraform v0.6.15
Affected Resource(s)
• aws_elasticsearch_domain
Terraform Configuration Files
provider "aws" {
region = "us-east-1"
}
resource "aws_iam_user" "es" {
name = "srv_user1"
}
resource "aws_iam_access_key" "es" {
user = "${aws_iam_user.es.name}"
}
resource "aws_elasticsearch_domain" "es" {
domain_name = "es1"
advanced_options {
"rest.action.multi.allow_explicit_index" = true
}
snapshot_options {
"automated_snapshot_start_hour" = 23
}
access_policies = <<CONFIG
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "es:*",
"Principal": {
"AWS": "${aws_iam_user.es.arn}"
}
}
]
}
CONFIG
}
Debug Output
https://gist.github.com/jritsema/8d4060e703c9a287753e1e0db5c41afd
Panic Output
none
Expected Behavior
An Elasticsearch domain should be created with a policy that grants access to the newly created user.
Actual Behavior
Throws the following error
Error applying plan:
1 error(s) occurred:
* aws_elasticsearch_domain.es: InvalidTypeException: Error setting policy: [ {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "es:*",
"Principal": {
"AWS": "arn:aws:iam::xxxxxxxx:user/srv_user1"
}
}
]
}
]
status code: 409, request id: 5ce1b757-1060-11e6-800a-c363f7f5dcbd
Steps to Reproduce
Please list the steps required to reproduce the issue
terraform apply
Important Factoids
none
References
• GH-4485
Notes
• if I run terraform apply
twice, it works the second time
If you have a statement with a wildcard and add a second statement AWS will barf if you don’t listify the principals in the second statement
Even trying to add it through the AWS Console will fail until you add the []s
try using just jsonencode instead of iam_policy_document
?
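A sketch of that `jsonencode` approach (domain name, region, and role path are placeholders): `jsonencode` emits the one-element list as a JSON array, whereas `aws_iam_policy_document` is what collapses it to a string:

```hcl
data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}

resource "aws_opensearch_domain" "example" {
  domain_name = "example"

  # jsonencode preserves the single-element principal list verbatim
  access_policies = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "es:*"
      Principal = {
        AWS = ["arn:aws:sts::${local.account_id}:assumed-role/task-role/*"]
      }
      Resource = "arn:aws:es:us-east-1:${local.account_id}:domain/example/*"
    }]
  })
}
```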
2022-09-28
hello,
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "MyOrgOnly",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::thebucketofmydreams",
"arn:aws:s3:::thebucketofmydreams/*"
],
"Condition": {
"ForAnyValue:StringLike": {
"aws:PrincipalOrgPaths": ["o-funny/r-stuff/ou-path"]
}
}
}
]
}
What is the issue with this? My goal is to give a subaccount under an OU in the organization access to a resource that is in another account in the same organization
difficult to answer without knowing what the problem is
I have an account under the infra OU and an account under the dev OU. I have an S3 bucket in infra that I would like to access from the account under the dev OU. But only one bucket and only from that account. I get a 403 when I try to download a file from that bucket
When you’re not using a wildcard, pretty sure you should be using the “ForAnyValue:StringEquals” operator
got same error, when I set it to GetObject and try to download file
if I add a */ after r-stuff… it works… I limited the access to GetObject, and the condition looks like this:
o-funny/r-stuff/*/ou-path/*
Can somebody explain why I need the *s in the condition? The second one I think I know, but the first?
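If I’m reading the AWS docs right: `aws:PrincipalOrgPaths` matches against the full path from the org root down to the account’s parent OU (`o-id/r-id/ou-1/ou-2/…`), so when your OU is nested, the first `*` stands in for the intermediate OU ids between the root and your OU, and the trailing `*` is needed because the path continues after your OU (a trailing slash, plus any child OUs). Something like:

```json
{
  "Condition": {
    "ForAnyValue:StringLike": {
      "aws:PrincipalOrgPaths": ["o-funny/r-stuff/*/ou-path/*"]
    }
  }
}
```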
2022-09-29
Hello, another day, another question. I have a VPC in account A and a private hosted zone in account B. I would like to associate them, but I don’t want to use creds from A. I created a role in A that can be called from B, but how can I call it? I need to automate this
solved
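For anyone finding this later, the usual cross-account sequence is an association authorization from the zone-owning account, then the association itself with credentials for the VPC-owning account. All ids here are placeholders:

```shell
# Sketch — zone id, VPC id, and region are placeholders.
# 1) In the zone-owning account (B): authorize the foreign VPC
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0000000000000 \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234

# 2) With credentials for the VPC-owning account (A), e.g. via an assumed role:
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0000000000000 \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234

# 3) Optionally clean up the authorization afterwards (from B)
aws route53 delete-vpc-association-authorization \
  --hosted-zone-id Z0000000000000 \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234
```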
2022-09-30
Hello! I have a dedicated connection with Direct Connect. According to the engineer who set up Direct Connect on their end, I should be able to telnet to a host on port 53. He told me I need to set the primary and backup DNS to x.x.x.1 and x.x.x.2 (I guess this is done by changing the DHCP option sets in the VPC, but I am not sure). Is that the right approach to set DNS per the engineer’s request? If so, how can I reach the instance via RDP on the private subnet? I think an RD Gateway could help, but I am a bit lost; changing DHCP makes the instance unreachable via VPC endpoints and SSM sessions
port 53 is the DNS port. Why are you telnetting a host on port 53?
If so how can I reach the instance via RDP on the private subnet?
This depends on where are you connecting from? Are you connecting from another Windows instance in one of your private subnets?
If yes, then simply go into your host’s Server Manager > IPv4 > the respective network interface settings > Advanced > DNS
and set your DNS entries there.
Yes that’s right it’s a windows vm on a private subnet
change your DNS on that Windows server. If you change AWS DHCP options, then you will have wider issues
The engineer sent me a screenshot of what I should see when doing telnet (port 53); he claims I should be able to connect
Inside the vm? Alright thank you so much!
Once I change the dns on the windows server, what would be the easiest way to rdp into it?
Simply launch RDP from your machine and connect to the other machine
I am a bit confused, the vm is inside a private subnet and it only has a private ip, can I still access it just with rdp without any vpn or bastion host?
ok, tell me this…
From where are you trying to access the VM? From your laptop? Or from another Windows host in your VPC?
From my laptop
Are you connecting your laptop to a VPN?
Not at the moment, I used to use SSM Sessions and RDP into it but changing the dns inside the vm makes the host unreachable
In all, your laptop and the VM you are trying to connect to must be on networks that have a route between each other. Currently, how is this routing established?
Via ssm, it opens a port and it allows me to access the vm on that given port in localhost
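For reference, that SSM port-forwarding step looks roughly like this (instance id and local port are placeholders):

```shell
# Sketch — forward local port 56789 to RDP (3389) on the instance via SSM
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["56789"]}'
# then point your RDP client at localhost:56789
```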
if you are able to access the VM via SSM what is the issue that you want resolved?
you can also use FLEET MANAGER
to directly RDP from AWS console
I think I get you
you log into that VM using SSM, and on the VM, launch Server Manager > IPv4 > the respective network interface settings > Advanced > DNS and set the DNS on the VM
after changing DNS on the VM as described above, you can still connect to it via SSM
Fleet manager? That sounds good I will definitely give it a go, thank you so much!
I did not find any info about modifying a transit gateway. My question is: will there be any outage if I modify the TGW to enable cross-account auto-accept of shared attachments?
So I’m looking at being prepared to upgrade AWS EKS cluster to 1.23+ which requires the EBS CSI driver. Currently using the cloudposse/eks-cluster/aws
module and looking to see if anyone else has already attempted this and if so what changes are needed
Hi did you find a solution for this?
Not one I particularly like… I manually upgraded the cluster and node group through console and then updated terraform version to match. I found if I changed the version in the terraform then the plan would fail.
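For the record, the piece 1.23 usually needs is the EBS CSI driver as a managed add-on backed by an IRSA role. A sketch, where the module reference and role name are assumptions against the cloudposse module, not its documented interface:

```hcl
# Sketch — the IRSA role (placeholder name) needs the AWS-managed
# AmazonEBSCSIDriverPolicy attached before the add-on will come up healthy.
resource "aws_eks_addon" "ebs_csi" {
  cluster_name             = module.eks_cluster.eks_cluster_id
  addon_name               = "aws-ebs-csi-driver"
  service_account_role_arn = aws_iam_role.ebs_csi_irsa.arn
}
```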