#aws (2022-10)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2022-10-02
Hi everyone,
I am trying to get the output of a command using the AWS SDK for JavaScript v3, but I am having trouble understanding how to retrieve it.
const client = new SSMClient({ region: "us-west-2" });
const command = new SendCommandCommand(SSM_Command_Parameters);
const response = await client.send(command);
const ssm_output_id = response.Command?.CommandId;
I am getting the command id from this but I am unable to figure out how to get the actual output of the command. Any help will be appreciated
I’d say call ListCommandInvocations. https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListCommandInvocations.html
An invocation is copy of a command sent to a specific managed node. A command can apply to one or more managed nodes. A command invocation applies to one managed node. For example, if a user runs SendCommand against three managed nodes, then a command invocation is created for each requested managed node ID.
It didn’t return the output, unfortunately
Have you investigated all of the data structure?
Yes, the output is not there. At this point I’m thinking about just using the CLI to perform the task.
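For what it’s worth, the SDK does expose the output via GetCommandInvocation once the command has finished running. A minimal sketch in SDK v3 (the instance ID and the polling delay are made up for illustration):
import { SSMClient, SendCommandCommand, GetCommandInvocationCommand } from "@aws-sdk/client-ssm";

const client = new SSMClient({ region: "us-west-2" });

// Same parameters object as in the question above
const sendResponse = await client.send(new SendCommandCommand(SSM_Command_Parameters));
const commandId = sendResponse.Command?.CommandId;

// Poll GetCommandInvocation until the invocation leaves the in-progress states.
// (It can also throw InvocationDoesNotExist if called immediately after SendCommand.)
let invocation;
do {
  await new Promise((resolve) => setTimeout(resolve, 2000)); // illustrative 2s delay
  invocation = await client.send(
    new GetCommandInvocationCommand({
      CommandId: commandId,
      InstanceId: "i-0123456789abcdef0", // placeholder: the managed node you targeted
    })
  );
} while (["Pending", "InProgress", "Delayed"].includes(invocation.Status));

console.log(invocation.StandardOutputContent); // stdout of the command
console.log(invocation.StandardErrorContent);  // stderr, if any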
2022-10-03
2022-10-05
I have a question about ACM Private CA pricing if anyone knows: it’s $400 per month per CA, right, but does that mean they charge you for each subordinate CA you add to your structure even in the same account/region? So if you have a security account with a root CA and 4 subordinate CAs that’s $2000 per month? The pricing just gets worse and worse the more you try to follow general or PCA best practices…
every subordinate is another 400/mo
Thanks, needed a sanity check on that
the least expensive option with security is a root CA to sign a subordinate cert, and then RAM-share the single subordinate cert.
that would be 800/mo without getting additional granularity.
i found this out the hard way…
you can also see this using infracost
Yes looks like that’s the route, just one subordinate per region with a ram share
Oh you mean have the root CA in PCA as well, but you still need a subordinate in each region right?
yep you still need a subordinate per region
the minimum best practice is to have 1 root and 1 shared subordinate per region
if you get more creative, it will cost you
I suppose using one root CA for all regions, in your primary region, raises DR problems? and having the root outside of PCA is its own security challenge
If you have something else managing the root outside AWS PCA, I suppose you wouldn’t need to worry about that part
Companies love extorting for security features (see SSO as well)
I don’t yet have a root so it’s still hypothetical where we’d put it
I’ve also heard of hashicorp vault being used fairly often as CA solution. Not sure it ends up any cheaper
Is there a way to effectively use cloudtrail logs to alert on suspicious logins or monitor login activity to console?
you probably could cobble something together, but AWS GuardDuty is the AWS recommended way to solve that
Guardduty does not look at historical data. I want to look at the historical data.
reasons CloudTrail sucks for 1000, Alex…
the best way to track logins is via your directory provider and the logs therein, be it AD or another cloud directory service
and a good log parser… Splunk/ELK/Graylog/etc.
you can use cloudwatch logs insights for this using a query like the one below against the cloudtrail log group:
fields @timestamp, @message
| filter eventName = "ConsoleLogin"
| filter errorMessage = "Failed authentication"
| sort @timestamp desc
| limit 20
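If you’d rather run that from a script against historical data, a rough sketch using the CLI (the log group name and time range are placeholders; `date -d` assumes GNU date):
log_group="aws-cloudtrail-logs-example"   # placeholder: your CloudTrail log group
query_id=$(aws logs start-query \
  --log-group-name "$log_group" \
  --start-time "$(date -d '-30 days' +%s)" \
  --end-time "$(date +%s)" \
  --query-string 'fields @timestamp, @message | filter eventName = "ConsoleLogin" | filter errorMessage = "Failed authentication" | sort @timestamp desc | limit 20' \
  --query 'queryId' --output text)

# Insights queries run asynchronously; give it a moment, then fetch the results
sleep 10
aws logs get-query-results --query-id "$query_id"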
2022-10-06
Hi! I’m using this module https://github.com/cloudposse/terraform-aws-ecs-codepipeline with codestar_connection_arn to use GitHub v2 instead of the deprecated version. When I apply the changes, it always tries to create 2 pipelines and fails because it can’t create 2 pipelines with the same name… Is it a bug, or is there a way to fix it?
I ran into this ages ago. I think it is because the state thinks your resource exists. You can get around it by renaming it to something else.
I already destroyed the entire infrastructure, and when I run the pipeline it still tries to create 2 pipelines. I renamed it twice.
I’m going to refactor the variables later; as you can see, I’m not calling the module with for_each or count.
Looking at the graph, it is trying to create something related to Bitbucket… (I just want to use GitHub v2)
Well, the solution then: if you want to use GitHub v2, you just need to set codestar_connection_arn normally while omitting the GitHub OAuth token; if you set both, the module tries to create 2 pipelines.
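A rough sketch of what that ends up looking like (everything except the two GitHub-related inputs is elided; variable names follow the module README as described above, so double-check them against the version you pin):
module "ecs_codepipeline" {
  source  = "cloudposse/ecs-codepipeline/aws"
  version = "x.y.z" # pin to the version you actually use

  # ... your existing repo/branch/service inputs stay as they are ...

  # GitHub v2: set only the CodeStar connection
  codestar_connection_arn = "arn:aws:codestar-connections:us-east-1:111111111111:connection/EXAMPLE"

  # and leave github_oauth_token unset; setting both makes the module
  # build both the OAuth-based and the CodeStar-based pipeline under the same name
}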
2022-10-07
Does anybody have experience with AWS EKS using AWS EFS?
I need a place to store/read some data (5-10MB file) very fast and have it available consistently on multiple pods.
S3?
S3 latency is too high
Can you talk more to the requirements? e.g. How often does the file change? What’s the need for speed?
If you have your heart set on EFS, FWIW we usually set EFS as the default storage class after creating new EKS clusters
https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
But I do not want to take away from Sono’s question. Understanding the problem is more important here
there’s also https://aws.amazon.com/about-aws/whats-new/2022/09/amazon-file-cache-generally-available/
2022-10-10
What is the most elegant way to work with private hosted zones in an organization? Let’s say we have a tool in account A that needs to access resources in account B, where account B uses a private hosted zone. Is the only way the following?
• authorize from account B so account A can add the VPC to the hosted zone
• add the VPC to the hosted zone using an account A IAM role
That works. Or a ram share, and route53 resolver rules can be pretty flexible. Depends what you need
@RB
The way we’ve done it is the way you’ve described in your above 2 bullets. We’ve done it using a couple terraform components.
We haven’t explored doing the RAM share and r53 resolver rules but that should work too.
thanks
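For reference, the two bullets map roughly to these CLI calls (zone and VPC IDs are placeholders):
# In account B (owner of the private hosted zone): authorize account A's VPC
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0a1b2c3d4e5f67890

# In account A (owner of the VPC): associate the VPC with the zone
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0a1b2c3d4e5f67890

# Optionally, back in account B: remove the authorization once the association exists
aws route53 delete-vpc-association-authorization \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0a1b2c3d4e5f67890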
What are some parameters worth considering for tuning PostgreSQL performance on RDS? Any suggestions based on real implementations?
Nothing really, definitely nothing specific to RDS. What problems are you having?
I want to optimize Postgres performance, so I’m looking for input on which parameters people usually start tuning, e.g. wal_buffers, work_mem, maintenance_work_mem, and so on.
Our workload is write and read intensive. We have a replica setup but at times there is a huge replica_lag
hi, have you seen these two? https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server + https://github.com/jfcoz/postgresqltuner
Simple script to analyse your PostgreSQL database configuration, and give tuning advice
Thanks @Zoltan K, this looks promising!
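If it helps, a hedged Terraform sketch of a custom parameter group for adjusting a few of those knobs (the group name, family, and values are placeholders to show the mechanism, not recommendations):
resource "aws_db_parameter_group" "postgres_tuning" {
  name   = "postgres14-tuning-example" # hypothetical name
  family = "postgres14"                # match your engine major version

  parameter {
    name  = "work_mem"
    value = "65536" # in KB; placeholder value
  }

  parameter {
    name         = "maintenance_work_mem"
    value        = "524288" # in KB; placeholder value
    apply_method = "immediate"
  }

  parameter {
    name         = "wal_buffers"
    value        = "2048" # in 8KB pages; static parameter, needs a reboot
    apply_method = "pending-reboot"
  }
}
You would then point the instance at it via parameter_group_name on aws_db_instance and compare replica lag before and after.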
2022-10-11
Is there any issue with the Ohio "a" zone (us-east-2a) currently?
Their status page shows nothing… where can we get accurate status reports?
This particular notification was sent to the root account email. Not sure if it was on the AWS Status Page or not.
thanks
I did not find any email in my account, but will check it.
2022-10-12
2022-10-13
I have Prometheus on cluster A and would like to monitor cluster B. Currently, when I create cluster B, I modify the ConfigMap and reload Prometheus so it can monitor the new cluster. Since we are moving to organization-based accounts, I need to do this modification from a subaccount. My idea is to move the ConfigMap to S3 and share it between accounts, so I can modify it from account B without a permission request to cluster A or account A… Does anyone know how I could detect the S3 modification? I only found mounting the S3 bucket and using inotify… any other direct way? :D
Hi, I’m receiving the following error -> Packet for query is too large (5,739,780 > 4,194,304).
When checking on the MySQL side, I saw the allowed value is between 1024 and 1073741824. When I try to increase it beyond this limit, it does not let me. Any suggestions?
You need to mention which service you are asking for help about. I guess this is RDS?
You can configure MySQL settings with a parameter group.
Not letting as in, is any particular error shown?
Thank you all, I solved it. I had configured the wrong parameter group; that is why I still had the error after increasing the value.
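For anyone landing here later, a sketch of checking which group the instance actually uses and then bumping max_allowed_packet on it (instance identifier, group name, and value are placeholders):
# Confirm which parameter group the instance really has attached
aws rds describe-db-instances --db-instance-identifier my-db \
  --query 'DBInstances[0].DBParameterGroups'

# Raise max_allowed_packet (in bytes) on that group; it is a dynamic parameter
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"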
2022-10-14
2022-10-18
Couple EKS permission questions:
- The IAM user that created the EKS cluster is given special permissions outside of aws-auth. Where can I observe that permission assignment?
- What happens when that original IAM user is deleted? Can that be done? (And if so, without anything else in aws-auth, presumably one completely loses access to the cluster; can it be regained?)
Do you know how the cluster was provisioned?
you can try checking for an oidc provider
oidc_id=$(aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
Determine whether an IAM OIDC provider with your cluster’s ID is already in your account.
aws iam list-open-id-connect-providers | grep $oidc_id
If output is returned from the previous command, then you already have a provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster.
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
Thanks @Alanis Swanepoel - I’ll be able to obtain this information tomorrow morning.
When you say must - must to achieve/allow what?
the OIDC provider provisions a role you can use to access the cluster
@Alanis Swanepoel I can see there is no OIDC provider configured
Am I right in thinking that I only need to use an OIDC provider for IRSA (IAM Roles for Service Accounts)? (having looked at AWS docs)
The only immediate requirement is allowing a couple IAM roles to be mapped to a cluster role in aws-auth, so I’m thinking OIDC isn’t immediately required
A “must” always alarms me somewhat when I don’t (yet) see it needed (more from the perspective of worrying that I am missing something)
let me clarify the Must
in order to associate an OIDC provider with your cluster, one MUST exist
you can’t associate a non-existent provider with the cluster
those steps are to check if an oidc provider has been (1) provisioned in your aws account, and (2) mapped to your cluster
Learn how to create an AWS Identity and Access Management OpenID Connect provider for your cluster.
I wasn’t specifically looking to associate an OIDC provider, though - that’s what you suggested
My current understanding is that I’ll need an OIDC provider once I need to start using IRSA, but for the purposes of allowing another role access to the cluster in the aws-auth configmap, I’ll be fine without.
Assuming that’s correct, and that the cluster creator’s IAM user can be safely deleted after adding a different role to the aws-auth configmap, I think I’m good. Thanks!
Gotcha - I may have misunderstood your question then - I interpreted it as: I have an EKS cluster, and I want to give others in my org access to it through an IAM role, without messing with the ConfigMap
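On the original aws-auth question, a sketch of mapping an additional IAM role before removing the creator’s access (cluster name, region, role ARN, and group are placeholders; verify the new role actually works before deleting the creating user, since as far as I know recovering access otherwise tends to mean an AWS support ticket):
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-west-2 \
  --arn arn:aws:iam::111111111111:role/cluster-admins \
  --group system:masters \
  --username cluster-admin

# Confirm the mapping landed in the aws-auth ConfigMap
kubectl -n kube-system get configmap aws-auth -o yaml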
2022-10-19
2022-10-20
Hey all!
I need to create a ReadWriteMany volume in my EKS env. I tried with a gp2/gp3 StorageClass but I’m getting this error:
Warning ProvisioningFailed 9s (x6 over 76s) persistentvolume-controller Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
Does someone know how to create one?
If memory serves, EBS doesn’t support ReadWriteMany; you need to use EFS to get that working
Correct you need an EFS volume
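A minimal sketch of an EFS-backed StorageClass plus a ReadWriteMany claim, written with the Terraform Kubernetes provider for illustration (assumes the EFS CSI driver from the docs linked earlier is installed; the names and filesystem ID are placeholders):
resource "kubernetes_storage_class" "efs" {
  metadata {
    name = "efs-sc" # hypothetical name
  }
  storage_provisioner = "efs.csi.aws.com"
  parameters = {
    provisioningMode = "efs-ap"                  # dynamic provisioning via access points
    fileSystemId     = "fs-0123456789abcdef0"    # placeholder EFS filesystem ID
    directoryPerms   = "700"
  }
}

resource "kubernetes_persistent_volume_claim" "shared" {
  metadata {
    name = "shared-data" # hypothetical name
  }
  spec {
    access_modes       = ["ReadWriteMany"] # supported by EFS, not by EBS-backed gp2/gp3
    storage_class_name = kubernetes_storage_class.efs.metadata[0].name
    resources {
      requests = {
        storage = "5Gi" # EFS ignores the size, but the field is required by the PVC schema
      }
    }
  }
}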
2022-10-21
Hello. Anyone know how to modify an existing S3 policy using the cloudfront-s3-cdn module?
I’m trying to use the cloudfront-s3-cdn module to create two CloudFront distros - pointing at different directories in the same S3 bucket.
I have successfully created the two CF distros, and have them pointing at the correct origins, and can see that the Response Header Policies are working correctly. The problem I am running into is I cannot figure out how to modify the existing S3 policy to allow the second CF distro access.
When I set override_origin_bucket_policy to true and run terraform plan, it looks like the existing policy will be wiped out and automatically replaced (which would break the integration between the first CF distro and the bucket).
When I set additional_bucket_policy and run terraform plan, it appears to have no effect.
See example code in thread
Update: I was using the wrong syntax.
You need to do something like this:
data "aws_iam_policy_document" "overrides" {
statement {
sid = "S3GetObjectForCloudFront"
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"$${cloudfront_origin_access_identity_iam_arn}"
]
}
actions = ["s3:GetObject"]
resources = ["arn:aws:s3:::$${bucket_name}$${origin_path}*"]
}
statement {
sid = "S3ListBucketForCloudFront"
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"$${cloudfront_origin_access_identity_iam_arn}"
]
}
actions = ["s3:ListBucket"]
resources = ["arn:aws:s3:::$${bucket_name}"]
}
}
module "cdn" {
source = "cloudposse/cloudfront-s3-cdn/aws"
version = "0.83.0"
name = "Policy Overrides Example"
origin_bucket = "foobar"
origin_path = "/bazqux"
override_origin_bucket_policy = true
additional_bucket_policy = data.aws_iam_policy_document.overrides.json
}
2022-10-24
Until ECR gets native cache manifest support, the recently launched & experimental S3 cache is worth a shot.
Aaron used it to build Carbon (the OSS pretty terminal code screenshot thingie) and the build time went down from 106 seconds to just 11 seconds. Massive improvement! https://twitter.com/aaronbatilo/status/1584233678850269187
How to use the S3 for your docker layer cache on A slice of experiments https://sliceofexperiments.substack.com/p/how-to-use-the-s3-for-your-docker?utm_source=twitter&utm_campaign=auto_share&r=ir09e
(I posted this in #aws and not #github-actions cause folks using other CIs can use the S3 caching too)
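A minimal sketch of what using the S3 cache backend looks like (bucket, region, cache name, and image tag are placeholders; it needs a buildx driver that can export cache, e.g. docker-container, and was still marked experimental at the time):
docker buildx build \
  --cache-to   type=s3,region=us-east-1,bucket=my-buildkit-cache,name=carbon,mode=max \
  --cache-from type=s3,region=us-east-1,bucket=my-buildkit-cache,name=carbon \
  --push -t 111111111111.dkr.ecr.us-east-1.amazonaws.com/carbon:latest .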
Hi,
I verified a domain in SES a long time back using a 1024-bit DKIM key. I have now updated the key to 2048-bit. The issue is that out of the three CNAME records, only one shows a key length of 2048 bits, another shows 1024 bits, and another shows empty. The 1024-bit key is flagged in a BitSight report. Any help with this will be appreciated.
I’ve got a weird one. We’re using terraform-aws-eks-cluster (2.3.0) and terraform-aws-eks-iam-role (0.10.3). During an upgrade operation, the cluster wanted to update its OIDC provider thumbprint.
# module.eks_cluster.aws_iam_openid_connect_provider.default[0] will be updated in-place
~ resource "aws_iam_openid_connect_provider" "default" {
id = "arn:aws:iam::276255499768:oidc-provider/[REDACTED]
tags = {
"Attributes" = "cluster"
"Environment" = "[REDACTED]"
"Name" = "[REDACTED]"
}
~ thumbprint_list = [
- "9e99a48a9960b14926bb7f3b02e22da2b0ab7280",
] -> (known after apply)
# (4 unchanged attributes hidden)
}
Didn’t seem like a big deal, but plans were failing with the following:
Error: Invalid for_each argument
on .terraform/modules/dev_services.eks_iam_role/main.tf line 79, in resource "aws_iam_policy" "service_account":
79: for_each = var.aws_iam_policy_document != null ? toset(compact([module.service_account_label.id])) : []
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
It looks like eks-iam-role v1.0.0 might fix the for_each situation; is anyone able to confirm? What’s really got me puzzled is: why is the thumbprint update affecting the iam-role module at all? As far as I can tell, a thumbprint list change doesn’t change the value of eks_cluster_identity_oidc_issuer, which is passed into the module as eks_cluster_oidc_issuer_url
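If anyone hits the same plan failure, the workaround the error message itself suggests looks roughly like this (the resource address is taken from the plan output above):
# Apply the OIDC provider change on its own first...
terraform apply -target='module.eks_cluster.aws_iam_openid_connect_provider.default[0]'

# ...then run the full plan/apply normally
terraform apply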
2022-10-25
2022-10-26
Has anybody observed differences between what IAM Policy Simulator reports as allowed vs what is truly allowed via the CLI?
Running via CLI:
aws sts assume-role --role-arn arn:aws:iam::MY_ACCOUNT:role/MY_ROLE --role-session-name test --source-identity MY_SOURCE_IDENTITY
Yields:
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::MY_ACCOUNT:user/MY_USER is not authorized to perform: sts:SetSourceIdentity on resource: arn:aws:iam::MY_ACCOUNT:role/MY_ROLE
Whilst IAM Policy Simulator, using exactly the same action (SetSourceIdentity) and role resource, as the same IAM user, reports allowed :thinking_face: (A separate AssumeRole action for that same role also shows as allowed in the simulator)
heh. it probably is evaluating only the identity policy. but sts assume-role is actually performing the action, and that also requires that the trust policy on the role allow the action… and if you are passing the source-identity, then you need to allow sts:SetSourceIdentity in the role trust policy
Right on the money. Well done @loren!
I feel like the UI should shout this clearer (or at all)!
yeah, the interaction between an identity policy and a resource policy (and SCP and KMS policy and VPC Endpoint policy) can get pretty confounding to troubleshoot
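For completeness, a sketch of a role trust policy allowing both actions, written as a Terraform policy document like the one earlier in the channel (the account and names are placeholders):
data "aws_iam_policy_document" "trust" {
  statement {
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:user/MY_USER"] # placeholder caller
    }
    # AssumeRole alone is not enough when --source-identity is passed on the CLI
    actions = ["sts:AssumeRole", "sts:SetSourceIdentity"]
  }
}

resource "aws_iam_role" "example" {
  name               = "MY_ROLE" # placeholder
  assume_role_policy = data.aws_iam_policy_document.trust.json
}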
2022-10-28
Hello Folks, is there a terraform module that any of you know of/use to create a tunnel via SSM?
Hello, we are planning to replicate our AWS RDS database into Azure for Disaster Recovery purposes, what would be the best service from AWS or Azure to achieve this task effectively?