#aws (2022-10)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2022-10-02

Aritra Banerjee avatar
Aritra Banerjee

Hi everyone,

I am trying to get the output of a command using the AWS SDK for JavaScript v3, but I am having trouble understanding how to retrieve it.

import { SSMClient, SendCommandCommand } from "@aws-sdk/client-ssm";

const client = new SSMClient({ region: "us-west-2" });
const command = new SendCommandCommand(SSM_Command_Parameters);
const response = await client.send(command);

// SendCommand only returns metadata about the dispatched command,
// not the command's output
const ssm_output_id = response.Command?.CommandId;

I am getting the command id from this but I am unable to figure out how to get the actual output of the command. Any help will be appreciated

github140 avatar
github140
ListCommandInvocations - AWS Systems Manager

An invocation is a copy of a command sent to a specific managed node. A command can apply to one or more managed nodes. A command invocation applies to one managed node. For example, if a user runs SendCommand against three managed nodes, then a command invocation is created for each requested managed node ID.

Aritra Banerjee avatar
Aritra Banerjee

it didn’t return the output, unfortunately

github140 avatar
github140

Have you inspected the entire response data structure?

Aritra Banerjee avatar
Aritra Banerjee

yes, the output is not there; at this point I’m thinking about just using the CLI to perform the task
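
For what it’s worth, the per-instance output is exposed by GetCommandInvocation once the command finishes. A minimal sketch with the v3 SDK (the instance ID, waiter timeout, and parameters are placeholders/assumptions):

import {
  SSMClient,
  SendCommandCommand,
  GetCommandInvocationCommand,
  waitUntilCommandExecuted,
} from "@aws-sdk/client-ssm";

const client = new SSMClient({ region: "us-west-2" });

// SSM_Command_Parameters is the same SendCommand input as above
const { Command } = await client.send(new SendCommandCommand(SSM_Command_Parameters));
const commandId = Command?.CommandId;

// One invocation exists per targeted instance, so output is fetched per instance.
// "i-0123456789abcdef0" is a placeholder for one of the targeted instances.
const instanceId = "i-0123456789abcdef0";

// Wait until the command has finished executing on that instance
await waitUntilCommandExecuted(
  { client, maxWaitTime: 60 },
  { CommandId: commandId, InstanceId: instanceId }
);

// GetCommandInvocation returns the stdout/stderr of the run on that instance
const invocation = await client.send(
  new GetCommandInvocationCommand({ CommandId: commandId, InstanceId: instanceId })
);
console.log(invocation.StandardOutputContent);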

2022-10-03

2022-10-05

Ray Botha avatar
Ray Botha

I have a question about ACM Private CA pricing if anyone knows: it’s $400 per month per CA, right, but does that mean they charge you for each subordinate CA you add to your structure even in the same account/region? So if you have a security account with a root CA and 4 subordinate CAs that’s $2000 per month? The pricing just gets worse and worse the more you try to follow general or PCA best practices…

RB avatar

every subordinate is another 400/mo

Ray Botha avatar
Ray Botha

Thanks, needed a sanity check on that

RB avatar

the least expensive option w/ security is a root CA that signs a single subordinate CA, and then RAM-share that one subordinate CA.

that would be 800/mo without getting additional granularity.
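
For reference, sharing the subordinate CA org-wide is a single RAM call. A rough sketch with placeholder ARNs:

# Share the subordinate CA with the whole organization via RAM
aws ram create-resource-share \
  --name shared-subordinate-ca \
  --resource-arns arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/11111111-2222-3333-4444-555555555555 \
  --principals arn:aws:organizations::111122223333:organization/o-exampleorgid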

RB avatar

i found this out the hard way…

RB avatar

you can also see this using infracost

Ray Botha avatar
Ray Botha

Yes looks like that’s the route, just one subordinate per region with a ram share

Ray Botha avatar
Ray Botha

Oh you mean have the root CA in PCA as well, but you still need a subordinate in each region right?

RB avatar

yep you still need a subordinate per region

RB avatar

the minimum best practice is to have 1 root and 1 shared sub per region

RB avatar

if you get more creative, it will cost you

Ray Botha avatar
Ray Botha

I suppose using one root CA for all regions, in your primary region, raises DR problems? and having the root outside of PCA is its own security challenge

loren avatar

PCA is

1
loren avatar

If you have something else managing the root outside AWS PCA, I suppose you wouldn’t need to worry about that part

Ray Botha avatar
Ray Botha

Companies love extorting for security features (see SSO as well)

Ray Botha avatar
Ray Botha

I don’t yet have a root so it’s still hypothetical where we’d put it

loren avatar

I’ve also heard of HashiCorp Vault being used fairly often as a CA solution. Not sure it ends up any cheaper

srinandu2291 avatar
srinandu2291

Is there a way to effectively use cloudtrail logs to alert on suspicious logins or monitor login activity to console?

Darren Cunningham avatar
Darren Cunningham

you probably could cobble something together, but AWS GuardDuty is the AWS recommended way to solve that

1
srinandu2291 avatar
srinandu2291

Guardduty does not look at historical data. I want to look at the historical data.

jsreed avatar

reasons CloudTrail sucks for 1000, Alex…

jsreed avatar

best way to track the logins is via your directory provider and the logs therein, be it AD or another cloud directory service

jsreed avatar

and a good log parser… Splunk/ELK/Graylog/etc…

Brian Ji avatar
Brian Ji

you can use cloudwatch logs insights for this using a query like the one below against the cloudtrail log group:

fields @timestamp, @message
| filter eventName = "ConsoleLogin"
| filter errorMessage = "Failed authentication"
| sort @timestamp desc
| limit 20
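
The same query also works from the CLI if you want to script it. A sketch with a placeholder log group name and epoch timestamps:

query_id=$(aws logs start-query \
  --log-group-name my-cloudtrail-log-group \
  --start-time 1664582400 \
  --end-time 1667260800 \
  --query-string 'fields @timestamp, @message | filter eventName = "ConsoleLogin" | filter errorMessage = "Failed authentication" | sort @timestamp desc | limit 20' \
  --query queryId --output text)

# Results are available once the query finishes running
aws logs get-query-results --query-id "$query_id"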

2022-10-06

Paula avatar

Hi! I’m using this module https://github.com/cloudposse/terraform-aws-ecs-codepipeline with codestar_connection_arn to use GitHub v2 rather than the deprecated version. When I apply the changes, it always tries to create 2 pipelines and fails because it can’t create 2 pipelines with the same name… is it a bug, or is there a way to fix it?

Dean Lofts avatar
Dean Lofts

I ran into this ages ago. I think it is because the state thinks your resource exists. You can get around it by renaming it to something else.

Paula avatar

I already destroyed the entire infrastructure, and when I run the pipeline it still tries to create 2 pipelines; I’ve renamed it twice

Paula avatar

I’m going to refactor the variables later; as you can see, I’m not calling the module with for_each or count

Paula avatar

looking at the graph, it’s trying to create something related to Bitbucket… (I just want to use GitHub v2)

Paula avatar

well, the solution then: if you want to use GitHub v2 you just need to set codestar_connection_arn as usual while omitting the GitHub OAuth token; if you set both, the module tries to create 2 pipelines
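
In module terms that looks roughly like this (a sketch; the github_oauth_token variable name is from memory, so double-check it against the module’s inputs):

module "ecs_codepipeline" {
  source = "cloudposse/ecs-codepipeline/aws"

  # GitHub v2: supply only the CodeStar connection...
  codestar_connection_arn = aws_codestarconnections_connection.github.arn

  # ...and leave the OAuth token unset; supplying both makes the module
  # create two pipelines with the same name
  # github_oauth_token = "..."  # do NOT set this

  # ... remaining module inputs ...
}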

2022-10-07

Gabriel avatar
Gabriel

Does anybody have experience with AWS EKS using AWS EFS?

I need a place to store/read some data (5-10MB file) very fast and have it available consistently on multiple pods.

Alex Jurkiewicz avatar
Alex Jurkiewicz

S3?

Gabriel avatar
Gabriel

S3 latency is too high

Sono Chhibber avatar
Sono Chhibber

Can you talk more to the requirements? e.g. How often does the file change? What’s the need for speed?

RB avatar

If you have your heart set on EFS, fwiw we usually set EFS as the default storage class after creating new EKS clusters

https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html

but i do not want to take away from Sono’s question. Understanding the problem is more important here

2022-10-10

Balazs Varga avatar
Balazs Varga

what is the most elegant way to work with private hosted zones in an organization? let’s say we have a tool in account A that needs to access resources in account B, where account B uses a private hosted zone. is the following the only way?

• authorize from account B so account A can add the VPC to the hosted zone

• add the VPC to the hosted zone using an account A IAM role

loren avatar

That works. Or a ram share, and route53 resolver rules can be pretty flexible. Depends what you need

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@RB

RB avatar

The way we’ve done it is the way you’ve described in your above 2 bullets. We’ve done it using a couple terraform components.

We haven’t explored doing the RAM share and r53 resolver rules but that should work too.

1
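
For reference, those two steps map onto two CLI calls, run from each account respectively (zone and VPC IDs are placeholders):

# In account B, which owns the private hosted zone: authorize account A's VPC
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0example

# In account A, which owns the VPC: associate it with the zone
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0example
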
Balazs Varga avatar
Balazs Varga

thanks

akhan4u avatar
akhan4u

What are some important parameters for tuning PostgreSQL performance on RDS? Any suggestions based on real implementations?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Nothing really, definitely nothing specific to RDS. What problems are you having?

akhan4u avatar
akhan4u

I want to optimize the Postgres performance, so I’m looking for input on which parameters people usually start tuning, e.g. wal_buffers, work_mem, maintenance_work_mem, and so on

akhan4u avatar
akhan4u

Our workload is both write- and read-intensive. We have a replica set up, but at times there is a huge replica lag
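
For the replica lag specifically, it can help to measure it from inside Postgres on the replica (standard Postgres, not RDS-specific; endpoint and credentials are placeholders):

# How far behind the primary is this replica, as a time interval?
psql -h my-replica-endpoint -U myuser -d mydb \
  -c "SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;"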

Zoltan K avatar
Zoltan K
jfcoz/postgresqltuner

Simple script to analyse your PostgreSQL database configuration, and give tuning advice

1
akhan4u avatar
akhan4u

Thanks @Zoltan K, this looks promising!

2022-10-11

Balazs Varga avatar
Balazs Varga

is there any issue with the Ohio “a” zone? us-east-2a currently

Hugo Samayoa avatar
Hugo Samayoa

Yes, EC2 instance availability

1
Balazs Varga avatar
Balazs Varga

and their status page shows nothing… where can we get accurate status reports?

Hugo Samayoa avatar
Hugo Samayoa

This particular notification was sent to the root account email. Not sure if it was on the AWS Status Page or not.

Balazs Varga avatar
Balazs Varga

thanks

Balazs Varga avatar
Balazs Varga

I did not find any email in my account, but will check it.

2022-10-12

2022-10-13

Balazs Varga avatar
Balazs Varga

I have a Prometheus on cluster A and would like to monitor cluster B. When I create cluster B, I currently modify the configmap and reload Prometheus to be able to monitor the new cluster. Since we are moving to organization-based accounts, I need to do this modification from a subaccount. My idea is to move the configmap to S3 and share it between accounts, so I can modify it from account B without a permission request to cluster A or account A… Do you know how I could detect the S3 modification? I’ve only found mounting the S3 bucket and using inotify… any other, more direct way? :D

Shlomo Daari avatar
Shlomo Daari

Hi, I’m receiving the following error -> Packet for query is too large (5,739,780 > 4,194,304).

When checking the MySQL side, I saw the allowed values are between 1024 and 1073741824. When I try to increase it beyond this limit, it doesn’t let me. Any suggestions?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You need to mention which service you are asking for help about. I guess this is RDS?

You can configure MySQL settings with a parameter group.
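
For the max_allowed_packet error above, that would look roughly like this (hypothetical parameter group name; the value is in bytes and must stay within the 1024-1073741824 range MySQL allows):

# Raise max_allowed_packet to 64 MB; it is a dynamic parameter, so
# ApplyMethod=immediate takes effect without a reboot
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"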

Aritra Banerjee avatar
Aritra Banerjee

Not letting you as in what? Is any particular error shown?

Shlomo Daari avatar
Shlomo Daari

Thank you all, I solved it. I had configured the wrong parameter group; that is why when I increased the value I still had this error

2022-10-14

2022-10-18

Herman Smith avatar
Herman Smith

Couple EKS permission questions:

  1. The IAM user which created the EKS cluster is given special permissions outside of aws-auth. Where can I observe that permission assignment?
  2. What happens when that original IAM user is deleted? Can that even be done? (And if so, without anything else in aws-auth: presumably one completely loses access to the cluster; can it be regained?)
Allan Swanepoel avatar
Allan Swanepoel

Do you know how the cluster was provisioned?

Allan Swanepoel avatar
Allan Swanepoel

you can try checking for an oidc provider

Allan Swanepoel avatar
Allan Swanepoel
oidc_id=$(aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
Allan Swanepoel avatar
Allan Swanepoel

Determine whether an IAM OIDC provider with your cluster’s ID is already in your account.

aws iam list-open-id-connect-providers | grep $oidc_id
Allan Swanepoel avatar
Allan Swanepoel

If output is returned from the previous command, then you already have a provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster.

eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
Herman Smith avatar
Herman Smith

Thanks @Allan Swanepoel - I’ll be able to obtain this information tomorrow morning.

When you say must - must to achieve/allow what?

Allan Swanepoel avatar
Allan Swanepoel

the oidc provider provisions a role you can use to access the cluster

Herman Smith avatar
Herman Smith

@Allan Swanepoel I can see there is no OIDC provider configured

Am I right in thinking that I only need to use an OIDC provider for IRSA (IAM Roles for Service Accounts)? (having looked at AWS docs)

The only immediate requirement is allowing a couple IAM roles to be mapped to a cluster role in aws-auth, so I’m thinking OIDC isn’t immediately required

Herman Smith avatar
Herman Smith

A “must” always alarms me somewhat when I don’t (yet) see it needed (more from the perspective of worrying that I am missing something)

Allan Swanepoel avatar
Allan Swanepoel

let me clarify the Must

Allan Swanepoel avatar
Allan Swanepoel

in order to associate an OIDC provider with your cluster, one MUST exist

Allan Swanepoel avatar
Allan Swanepoel

you can’t associate a non-existent provider with the cluster

Allan Swanepoel avatar
Allan Swanepoel

those steps are to check if an oidc provider has been (1) provisioned in your aws account, and (2) mapped to your cluster

Allan Swanepoel avatar
Allan Swanepoel
Creating an IAM OIDC provider for your cluster - Amazon EKS

Learn how to create an AWS Identity and Access Management OpenID Connect provider for your cluster.

Herman Smith avatar
Herman Smith

I wasn’t specifically looking to associate an OIDC provider, though - that’s what you suggested

Herman Smith avatar
Herman Smith

My current understanding is that I’ll need an OIDC provider once I need to start using IRSA, but for the purposes of allowing another role access to the cluster in the aws-auth configmap, I’ll be fine without
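
For reference, mapping an additional role into aws-auth can also be done with eksctl rather than hand-editing the configmap. A sketch with hypothetical names:

# Map an IAM role to a cluster group in the aws-auth configmap
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-west-2 \
  --arn arn:aws:iam::111122223333:role/ops-role \
  --username ops-role \
  --group system:masters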

Herman Smith avatar
Herman Smith

Assuming that’s correct, and that the cluster creator’s IAM user can be safely deleted after adding a different role to the aws-auth configmap, I think I’m good. Thanks!

Allan Swanepoel avatar
Allan Swanepoel

gotcha - i may have misunderstood your question then - i interpreted it as i have an eks cluster, and i want to give others in my org access to it through an iam role, without messing with config map

2022-10-19

2022-10-20

idan levi avatar
idan levi

Hey all! I need to create a ReadWriteMany volume in my EKS env. I tried with the gp2/gp3 StorageClass but I’m getting this error:

  Warning  ProvisioningFailed    9s (x6 over 76s)     persistentvolume-controller  Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported

Does someone know how to create one?

Allan Swanepoel avatar
Allan Swanepoel

If memory serves, EBS doesn’t support ReadWriteMany; you need to use EFS to get that working

Soren Jensen avatar
Soren Jensen

Correct, you need an EFS volume
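
With the EFS CSI driver installed, a ReadWriteMany-capable StorageClass looks roughly like this (the file system ID is a placeholder); a PVC against it can then request the ReadWriteMany access mode:

# StorageClass for dynamic provisioning via EFS access points
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
EOF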

2022-10-21

Gary Cuga-Moylan avatar
Gary Cuga-Moylan

Hello. Anyone know how to modify an existing S3 policy using the cloudfront-s3-cdn module?

I’m trying to use the cloudfront-s3-cdn module to create two CloudFront distros - pointing at different directories in the same S3 bucket.

I have successfully created the two CF distros, and have them pointing at the correct origins, and can see that the Response Header Policies are working correctly. The problem I am running into is I cannot figure out how to modify the existing S3 policy to allow the second CF distro access.

When I set override_origin_bucket_policy to true and run terraform plan it looks like the existing policy will be wiped out and automatically replaced (which would break the integration between the first CF distro and the bucket).

When I set additional_bucket_policy and run terraform plan it appears to have no effect.

See example code in thread

1
Gary Cuga-Moylan avatar
Gary Cuga-Moylan

Update: I was using the wrong syntax.

You need to do something like this:

data "aws_iam_policy_document" "overrides" {
  statement {
    sid    = "S3GetObjectForCloudFront"

    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = [
        "$${cloudfront_origin_access_identity_iam_arn}"
      ]
    }
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::$${bucket_name}$${origin_path}*"]
  }

  statement {
    sid    = "S3ListBucketForCloudFront"

    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = [
        "$${cloudfront_origin_access_identity_iam_arn}"
      ]
    }
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::$${bucket_name}"]
  }
}

module "cdn" {
  source  = "cloudposse/cloudfront-s3-cdn/aws"
  version = "0.83.0"
  name    = "Policy Overrides Example"

  origin_bucket                      = "foobar"
  origin_path                        = "/bazqux"
  override_origin_bucket_policy      = true
  additional_bucket_policy           = data.aws_iam_policy_document.overrides.json
}

2022-10-24

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Until ECR gets native cache manifest support, the recently launched & experimental S3 cache is worth a shot

Aaron used it to build Carbon (the OSS pretty terminal code screenshot thingie) and the build time went down from 106 seconds to just 11 seconds. Massive improvement! https://twitter.com/aaronbatilo/status/1584233678850269187
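
The buildx invocation looks roughly like this (bucket, region, and cache name are placeholders; the S3 backend is experimental and needs a docker-container buildx driver):

# Read and write layer cache from/to an S3 bucket during the build
docker buildx build \
  --cache-to type=s3,region=us-east-1,bucket=my-cache-bucket,name=my-image,mode=max \
  --cache-from type=s3,region=us-east-1,bucket=my-cache-bucket,name=my-image \
  -t my-image:latest .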

3
Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

(I posted this in #aws and not #github-actions cause folks using other CIs can use the S3 caching too)

Aritra Banerjee avatar
Aritra Banerjee

Hi,

I verified a domain in SES a long time back using a 1024-bit DKIM key. I have now updated the key to 2048-bit. The issue is that out of the three CNAME records, only one shows a key length of 2048 bits, another shows 1024 bits, and another shows empty. The 1024-bit key is flagged by a BitSight report. Any help with this will be appreciated

Mike Robinson avatar
Mike Robinson

I’ve got a weird one. We’re using terraform-aws-eks-cluster (2.3.0), and terraform-aws-eks-iam-role (0.10.3). During an upgrade operation, the cluster wanted to update its OIDC provider thumbprint.

  # module.eks_cluster.aws_iam_openid_connect_provider.default[0] will be updated in-place
  ~ resource "aws_iam_openid_connect_provider" "default" {
        id              = "arn:aws:iam::276255499768:oidc-provider/[REDACTED]
        tags            = {
            "Attributes"  = "cluster"
            "Environment" = "[REDACTED]"
            "Name"        = "[REDACTED]"
        }
      ~ thumbprint_list = [
          - "9e99a48a9960b14926bb7f3b02e22da2b0ab7280",
        ] -> (known after apply)
        # (4 unchanged attributes hidden)
    }

Didn’t seem like a big deal, but plans were failing with the following:

Error: Invalid for_each argument

  on .terraform/modules/dev_services.eks_iam_role/main.tf line 79, in resource "aws_iam_policy" "service_account":
  79:   for_each    = var.aws_iam_policy_document != null ? toset(compact([module.service_account_label.id])) : []

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

It looks like eks-iam-role v1.0.0 might fix the for_each situation, is anyone able to confirm? What’s really got me puzzled is: why is the thumbprint update affecting the iam-role module at all? As far as I can tell, a thumbprint list change doesn’t change the value of eks_cluster_identity_oidc_issuer, which is passed into the module as eks_cluster_oidc_issuer_url
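
In the meantime, the workaround the error message suggests would look like this, using the module address from the plan output above:

# First apply only the OIDC provider change, then run a full apply
terraform apply -target='module.eks_cluster.aws_iam_openid_connect_provider.default[0]'
terraform apply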

2022-10-25

2022-10-26

Herman Smith avatar
Herman Smith

Has anybody observed differences between what IAM Policy Simulator reports as allowed vs what is truly allowed via the CLI? Running via CLI:

aws sts assume-role --role-arn arn:aws:iam::MY_ACCOUNT:role/MY_ROLE --role-session-name test --source-identity MY_SOURCE_IDENTITY

Yields:

An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::MY_ACCOUNT:user/MY_USER is not authorized to perform: sts:SetSourceIdentity on resource: arn:aws:iam::MY_ACCOUNT:role/MY_ROLE

Whilst IAM Policy Simulator, using exactly the same action (SetSourceIdentity) and role resource, as the same IAM user, reports allowed :thinking_face: (A separate AssumeRole action for that same role also shows as allowed in the simulator)

loren avatar

heh. it probably is evaluating only the identity policy. but sts assume-role is actually performing the action, and that also requires that the trust-policy on the role allow the action… and if you are passing the source-identity, then you need to allow sts:SetSourceIdentity in the role trust policy
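
For reference, in Terraform the role’s trust policy needs to allow both actions. A sketch with placeholder principal and names:

data "aws_iam_policy_document" "trust" {
  statement {
    effect  = "Allow"
    # AssumeRole alone is not enough when --source-identity is passed
    actions = ["sts:AssumeRole", "sts:SetSourceIdentity"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111122223333:user/MY_USER"]
    }
  }
}

resource "aws_iam_role" "my_role" {
  name               = "MY_ROLE"
  assume_role_policy = data.aws_iam_policy_document.trust.json
}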

1
Herman Smith avatar
Herman Smith

Right on the money. Well done @loren!

Herman Smith avatar
Herman Smith

I feel like the UI should shout this more clearly (or at all)!

loren avatar

yeah, the interaction between an identity policy and a resource policy (and SCP and KMS policy and VPC Endpoint policy) can get pretty confounding to troubleshoot

2022-10-28

Amrutha Sunkara avatar
Amrutha Sunkara

Hello Folks, is there a terraform module that any of you know of/use to create a tunnel via SSM?
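
Not sure about a dedicated module, but for reference the underlying CLI call such a module would wrap looks like this (instance ID and ports are placeholders; it requires the session-manager-plugin):

# Forward local port 15432 to port 5432 on the instance over SSM
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["5432"],"localPortNumber":["15432"]}'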

vicentemanzano6 avatar
vicentemanzano6

Hello, we are planning to replicate our AWS RDS database into Azure for Disaster Recovery purposes, what would be the best service from AWS or Azure to achieve this task effectively?

2022-10-31

Josh B. avatar
Josh B.

FYI us-east-2 is having network issues

1