#aws (2021-03)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2021-03-01

Troy Taillefer avatar
Troy Taillefer

I am trying to make a Kinesis autoscaler Lambda based on existing code, basically updating the shard count based on an incoming-records alarm metric. During testing I noticed something odd when using AWS CLI commands to get the number of shards. Basically, describe-stream-summary says the OpenShardCount is one, which seems like the right answer, but describe-stream and list-shards report there are 4 shards. Which is correct? Why are they not consistent? Hope there is a Kinesis expert here who can explain what is going on, thanks

Troy Taillefer avatar
Troy Taillefer

I think I understand: the shards are not yet expired and are still readable, but not writable, because of the retention period
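
A quick way to see the difference from the CLI, as a sketch (the stream name is hypothetical): closed shards keep an EndingSequenceNumber while open shards have none, so you can filter list-shards down to the open ones and compare against the summary:

# Count only open shards: closed shards have an EndingSequenceNumber, open ones do not
aws kinesis list-shards --stream-name my-stream \
  | jq '[.Shards[] | select(.SequenceNumberRange.EndingSequenceNumber == null)] | length'

# Should agree with the authoritative open-shard count
aws kinesis describe-stream-summary --stream-name my-stream \
  --query 'StreamDescriptionSummary.OpenShardCount'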

Alex Jurkiewicz avatar
Alex Jurkiewicz

Right. Not all shards are open. Anyway, there are off-the-shelf solutions for auto-scaling Kinesis streams, I would highly recommend using them instead of writing your own: https://aws.amazon.com/blogs/big-data/scaling-amazon-kinesis-data-streams-with-aws-application-auto-scaling/

Scale Amazon Kinesis Data Streams with AWS Application Auto Scaling | Amazon Web Services

Recently, AWS launched a new feature of AWS Application Auto Scaling that let you define scaling policies that automatically add and remove shards to an Amazon Kinesis Data Stream. For more detailed information about this feature, see the Application Auto Scaling GitHub repository. As your streaming information increases, you require a scaling solution to accommodate […]

Troy Taillefer avatar
Troy Taillefer

@Alex Jurkiewicz Thanks, I based my solution on that code https://github.com/aws-samples/aws-application-auto-scaling-kinesis from the article you linked to, but found issues with both the CloudFormation and the Python Lambda code. So I am improving it to make it more production-ready.

aws-samples/aws-application-auto-scaling-kinesis

Leveraging Amazon Application Auto Scaling you have now the possibility to interact to custom resources in order to automatically handle infrastructure or service resize. You will find a demo regar…


2021-03-03

Pavel avatar

i have CF with an S3 origin, the origin has origin_path = “/build”, and CF has its first behavior as “/url/path/*”. I get a “The specified key does not exist” error and the Key ends up being /build/url/path/index.html

Pavel avatar

I can access the files from root of the cdn but not from my path pattern

Pavel avatar

do i have to have the origin folder structure (s3) match my behavior path?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yes

Pavel avatar

this client jenkins s3 plugin does not allow me to do that

Alex Jurkiewicz avatar
Alex Jurkiewicz

You could rewrite the path with a lambda@edge function

Pavel avatar

thats where im at now

Pavel avatar

still need a lambda because if its in the folder it has no idea what to do with index.html

RB avatar

can you folks use threads plz

Pavel avatar

its fine im done

2021-03-04

Bart Coddens avatar
Bart Coddens

I am a bit puzzled by a network issue

Bart Coddens avatar
Bart Coddens

Machine has two firewall groups assigned: outbound = all open, inbound = ssh open from my ip

Bart Coddens avatar
Bart Coddens

I can access ssh from my workstation

Bart Coddens avatar
Bart Coddens

Making a ssh connection FROM the instance does not work

Bart Coddens avatar
Bart Coddens

when I tcpdump my traffic, I can see traffic going out of the machine

Bart Coddens avatar
Bart Coddens

ha I found it, the configuration of the firewall group changed a bit

Maycon Santos avatar
Maycon Santos
New – VPC Reachability Analyzer | Amazon Web Services

With Amazon Virtual Private Cloud (VPC), you can launch a logically isolated customer-specific virtual network on the AWS Cloud. As customers expand their footprint on the cloud and deploy increasingly complex network architectures, it can take longer to resolve network connectivity issues caused by misconfiguration. Today, we are happy to announce VPC Reachability Analyzer, a […]

Bart Coddens avatar
Bart Coddens

ha interesting

Jonathan Le avatar
Jonathan Le

I checked it out recently. Worked pretty well.

Marcin Brański avatar
Marcin Brański

Woooohooo! So simple and now it’s there bananadance I shouldn’t be this happy about it, but every time I set up ELK on AWS (soo many times) I check if it’s available, and here it is: Amazon Elasticsearch Service now supports rollups, reducing storage costs for extended retention*

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can we use this instead of the lambda we have to purge old log indexes?

Marcin Brański avatar
Marcin Brański

Hmm, rollups would be something different: aggregating old data into a new index with lower data resolution. I think you mean the curator lambda. Recently they also introduced Index State Management (ISM). I haven’t used it, but it seems possible with that, although it’s not as robust as curator. This policy from the docs removes replicas and later removes the index after 21d:

{
  "policy": {
    "description": "Changes replica count and deletes.",
    "schema_version": 1,
    "default_state": "current",
    "states": [{
        "name": "current",
        "actions": [],
        "transitions": [{
          "state_name": "old",
          "conditions": {
            "min_index_age": "7d"
          }
        }]
      },
      {
        "name": "old",
        "actions": [{
          "replica_count": {
            "number_of_replicas": 0
          }
        }],
        "transitions": [{
          "state_name": "delete",
          "conditions": {
            "min_index_age": "21d"
          }
        }]
      },
      {
        "name": "delete",
        "actions": [{
          "delete": {}
        }],
        "transitions": []
      }
    ]
  }
}

2021-03-05

Takan avatar

hi guys, does anyone know how to create “trusted advisor” in terraform?

Mohammed Yahya avatar
Mohammed Yahya

see this https://github.com/aws/Trusted-Advisor-Tools and implement it using Terraform, you can create a module and publish it also

aws/Trusted-Advisor-Tools

The sample functions provided help to automate AWS Trusted Advisor best practices using Amazon Cloudwatch events and AWS Lambda. - aws/Trusted-Advisor-Tools

Mohammed Yahya avatar
Mohammed Yahya

you need to define these in Terraform
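
As a sketch of the event-driven piece those samples wire up (rule name and Lambda ARN are hypothetical; Trusted Advisor emits its CloudWatch Events in us-east-1 only):

# Route Trusted Advisor check refresh events to a Lambda
# (rule name and function ARN are hypothetical)
aws events put-rule \
  --region us-east-1 \
  --name trusted-advisor-checks \
  --event-pattern '{"source":["aws.trustedadvisor"],"detail-type":["Trusted Advisor Check Item Refresh Notification"]}'

aws events put-targets \
  --region us-east-1 \
  --rule trusted-advisor-checks \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:ta-remediation'

The same two pieces map to aws_cloudwatch_event_rule and aws_cloudwatch_event_target resources in Terraform.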

Mohammed Yahya avatar
Mohammed Yahya

FYI Trusted Advisor is not supported as a resource in Terraform

Takan avatar

thanks a lot for your help bro!

Takan avatar

can we upgrade the version of CloudFront’s security policy in terraform?

Ofir Rabanian avatar
Ofir Rabanian

Hi everyone, let’s say that I have a Terraform setup with an RDS instance. After a while, I want to restore to a given point in time through a snapshot that I’m creating every day. Given that AWS limits restoring the snapshot to a NEW instance, how can I still control this new instance using Terraform? What’s the correct process to have here?

jose.amengual avatar
jose.amengual

you can use the cloudposse terraform-aws-rds-cluster module and just create a clone cluster from a snapshot

jose.amengual avatar
jose.amengual

or the rds instance module

jose.amengual avatar
jose.amengual

and then you switch endpoints in your app, or use Route 53 records that are CNAMEs to the real endpoints

jose.amengual avatar
jose.amengual

or use rds proxy in front and change the endpoints of the proxy to point to the new instance/cluster

Pavel avatar

We have a task that saves the db to a secure S3 bucket so we don’t have to rely on those rules. Once you set it up, its really not that hard to maintain.

Pavel avatar

(in sql format)

Alex Jurkiewicz avatar
Alex Jurkiewicz

Restore manually and then import the resource
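
A sketch of that flow, assuming a point-in-time restore (all identifiers and the timestamp are hypothetical):

# Restore to a NEW instance, then adopt it into Terraform state
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier prod-db \
  --target-db-instance-identifier prod-db-restored \
  --restore-time 2021-03-04T03:00:00Z

# Swap the state entry over to the restored instance
terraform state rm aws_db_instance.main
terraform import aws_db_instance.main prod-db-restored

You would still need to update the configured identifier (and anything derived from it) so the next plan comes out clean.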

Ofir Rabanian avatar
Ofir Rabanian

Is there a guide online on maintaining such a database in a production environment?

Ofir Rabanian avatar
Ofir Rabanian

I feel that saving to s3 can cause data failures, depending on what happened during the backup/restore in the db

Ofir Rabanian avatar
Ofir Rabanian

Like, let’s say that you get 24/7 traffic consistently, hundreds of operations a second

Ofir Rabanian avatar
Ofir Rabanian

How can I restore to a point in time without losing info?

Pavel avatar

it should do it in a transaction

Pavel avatar

and notify if it fails

Pavel avatar

i dunno, it’s based on use case and also you gotta weigh convenience (s3) over reliability (images)

Pavel avatar

we maintain a large production application for a very large car company and we save db backups to s3

Pavel avatar

never had issues

Ofir Rabanian avatar
Ofir Rabanian

And in order to restore the db in place, you delete everything and pg_restore it?

jose.amengual avatar
jose.amengual

have you tried a clone?

jose.amengual avatar
jose.amengual

a 600 GB db takes about 5 min to clone

jose.amengual avatar
jose.amengual

in Aurora

jose.amengual avatar
jose.amengual

it is pretty fast, now from a snapshot takes longer

Ofir Rabanian avatar
Ofir Rabanian

What i’m thinking about is the mutations that happen during this kind of backup. Where do they go? Assuming that I dont have a hot backup or some complicated setup

jose.amengual avatar
jose.amengual

what do you mean? when a snapshot is issued any new transaction after the snapshot is not recorded in the snapshot

Ofir Rabanian avatar
Ofir Rabanian

Maybe I missed on the “clone” part. Is that a feature of rds?

Alex Jurkiewicz avatar
Alex Jurkiewicz

The snapshot is an instant in time from when the snapshot started. If your snapshot starts at 10am it will be a copy of your database as of 10am. Even if the snapshot takes 15 mins to create, there will be no data from after 10am

Ofir Rabanian avatar
Ofir Rabanian

@Alex Jurkiewicz thats also the case for a pg_dump?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sure but I wouldn’t use that for a real database. The restore time is unworkably slow

Ofir Rabanian avatar
Ofir Rabanian

On the same note, is there a good reason to use Aurora with PostgreSQL compatibility over plain RDS?

Pavel avatar

im curious, what are you trying to guard against on a production database, having these backups run so often?

Ofir Rabanian avatar
Ofir Rabanian

Given a single instance setup, if there’s an issue that requires restoring a backup, the data between the last snapshot and the current time will be lost. The solution is obviously to use some sort of cluster, but I’m trying to see all options in advance

jose.amengual avatar
jose.amengual

aurora storage layer is the magic behind aurora and is VERY fast

jose.amengual avatar
jose.amengual

if you need replication of transactions then you need a cluster and a replica cluster

Ofir Rabanian avatar
Ofir Rabanian

Maybe aurora solves it for me.. seems like it stores data on s3 and enables in place restore to a point in time

Alex Jurkiewicz avatar
Alex Jurkiewicz

Most companies/products find the risk of losing some data between periodic backups to be an acceptable trade-off. I’m not saying your product is also like this. But if you are looking for higher availability/disaster recovery guarantees, it is going to cost you a lot, in both time and operational complexity. I suggest you consider carefully how important going above and beyond the standard tooling is for your product.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Also, if you are a company with these higher than normal requirements, you would have an Amazon account rep, and they would be very happy to organise you many presentations from the RDS team about all the many ways to give them more money. You should take advantage of that

Ofir Rabanian avatar
Ofir Rabanian

Might as well just use https://litestream.io huh

Litestream

Litestream is an open-source, real-time streaming replication tool that lets you safely run SQLite applications on a single node.

Ofir Rabanian avatar
Ofir Rabanian

I love the simplicity behind it

RB avatar

shamelessly asking for upvotes here https://github.com/99designs/aws-vault/pull/740

tldr, we figured out a way to plist the aws-vault --server https://gist.github.com/nitrocode/cd864db74a29ea52c7b36977573d01cb

Allow --no-daemonize for the ec2 metadata server by nitrocode · Pull Request #740 · 99designs/aws-vault

Closes #735 Thanks to @myoung34 for most of the help in adding the --no-daemonize switch. This allows the --server to be nohupped. $ make aws-vault-darwin-amd64 $ nohup ./aws-vault-darwin-amd64 \ …

MrAtheist avatar
MrAtheist

Anyone know why AWS doesn’t have a default IAM policy for “ecs read only”? i have to create one just for this… ¯_(ツ)_/¯

2021-03-07

msharma24 avatar
msharma24

Hi Guys - Is there a way to get the AWS Organization ID (unique identifier) via the AWS CLI / API?

msharma24 avatar
msharma24

Found the answer: aws organizations list-accounts — the ARN key has the org id
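
There is also a one-liner that returns the ID directly, without listing accounts:

aws organizations describe-organization \
  --query 'Organization.Id' --output text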


2021-03-08

michaelssingh avatar
michaelssingh

I am running into some difficulties with provisioning a Windows EC2 instance with a PowerShell script, which is passed into an aws_launch_configuration as such:

user_data_base64 = base64encode(data.template_file.powershell.rendered)

The script is also quite simple: it downloads an .exe from the internet and then starts a silent install with Start-Process:

<powershell>
Start-BitsTransfer -Source ...
Start-Process ..
</powershell>

This is my first time working with PowerShell and provisioning Windows EC2s, so I may be missing something, but when I RDP into the machine the executable is neither downloaded nor installed.

michaelssingh avatar
michaelssingh

If I paste the contents of the PowerShell script into PowerShell on the instance, however, it works as expected.

2021-03-10

Patrick Jahns avatar
Patrick Jahns

Does anyone have a list of common dns names for aws services? I am trying to get a feeling for their patterns

Issif avatar
Service endpoints and quotas - AWS General Reference

See the service endpoints and default quotas (formerly known as limits) for AWS services.

Patrick Jahns avatar
Patrick Jahns

That’s already quite useful - thank you! However, I was actually wondering about the DNS entries for the instances that customers receive, i.e. for RDS or MSK etc.

Alex Jurkiewicz avatar
Alex Jurkiewicz

the hostnames for resources are all highly service-specific. There are no patterns, really

Patrick Jahns avatar
Patrick Jahns

Do you know of an overview list in general? Working on some DNS naming schemes for a service and was thinking of getting inspired by AWS

Issif avatar

I don’t think they provide a list

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can anyone help with this please? It looks like RDS has partially completed the upgrade from 5.6 to 5.7

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way to force the pending modifications now instead of waiting for the maintenance window?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes, ‘apply immediately’

Alex Jurkiewicz avatar
Alex Jurkiewicz

it’s an option you can pass when modifying an rds instance/cluster. Either via web console or api

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to set that now

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

but it won’t let me set it

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i can see the pending modifications via the API but can’t seem to apply them

Alex Jurkiewicz avatar
Alex Jurkiewicz

are you passing apply immediately and a change to the config? You can’t pass only ‘apply immediately’ with no changes

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i don’t want to make any changes though i want it to apply the pending mods e.g. upgrade to 5.7

Alex Jurkiewicz avatar
Alex Jurkiewicz

you need to re-submit the pending modification with apply_immediately set to true

Alex Jurkiewicz avatar
Alex Jurkiewicz

when you submit a change with that flag, all pending modifications are immediately applied
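
For example, re-submitting a pending engine upgrade with the flag set (identifier, version, and parameter group name are hypothetical; note that with a non-default parameter group you also have to re-specify one for the new engine family, as the error below shows):

# Re-submit the pending change with --apply-immediately
# (identifier, engine version, and parameter group are hypothetical)
aws rds modify-db-instance \
  --db-instance-identifier my-instance \
  --engine-version 5.7.33 \
  --db-parameter-group-name my-mysql57-params \
  --apply-immediately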

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: Current Parameter Group (de-prd-yellowfin-01-20210219152210698600000003) is non-default. You need to explicitly specify a new Parameter Group in this case (default or custom)
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

think i may have fixed it

jose.amengual avatar
jose.amengual

Has anyone enforced IMDSv2 on their instances and had problems with cloud-init not starting?

MattyB avatar

I think a co-worker just hit this

MattyB avatar

Has a fix in, similar to this, I believe

jose.amengual avatar
jose.amengual

the doc does not describe how cloud-init will deal with the generation of the token, which is the problem

jose.amengual avatar
jose.amengual

in my user data I can modify the script and add those calls

jose.amengual avatar
jose.amengual

but when I was testing it was cloud-init without user-data complaining about it

jose.amengual avatar
jose.amengual

as you can see here :

TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/user-data
jose.amengual avatar
jose.amengual

they call the api to get the user-data and the user data does not have the token call

jose.amengual avatar
jose.amengual

so it is pretty damn confusing

2021-03-11

2021-03-14

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

FYI, https://pages.awscloud.com/pi-week-2021 is happening! It will be a fun one: a bunch of S3 + data in general + some serverless

AWS Pi Week 2021

Register Now

2021-03-15

Alex Jurkiewicz avatar
Alex Jurkiewicz

I want to host a single file over HTTPS with a custom domain. Is there a simpler solution than S3 bucket + CloudFront + ACM cert? Simpler meaning serverless, no ec2 + nginx in user-data solutions

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Amplify Console which is basically S3+CF+ACM+CI/CD+others? It’s easier to manage, but no Terraform support yet

AWS Amplify - Static Web Hosting - Amazon Web Services

AWS Amplify offers a fully managed static web hosting service that accelerates your application release cycle by providing a simple CI/CD workflow for building and deploying web applications.

pjaudiomv avatar
pjaudiomv

github pages

pjaudiomv avatar
pjaudiomv

as far as aws its probably s3 or ec2 like you mentioned

pjaudiomv avatar
pjaudiomv

you could use a lambda with alb too

roth.andy avatar
roth.andy

GitHub Pages is what I’d go for at this point. I’ve used Netlify as well, it worked really well and was free (but I still prefer GitHub Pages)

Zach avatar

someone appears to have published and then retracted this post (but it popped up in my AWS News RSS), so I think we’re going to see a Fargate exec tool soon! https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-exec[…]commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/

Zach avatar
NEW – Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2 | Amazon Web Services

Today, we are announcing the ability for all Amazon ECS users including developers and operators to “exec” into a container running inside a task deployed on either Amazon EC2 or AWS Fargate. This new functionality, dubbed ECS Exec, allows users to either run an interactive shell or a single command against a container. This was one of […]

Marcin Brański avatar
Marcin Brański

wow, that’s a neat feature for debugging ecs


2021-03-16

walicolc avatar
walicolc

Interesting, ended up git cloning requests when virtualenv didn’t work. Anyone encountered this before?

[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'requests'END RequestId: 7ed24ad6-1b95-4600-9a35-d379726f6b47

Alex Jurkiewicz avatar
Alex Jurkiewicz

Your code package didn’t include requests

walicolc avatar
walicolc

I did install it via pip in the virtualenv

walicolc avatar
walicolc

Followed by a pip freeze to generate the requirements file

Alex Jurkiewicz avatar
Alex Jurkiewicz

How did you upload your code to lambda? Directly as a zip?

walicolc avatar
walicolc

zip -r9 fuckingWork.zip .

walicolc avatar
walicolc

aws s3 cp fuckingWork.zip s3://bucketName

walicolc avatar
walicolc

Like dat

Alex Jurkiewicz avatar
Alex Jurkiewicz

Run zipinfo on the zip file and verify it contains requests
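
A minimal sketch of both the check and the usual fix (the zip name is from the thread; requirements.txt is assumed): pip inside a virtualenv installs into the venv’s site-packages, which zipping the project directory never picks up, so the dependencies have to be vendored into the package directory itself:

# Verify the package actually contains the dependency
zipinfo fuckingWork.zip | grep "requests/" || echo "requests is missing"

# The usual fix: install deps into the package dir, then re-zip
pip install -r requirements.txt -t .
zip -r9 fuckingWork.zip .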

walicolc avatar
walicolc

Fuuuuck nice catch man, I’m away rn will do tomo

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can also extract the zip in a clean docker image for python and see if “import requests” works

walicolc avatar
walicolc

I’ll go with zipinfo, it’s cleaner

walicolc avatar
walicolc

thanks my man!

Maciek Strömich avatar
Maciek Strömich

FYI, previously requests was available via the botocore.vendored package. It was deprecated in January and removed https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/

Upcoming changes to the Python SDK in AWS Lambda | Amazon Web Services

Update (January 19, 2021): The deprecation date for the Lambda service to bundle the requests module in the AWS SDK is now March 31, 2021. Update (November 23, 2020): For customers using inline code in AWS CloudFormation templates that include the cfn-response module, we have recently removed this module’s dependency on botocore.requests. Customers will need […]

walicolc avatar
walicolc

Cheers man, i came across this whilst debugging. I’m not using this module in particular, so dismissing it was easy

walicolc avatar
walicolc

Turned out it was just a recursive issue with the zipped file. Thx all.

2021-03-17

Mohammed Yahya avatar
Mohammed Yahya
salesforce/cloudsplaining

Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. - salesforce/cloudsplaining

maarten avatar
maarten

new manager with retail background I guess

loren avatar

does anyone write their lambdas such that they understand a common, fake “test event”? such that you can invoke it with that event just to validate that the package really has the imports it needs?

Santiago Campuzano avatar
Santiago Campuzano

@loren what do you mean by fake event?

Santiago Campuzano avatar
Santiago Campuzano

You just need to create an empty test event/message

Santiago Campuzano avatar
Santiago Campuzano

And use it as a test event

loren avatar

something like:

{
  "test_event": "this is a test"
}
Santiago Campuzano avatar
Santiago Campuzano

Right…

loren avatar

yeah, i know how… i’m wondering if it’s a pattern others are using or contemplating

RB avatar

I use that for cloud custodian lambdas

Santiago Campuzano avatar
Santiago Campuzano

In my particular case, I prefer using real events/with real valid payloads

loren avatar

the lambda would check that key and if present run some simple test logic or just return

loren avatar

i also prefer real events, but we have some lambdas where the function makes an aws call using the value from the event. that value is dynamic, and not persistent. for example, at the organization level, a CreateAccountRequest event is generated when a new account is created. i can’t use a “real” event, or i end up doing “real” things to “real” accounts. and i can’t fake the CreateAccountRequest because then the lambda cannot actually get the CreateAccountRequest status

Santiago Campuzano avatar
Santiago Campuzano

@loren Your lambda functions should be idempotent, meaning that if you execute the same lambda function several times with the same payload, you should get the same result

loren avatar

if only life were so simple

loren avatar

that CreateAccountRequest actually disappears after some time, so we can be idempotent for a while, but eventually the event itself becomes invalid

loren avatar

we do have a valid-ish payload, with just fake data, and currently we catch the exception in the test. if the lambda gets that far, we know the package is good. and we do unit tests on the code so we’re reasonably confident about the code behavior

Santiago Campuzano avatar
Santiago Campuzano

Ok… that makes sense

loren avatar

but having valid-ish payloads for every event is a real pain to discover and doesn’t scale to hundreds of functions, when the thing i most care about is just validating that the package is actually good

loren avatar

so i was thinking, if i modify every lambda to understand this “fake” test event, and use that to validate the package, i can apply the same test to every lambda

loren avatar

and i can enforce that the lambda understands the test event by running that test for every lambda in CI with localstack

loren avatar

@RB i’m interested in hearing more about your experience with this pattern

RB avatar

i just use a generic test event. the json input doesn’t matter with cloud custodian lambdas since they trigger on a cloudwatch cron. so i just use any json to kick off the lambda and check the output to make sure it didn’t throw an error

maarten avatar
maarten

I personally think you should take care of this in the build pipeline.

loren avatar

@maarten can you expand? we are running the tests in the build pipeline…

maarten avatar
maarten

right, i meant simply running node_with_version_x index.js, which would find bad imports and doesn’t execute anything. And otherwise I’m thinking of the serverless toolset to invoke locally, or better even, https://www.serverless.com/blog/unit-testing-nodejs-serverless-jest

Unit testing for Node.js Serverless projects with Jest

Create unit tests for Node.js using the Serverless Framework, run tests on CI, and check off our list of serverless testing best practices.

loren avatar

yeah, this is terraform, so i’m using localstack to mock the aws endpoints… and configuring the provider to use the localstack endpoints

loren avatar

run terraform apply, invoke the lambda, inspect the result to determine pass/fail
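
A sketch of that smoke test (function name is hypothetical; 4566 is localstack’s default edge port, and --cli-binary-format is only needed on AWS CLI v2):

# Invoke the deployed function with the shared fake test event
aws --endpoint-url http://localhost:4566 lambda invoke \
  --function-name my-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"test_event": "this is a test"}' \
  response.json

# An import error or handler exception shows up as an errorMessage
grep -q '"errorMessage"' response.json && echo "package is broken" && exit 1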

kalyan M avatar
kalyan M

Hi guys, what are some of the top must-use tools for managing Kubernetes on AWS EKS or other clusters? Any recommendations on best practices?

2021-03-18

Maciek Strömich avatar
Maciek Strömich

Anyone using Cognito’s MFA functionality? How do you block the ability to disable a previous MFA setup by calling associate software token again and again? This call can be sent by anyone (it just requires a valid access token), and if it’s sent a second time it automatically overrides the previous setup and disables MFA on login.

maarten avatar
maarten

let me know what support says :)

Maciek Strömich avatar
Maciek Strömich

It’s not a bug, it’s a feature

maarten avatar
maarten

You should try to get min. TLS1.2 on cognito :’-)

RB avatar

the author provided some feedback against it. if anyone is interested in daemonizing aws-vault using launchd, please leave some feedback.

aws-vault: Start metadata server without subshell (non-daemonized)

Start metadata server without subshell (non-daemonized) · Issue #735 · 99designs/aws-vault

I am using the latest release of AWS Vault $ aws-vault --version I have provided my .aws/config (redacted if necessary) [profile sso_engineer] sso_role_name = snip_engineer sso_start_url = https://…

jose.amengual avatar
jose.amengual

interesting point of view, a bit closed-minded


Ikana avatar

Is it possible to contract cloudposse’s services through an AWS marketplace private offer?

Ikana avatar

Sorry for the spam but I feel this is relevant to you folk @Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s really interesting. We haven’t pursued it yet.

Matt Gowie avatar
Matt Gowie

Hey folks — Is anyone using an external security SaaS product like Fugue or another to replace AWS Config / Security Hub? Our AWS account rep is suggesting we utilize https://www.fugue.co/ and I’d be interested in hearing folks’ thoughts.

Cloud Security & Compliance for Engineers | Fugue

Fugue puts engineers in command of cloud security and compliance.

jose.amengual avatar
jose.amengual

I played with them before; they partnered with Sonatype to create an IaC offering to check TF code


jose.amengual avatar
jose.amengual

Fugue created Regula, and they have some ML/engine to check policies and such, and offer IAM management too

jose.amengual avatar
jose.amengual

recently I have been using Cloud Conformity from Trend Micro

jose.amengual avatar
jose.amengual

they all feel similar in what they do and give reports on

jose.amengual avatar
jose.amengual

do they have any value? I do not know. I do not think they add much, and over time Security Hub (Inspector, Config, GuardDuty) is going to eat them alive I think

jose.amengual avatar
jose.amengual

that is the amazon way

Matt Gowie avatar
Matt Gowie

Got it — Thanks for the perspective Pepe. I’m interested because it looks a bit daunting to implement all those tools: Inspector, Config, GD. And if I can skip that for a slight premium… then that’s of interest.

Zach avatar

we’re using a managed/bundled version of Prisma Cloud, which is similar I guess to Fugue (from a cursory 5 second google)

Zach avatar

primary annoyance is that their rules seem based around using AWS strictly as some sort of internal business network replacement and not running a product

jose.amengual avatar
jose.amengual

one thing to keep in mind is that all the remediation rules/configs you will need to implement to resolve the findings are going to be 80% of the work of setting up Config/CloudWatch/GuardDuty etc., so don’t fool yourself into thinking it’s going to be less work

jose.amengual avatar
jose.amengual

most of these products require Config enabled, etc.

jose.amengual avatar
jose.amengual

you will have a warning that will say “Enable GuardDuty”…

Zach avatar

haha yes one of the findings I keep suppressing is “enable config recording for all resources”

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I may be extreme in my opinion here, but I honestly think the majority of the focus should go towards IaC scanning. Whether it’s Fugue/checkov/tfsec/Cloudrail, the future is in IaC.

The reason is that even when you find something in your live environment, through Fugue, Prisma, Dome9 or AWS’s own tools, no one on your dev team will want to fix it. So you’ll have a nice JIRA ticket sitting there but not moving.

jose.amengual avatar
jose.amengual

and that is why shift-left security is pushing this to change

jose.amengual avatar
jose.amengual

VCs are realizing that there can be billions in fines for bad code and bad security practices

jose.amengual avatar
jose.amengual

remember, the Equifax fix was like 15 lines of code and one hour of work

jose.amengual avatar
jose.amengual

(it could be even less lines I think)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

If it’s caught during development, it’s one hour. If caught in a vuln scanning in prod…

jose.amengual avatar
jose.amengual

exactly, that is why the sec scanning of code and infra should happen at build time (left side)

Alex Jurkiewicz avatar
Alex Jurkiewicz

We are trialling Lacework at the moment. It’s quite a heavy solution and very far “to the right”, e.g. it runs in your prod account and picks up errors post-deploy. But the coverage is very comprehensive. Not sure if I’d recommend it or not yet

Or Azarzar avatar
Or Azarzar

https://lightspin.io

check us out, you can reach out in a dm if you want more info, i’m the CTO

Lightspin Contextual Cloud Security Solution

Lightspin is a contextual cloud security platform that continuously visualizes, detects, prioritizes, and prevents any threat to your cloud stack

MattyB avatar

How does everyone handle MFA for root credentials for your AWS accounts (or whatever)? Someone had the idea to just use an OTP device and store it in the safe, but that will take 2h+ for anyone local, and if you’re in another state then you’re screwed. A workaround would be to just open a case with Amazon to reset MFA, which we’re fine with. Search wasn’t super helpful… help, por favor!

Santiago Campuzano avatar
Santiago Campuzano

We have the QR code for MFA stored in LastPass

Santiago Campuzano avatar
Santiago Campuzano

That simple … a few people have access to that QR code

MattyB avatar

LOL of course something that simple would work…thanks

Santiago Campuzano avatar
Santiago Campuzano

Seems like you’d love to involve the CIA/NSA/FBI and the S.H.I.E.L.D agents to safeguard the QR code

MattyB avatar

New to this team but I’ll be sure to find the tinfoil hat guy. There’s always at least one.

Santiago Campuzano avatar
Santiago Campuzano

LOL

Zach avatar

we have h/w tokens at the moment but due to the shift to remote are going to move them to ‘software tokens’ in a password store service

MattyB avatar

right on, we’re using hashicorp vault for the password part, trying to figure out the second factor https://aws.amazon.com/iam/features/mfa/?audit=2019q1

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can use 1password as an OTP generator by providing it a QR code. Then you can share that OTP generator with your teammates in a shared vault

MattyB avatar

Ahhh, you can also just grab a physical code from AWS directly… Thanks guys. This was much simpler than I realized it would be. I didn’t know there would be so many options.

2021-03-19

Darren Cunningham avatar
Darren Cunningham

Anybody using Datadog and figured out a way to consolidate API Gateway logs? Currently each request creates 28 log messages. Creating 28 million log messages per million requests is silly. Not a ghastly expense, but one that I’d like to mitigate.

Darren Cunningham avatar
Darren Cunningham

I created a support request too. I’ll update the thread for those that might be interested.

loren avatar
The Missing Guide to AWS API Gateway Access Logs

Learn the what, why, and how of API Gateway access logs.

Darren Cunningham avatar
Darren Cunningham

I have not, thank you

Darren Cunningham avatar
Darren Cunningham


XML: Who hurt you?

Darren Cunningham avatar
Darren Cunningham

thanks @loren - I updated my API Gateway and have the desired result now in my Datadog Logs view

Darren Cunningham avatar
Darren Cunningham
deployOptions: {
                loggingLevel: apigateway.MethodLoggingLevel.OFF,
                accessLogDestination: new apigateway.LogGroupLogDestination(logGroup),
                accessLogFormat: apigateway.AccessLogFormat.custom(`{"requestTime":"${apigateway.AccessLogField.contextRequestTime()}","requestId":"${apigateway.AccessLogField.contextRequestId()}","httpMethod":"${apigateway.AccessLogField.contextHttpMethod()}","path":"${apigateway.AccessLogField.contextPath()}","resourcePath":"${apigateway.AccessLogField.contextResourcePath()}","status":${apigateway.AccessLogField.contextStatus()},"responseLatency":${apigateway.AccessLogField.contextResponseLatency()}, "traceId": "${apigateway.AccessLogField.contextXrayTraceId()}"}`),
                dataTraceEnabled: false,
                tracingEnabled: true,
                metricsEnabled: true,
            }
loren avatar

very nice! yeah, alex debrie writes some of the best posts on this stuff. definitely my goto when i’m scratching my head on how it works

mikesew avatar
mikesew

Just curious if anybody has tried to do visualizations of AWS regions in something like PowerBI or Grafana (or the AWS analog, QuickSight?). PowerBI mentions ShapeMaps, but they need something called a shapefile or TopoJSON. Anybody tried this before?

MattyB avatar

I’m not sure I’m following when you say visualizations of AWS regions - do you mean map out AWS resources for individual regions given a data set? I used CloudMapper over a year ago just to get an overview, I’m not sure if it meets your use case. https://github.com/duo-labs/cloudmapper

duo-labs/cloudmapper

CloudMapper helps you analyze your Amazon Web Services (AWS) environments. - duo-labs/cloudmapper

mikesew avatar
mikesew

actually simpler than that - I don’t really need resource listings at all. I already have a table of items + the regions they’re in (ap-east-1, us-east-1, ca-central-1, etc.), but if I plug those items into a PowerBI map visual, it doesn’t give out much useful information. I’m hoping somebody has gone ahead and generated a simplified globe with the various regions or zones out there.

sheldonh avatar
sheldonh

Pretty sure you’d need to get something like zip codes or similar to then map to a specific location on powerbi if the geographic stuff requires that. Eu-west-1 would need to be mapped to something for powerbi

mikesew avatar
mikesew

Thanks, that makes sense. I can certainly go about setting this up - just was curious if there was already a mapShaper file out there that somebody’s already done. =]

2021-03-20

2021-03-21

2021-03-22

Matt Gowie avatar
Matt Gowie

Anyone experiencing DNS issues with the AWS Console today?

Victor Grenu avatar
Victor Grenu

We are not using Console - Nope, no DNS issues in europe.

Matt Gowie avatar
Matt Gowie

Hahah try not to use it as well, except when you need screenshots for SOC2 compliance purposes.

Matt Gowie avatar
Matt Gowie

I’m oddly getting DNS_PROBE_FINISHED_NXDOMAIN for both signin.aws.amazon.com AND status.aws.amazon.com.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Same error here in Romania, using default ISP DNS, Google, and Cloudflare DNS

Matt Gowie avatar
Matt Gowie

Ah great — Thankful I’m not alone!

mikesew avatar
mikesew

stupid CLI question: I’m creating an SSH key and want to tag my resources in the same command. I do this with the --tag-specifications flag.

aws ec2 create-key-pair  \
  --key-name bastion-ssh-key  \
  --tag-specifications 'ResourceType=key-pair,Tags=[{Key=deployment:environment,Value=sbx},{Key=business:steward,[email protected]},{Key=security:compliance,Value=none}]'
. . . 

How can I split the tags into multiple lines, one per tag? I’ve tried a few different ways and the CLI keeps complaining at me. Seems you have to put this on a single line. Not opposed, but this just makes it unreadable.

managedkaos avatar
managedkaos

@mikesew you may want to consider creating a json template that you can import into the command, it will make things much neater!

$ aws ec2 create-key-pair --generate-cli-skeleton
{
    "KeyName": "",
    "DryRun": true,
    "TagSpecifications": [
        {
            "ResourceType": "snapshot",
            "Tags": [
                {
                    "Key": "",
                    "Value": ""
                }
            ]
        }
    ]
}
mikesew avatar
mikesew

fair enough. Was hoping to keep it as a one-liner but I see what you mean.

managedkaos avatar
managedkaos

then:

aws ec2 create-key-pair --cli-input-json FILE
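
Putting the two together, a sketch (file name is hypothetical; tag values are from the original one-liner, and the DryRun key from the skeleton is dropped so the call actually runs):

# Write the skeleton-derived input, then point the CLI at it with file://
cat > key-pair.json <<'EOF'
{
    "KeyName": "bastion-ssh-key",
    "TagSpecifications": [
        {
            "ResourceType": "key-pair",
            "Tags": [
                {"Key": "deployment:environment", "Value": "sbx"},
                {"Key": "security:compliance", "Value": "none"}
            ]
        }
    ]
}
EOF

aws ec2 create-key-pair --cli-input-json file://key-pair.json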

2021-03-24

Darren Cunningham avatar
Darren Cunningham

Anybody have an example of a WAF v2 rule that blocks requests using the http protocol? I’m figuring that I’m looking for SingleHeader, but I’m not sure if I should be looking for protocol, http.protocol, or X-Forwarded-Proto, or if I’m totally off base

Matt Gowie avatar
Matt Gowie

Sorry to ask the dumb question when I’m sure you have already thought about it, but you can’t do that at your LB layer by redirecting or not opening up port 80?

Darren Cunningham avatar
Darren Cunningham

I want it to redirect by default, but I want to drop non-secure requests to specifically an authorization endpoint

Darren Cunningham avatar
Darren Cunningham

you’re good to ask the question, assume nothing

Matt Gowie avatar
Matt Gowie

Gotcha. I believe WAF is typically the first in the chain, so I would assume you wouldn’t want X-Forwarded-Proto.

This might be one where you need to set up rules for all 3 options and make them COUNT instead of BLOCK and then watch your metrics.

Darren Cunningham avatar
Darren Cunningham

I mean the Rule itself is not that hard, it’s just figuring out if there is a header that I can use for protocol otherwise I’m just going to use Origin BEGINS_WITH http:// && Host EQUALS <xxx> && Path EQUALS /auth

Darren Cunningham avatar
Darren Cunningham

I put a support request in, but was hoping somebody might have run into this

Matt Gowie avatar
Matt Gowie

Ah gotcha. Yeah, I’m not 100% sure. Support should be able to figure that out for you or you can try a few things.

Darren Cunningham avatar
Darren Cunningham

yeah, it’s not a high priority issue so might as well give them a shot rather than keep guessing headers…which I’ll do if I have to

Matt Gowie avatar
Matt Gowie

Yeah this

Darren Cunningham avatar
Darren Cunningham

According to AWS Support blocking by protocol can’t be done at the WAF and should be done at the ALB - so back to your original suggestion. Kinda sucks because of the rule limit, but makes sense.

Matt Gowie avatar
Matt Gowie

Ah that sucks, but at least you know the path forward.

2021-03-25

Mohammed Yahya avatar
Mohammed Yahya
Introducing AWS SSO support in the AWS Toolkit for VS Code | Amazon Web Services

With the latest release, you can get connected with AWS SSO in the AWS Toolkit for VS Code. To get started you will need the following prerequisites: Configured single sign-on by enabling AWS SSO, managing your identity source, and assigning SSO access to AWS accounts. For more, information see Using AWS SSO Credentials docs as […]

2
RB avatar

With aws-vault’s metadata service, I’m not sure how useful this toolkit is in this context


Mohammed Yahya avatar
Mohammed Yahya

I love this tool for quickly checking CloudWatch logs from VSCode, a killer feature for me

RB avatar

I’ll give it a go then! Thanks for sharing

Mohammed Yahya avatar
Mohammed Yahya

AWS gurus: how can I read secrets from AWS Secrets Manager inside an EKS pod?

Mohammed Yahya avatar
Mohammed Yahya

or parameter store

Mohammed Yahya avatar
Mohammed Yahya

the client wants the secret encrypted with a KMS key; they don’t want to use Vault or SecretHub

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Why not use the external secrets operator?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
external-secrets/kubernetes-external-secrets

Integrate external secret management systems with Kubernetes - external-secrets/kubernetes-external-secrets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(It was made by GoDaddy)

Mohammed Yahya avatar
Mohammed Yahya

Thanks, will take a look. Are the secrets created by this external operator encrypted?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, if you enable encryption at the EKS layer, which our module supports

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(uses KMS)

loren avatar

nice, this might make step functions rather a lot easier to define and use… https://aws.amazon.com/about-aws/whats-new/2021/03/aws-step-functions-adds-tooling-support-for-yaml/

RB avatar

edit: nvm, now the link is working

Matt Gowie avatar
Matt Gowie

This isn’t actually made by AWS, correct? I’m confused on that point.

managedkaos avatar
managedkaos

yeah it looks like an independent tool

Matt Gowie avatar
Matt Gowie

Yeah — I can’t see using it. This goes into the same reasons why I wouldn’t build a mobile app using a “build your mobile app using this fancy UI” tool: Things break down once you want to make things unique for the product or business.

Matt Gowie avatar
Matt Gowie

Basically my thought is cool idea, impossible to implement.

managedkaos avatar
managedkaos

yeah. i have seen the same for circuit design tools… basically you lay out the blocks and the connections and the tool generates the code. but then to really tweak things you have to tweak the code… which breaks the connection to the UI stuff.

managedkaos avatar
managedkaos

i would rather go the other way: here’s my code, please diagram it

Matt Gowie avatar
Matt Gowie

Yeah that’s the more possible approach.

Matt Gowie avatar
Matt Gowie

Regardless, I think we need to accept that machines don’t know enough about what we’re going to be building to ever do these types of jobs well enough beyond simple examples / the most boilerplate usage.

managedkaos avatar
managedkaos

i know there’s an AWS tool that kinda does that… can’t recall the name off the top but last i looked at it, you had to spin up some cloudformation and point it at the account to read the resources.

I guess there is also terraformer but that is a resource -> code tool.

Matt Gowie avatar
Matt Gowie

Yeah, AWS has a CloudFormation builder tool that I’ve seen once that is somewhat UI driven, but then you’re dealing with CF and…

Darren Cunningham avatar
Darren Cunningham

I think if your organization was going to stick with provided solutions/constructs, something like this might be possible… but if your org was that simple then there’s probably a SaaS solution out there for you

Mahmoud avatar
Mahmoud

Would anyone happen to have some CLI command for getting all load balancers with target groups with no listeners? We’re looking to clean up our dangling LBs, but I’m not super experienced with the AWS CLI https://www.cloudconformity.com/knowledge-base/aws/ELBv2/unused-load-balancers.html# I’ve mostly been following the steps here, but I’m attempting to create some kind of command to get all the ones with no target instances

Unused ELBv2 Load Balancers

Identify unused Elastic Load Balancers (ELBv2) and delete them in order to reduce AWS costs.

managedkaos avatar
managedkaos

@Mahmoud this worked for me:

echo -e "LoadBalancer\tListenerCount"

for i in $(aws elbv2 describe-load-balancers --query="LoadBalancers[].LoadBalancerArn" --output=text);
do
    echo -e "$(echo ${i} | cut -d/ -f 3)\t$(aws elbv2 describe-listeners --load-balancer-arn=${i} --query='length(Listeners[*])')"
done | tee report.txt
Mahmoud avatar
Mahmoud

Thank you, this is perfect!

managedkaos avatar
managedkaos

ahh, reviewing your request — this counts the listeners… not the target groups. let me update in a sec

managedkaos avatar
managedkaos

what you want is much easier! just one CLI call:

aws elbv2 describe-target-groups --query="TargetGroups[].{Name:TargetGroupName, LoadBalancerCount:length(LoadBalancerArns[*])}" --output=table
Mahmoud avatar
Mahmoud

This is really good and also outputs a bunch of resources we need to clean up, but I think I miswrote my initial question. I need all load balancers with target groups that have no registered targets. Basically, looking to clean up LBs that are in front of nothing

managedkaos avatar
managedkaos

hmmm yeah, you can extend it to see what’s in the target group i think…

managedkaos avatar
managedkaos

I’ve tried this command

aws elbv2 describe-target-groups --target-group-arn "arrnnnnn"

and it does not show the targets

Mahmoud avatar
Mahmoud

I think you have to use aws elbv2 describe-target-health and pass in the target group ARN to see the targets. You can’t get it from describe-target-groups

managedkaos avatar
managedkaos

yeah was just seeing that

managedkaos avatar
managedkaos

try this one:

echo -e "TargetGroup\tAttachmentCount"

for i in $(aws elbv2 describe-target-groups --query="TargetGroups[].TargetGroupArn" --output=text);
do
    echo -e "$(echo ${i} | cut -d/ -f 2)\t$(aws elbv2 describe-target-health --target-group-arn=${i} --query='length(TargetHealthDescriptions[*])')"
done | tee report.txt
managedkaos avatar
managedkaos

i will leave it to the reader as an exercise to find the load balancer as well
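
For the curious, a sketch of that exercise: for every target group with zero registered targets, print the load balancer ARN(s) it is attached to (empty output means the group is not attached to any LB):

for tg in $(aws elbv2 describe-target-groups --query="TargetGroups[].TargetGroupArn" --output=text);
do
    count=$(aws elbv2 describe-target-health --target-group-arn="${tg}" --query='length(TargetHealthDescriptions[*])')
    if [ "${count}" -eq 0 ]; then
        aws elbv2 describe-target-groups --target-group-arns "${tg}" \
            --query="TargetGroups[].LoadBalancerArns[]" --output=text
    fi
done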

Mahmoud avatar
Mahmoud

haha thanks for the assistance

2021-03-26

Fabian avatar

Hi. We have a daily process running, of which some jobs started failing about two days ago. Does anybody have an idea what might cause this? We’ve already followed the steps in https://aws.amazon.com/premiumsupport/knowledge-center/batch-job-failure-disk-space/ The AWS Batch job is failing with the error message “CannotPullContainerError: failed to register layer..: no space left on device”. This happens for only some jobs, not all. I have already created a launch template, given it 500G of storage, and in user data have set:

cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=300G"'
Joe Hosteny avatar
Joe Hosteny

I assume you are attaching that to /dev/xvda?

Fabian avatar

let me double check

Fabian avatar

“/dev/xvdcz”

Fabian avatar

“our AMIs have root as /dev/xvda”

Fabian avatar

mh

Fabian avatar

one of our engs set that up

Fabian avatar

do you think that might be the issue?

Joe Hosteny avatar
Joe Hosteny

It could be. IIRC, Batch used to have a default root partition size of 10 GB (it may have been bumped to 20 GB). If you have a large container (we have an unusually large one), it is possible you are running out of space on the root partition.

Joe Hosteny avatar
Joe Hosteny

We attach our larger disk directly to xvda
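
A sketch of that mapping in a launch template (template name and size are hypothetical); the point is to put the larger EBS volume on the root device /dev/xvda rather than /dev/xvdcz:

# Map the big volume to the root device /dev/xvda
aws ec2 create-launch-template \
  --launch-template-name batch-large-root \
  --launch-template-data '{
    "BlockDeviceMappings": [
      {
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp2", "DeleteOnTermination": true}
      }
    ]
  }'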

Fabian avatar

Ok, got it. I’ll double check our setup here. Thanks for pointing to a direction

Fabian avatar

Anyone?

walicolc avatar
walicolc

Check thread @Fabian


2021-03-29

Ashish Modi avatar
Ashish Modi

Morning everyone!

I was wondering if it is possible to control the recently announced Auto-Tune feature for AWS Elasticsearch using Terraform? See here - https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/auto-tune.html

Amazon Elasticsearch Service Auto-Tune - Amazon Elasticsearch Service

Learn how to use Auto-Tune for Amazon Elasticsearch Service to optimize your cluster performance.


Alex Jurkiewicz avatar
Alex Jurkiewicz

generally, for high profile features it takes a few weeks for support to land. For less popular services, updates can take months or even longer

Ashish Modi avatar
Ashish Modi

Thanks for your reply @Alex Jurkiewicz. Looks like it is not supported yet.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Add support for setting auto tune options in aws_elasticsearch_domain · Issue #18421 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…

Bart Coddens avatar
Bart Coddens

Hi all, an external customer wants us to enable an S3 policy on a bucket:

Bart Coddens avatar
Bart Coddens
{
    "Version": "2012-10-17",
    "Id": "Policy1472487163135",
    "Statement": [
        {
            "Sid": "Stmt1472487132172",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        },
        {
            "Sid": "Stmt1472487157700",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET_NAME"
        }
    ]
}
Bart Coddens avatar
Bart Coddens

I am a bit worried that they could open up the whole bucket as public

Bart Coddens avatar
Bart Coddens

but I am not sure, because this policy is locked to the specific principal

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

You are correct, this policy allows them to modify the policy on the bucket, thereby opening it to the public.
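
Since s3:* on the bucket includes s3:PutBucketPolicy and s3:PutBucketAcl, one option is to grant only the data-level actions the customer actually needs; the exact action list below is an assumption about their use case:

{
    "Sid": "Stmt1472487157700",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::1234567:root"
    },
    "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
    ]
}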

2021-03-30

steve360 avatar
steve360

We’re seeing CloudWatch log group load times in the console fluctuate between 5-60+ secs. This started in the last couple of weeks. What’s the most likely cause? A large number of log groups? Long retention? What level of performance can be expected from AWS for sub-1k log groups? What can be done to optimize performance?

Darren Cunningham avatar
Darren Cunningham

just some random thoughts that might help :man-shrugging:

what do you mean by “load times”:

• Listing the log groups – how many log groups do you have? # of metric filters, # of subscriptions
• Listing the log streams – how many streams are in the group?
• Opening a log stream and seeing the messages

Did you have any spike in usage for any of the above? Aka, did you unintentionally create 1000s of new log groups?

any chance you are closing in or at a Usage limit for your account for any of the above? Might need to submit a rate limit increase

Darren Cunningham avatar
Darren Cunningham

I just used the Network tab in the Developer Tools in Chrome - took me 35s to reload and I have 401 Log groups in said account – it’s probably been like that and I’ve never noticed.

steve360 avatar
steve360

We haven’t increased log groups significantly. We’re just trying to load a log stream. Performance is erratic: fast in one instance and slow right after when you refresh. AWS support is working on it now. They say there’s an issue.

steve360 avatar
steve360

Thanks for the quick reply @Darren Cunningham

Darren Cunningham avatar
Darren Cunningham

if they give you any meaningful response, please do share!

Darren Cunningham avatar
Darren Cunningham

I wouldn’t count on it though, probably just a generic “our bad”

steve360 avatar
steve360

Will do. Thanks!

steve360 avatar
steve360

no meaningful response as you predicted, but cloudwatch seems to be performing better now. who knows what was the problem.

Darren Cunningham avatar
Darren Cunningham

the typical MO is not to fully disclose…I think it comes from a paranoia of sharing some of the secret sauce

managedkaos avatar
managedkaos

Hey team! Question:

TLDR: is there a good way to connect to a private AMQ without SSL?

Details: I’m setting up an environment that uses Amazon MQ and I’d like to keep the service private (along with the rest of the resources). To that end, everything that needs to be private is sitting in a private subnet of the VPC: ECS, RDS, etc.

Because everything is private, I’m using the IP address to connect to the AMQ endpoint. However when i connect, the app fails with an SSL error:

cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs

This leads me to think that the connection is SSL and the cert that AWS is serving up doesn’t have the IPs in it, but rather the DNS name of the MQ instance.

Googling for a solution, I found one doc that recommended putting an NLB in front of the AMQ and connecting to that, but it seems (to me) that the connection might still fail; what about SSL validation between the ALB and the NLB? This solution also seems over-engineered and potentially expensive given the addition of the NLB on top of the AMQ instance(s). https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2019/09/10/Solution-overview.png

Anyway, just thought I would share this in the event someone has seen this type of issue before and knows a choose_one(good/reliable/affordable) solution for this problem. Cheers!

managedkaos avatar
managedkaos

I should add that unlike the diagram, my connection to AMQ is coming from inside the VPC from the same private subnet as the AMQ.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t use AMQ, but I am guessing it provides a DNS hostname as the endpoint, rather than a private IP address. Why not use that hostname?

managedkaos avatar
managedkaos

I’ve set the service to not be publicly accessible. Indeed there is a DNS name that comes with it and i was configuring the app to use that but it was timing out. i attributed that to the fact that MQ was not publicly accessible.

Alex Jurkiewicz avatar
Alex Jurkiewicz

this page suggests the hostname is like

https://b-1234a5b6-78cd-901e-2fgh-3i45j6k178l9-1.mq.us-east-2.amazonaws.com:8162
managedkaos avatar
managedkaos

I am thinking now, though, that I might have to do some sort of private DNS. but then, if I do that, I’m not sure that the cert would still resolve.

managedkaos avatar
managedkaos

yes, my host name is just like that

Alex Jurkiewicz avatar
Alex Jurkiewicz

ok, well you won’t need private DNS, since that will be a public DNS record. You should be able to resolve it from your PC, with host blah.amazonaws.com

managedkaos avatar
managedkaos

Private, using the DNS gives:

lookup b-7898a321-eac9-4db7-9d25-0ae2f020dabf.mq.us-west-2.amazonaws.com on 10.0.0.2:53: no such host

Private, using the IP gives:

cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs
Alex Jurkiewicz avatar
Alex Jurkiewicz

can you SSH to one of your client EC2 instances?

managedkaos avatar
managedkaos

well the SGs are locked down to those in use by the services. I could do a bastion and connect from there. no EC2s at the moment. Just ECS.

Alex Jurkiewicz avatar
Alex Jurkiewicz

so, I think you need to verify the hostname you are using

managedkaos avatar
managedkaos

yeah. at the moment I’m going to try opening up the AMQ just to see if i can get it connected. putting the MQ in a public subnet. i hate doing that but i just need a POC at the moment.

“I’ll lock it down later” - Famous last words

Alex Jurkiewicz avatar
Alex Jurkiewicz

the fact it can’t resolve suggests a typo to me

Alex Jurkiewicz avatar
Alex Jurkiewicz

because i’m 99% sure it will be a public hostname you can resolve from anywhere on the internet

managedkaos avatar
managedkaos

no typos. i am reading the info from SSM. the TF code writes the MQ hosts DNS name into a variable and the ECS reads it from there.

managedkaos avatar
managedkaos

this is me trying out different methods:

resource "aws_ssm_parameter" "amqp_host" {
  name        = "/${var.name}/${var.environment}/AMQP_HOST"
  description = "${var.name}-${var.environment} AMQP_HOST, set by the resource"
  # value     = "amqps://${aws_mq_broker.amq.id}.mq.${var.aws_region}.amazonaws.com:5671"
  value       = "amqps://${aws_mq_broker.amq.instances.0.ip_address}:5671"
  type      = "String"
  overwrite = true

....
Alex Jurkiewicz avatar
Alex Jurkiewicz

can you resolve the hostname yourself from your local machine?

managedkaos avatar
managedkaos

no. not when this is set to false:

resource "aws_mq_broker" "amq" {
...
  publicly_accessible        = true                        # false
....
}
managedkaos avatar
managedkaos

i’m going to try a config with it set to true. the MQ resources take 10-15 minutes to destroy and rebuild so it will be a few before i can report back

Alex Jurkiewicz avatar
Alex Jurkiewicz

that’s interesting. I would have thought a hostname ending in amazonaws.com would always be accessible. Maybe you can check the lookup with dig +trace host.amazonaws.com.

I know that some DNS servers will refuse to resolve a public hostname that points to internal IP addresses, for security reasons. It might be this is why you can’t resolve it from your laptop when publicly_accessible = false. But dig +trace is low-level enough to ignore that rule.

managedkaos avatar
managedkaos

yeah that might make sense. i didn’t check to see the IP that the hostname resolved to when i had it config’d as private.

managedkaos avatar
managedkaos

ahh i did check it. it was a 10. IP address which is indeed not internet routable. so yeah, kinda confusing. I have the DNS name, it points to the right place (internal IP) but the cert has the public name…. which can’t be resolved?

I will check the app as well. there might be a way to get it working from the back end with everything private if I can ignore the SSL connection and just connect.

Alex Jurkiewicz avatar
Alex Jurkiewicz

One other thing the docs suggest:
To ensure that your broker is accessible within your VPC, you must enable the enableDnsHostnames and enableDnsSupport VPC attributes

Creating and connecting to an ActiveMQ broker - Amazon MQ

Explains the workflows of creating and connecting to an ActiveMQ broker
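
A quick way to check (and, if needed, enable) those two attributes from the CLI, as a sketch with a hypothetical VPC ID:

# Check the two attributes the docs call out
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# Enable them if needed
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value":true}'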

managedkaos avatar
managedkaos

cool i will definitely look into that!

2021-03-31

Alencar Junior avatar
Alencar Junior

I’m providing authorization to an API Gateway (proxy integration) with Cognito, and I have a Lambda function (dockerized) requesting the API endpoint https://{id}.execute-api.{region}.amazonaws.com. I would like to know if it is possible to allow any resource within AWS, including my dockerized Lambda functions, to access the API without authentication? Currently getting the response {"message":"Unauthorized"} Note: that’s a public API, since I have external apps requesting it.

Alencar Junior avatar
Alencar Junior

Thanks @Alex Jurkiewicz!

Darren Cunningham avatar
Darren Cunningham

Does anybody know if there is an AWS provided SSM parameter for the elb-account-id like how they provide SSM parameters for AMI IDs?

Darren Cunningham avatar
Darren Cunningham

I put a support case in too - I’ll update if I hear anything

Alex Jurkiewicz avatar
Alex Jurkiewicz

if you’re using terraform, it provides them as a data source

Darren Cunningham avatar
Darren Cunningham

AWS Support confirmed that there is not currently an SSM Parameter that I could use for this.

My choices are to either create SSM Parameters (which I’m considering) or use a map in my CloudFormation templates.
