#aws (2021-03)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2021-03-01
![Troy Taillefer avatar](https://secure.gravatar.com/avatar/c49655ddf3dd01cd93556a4ddb0c6f1d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
I am trying to make a Kinesis autoscaler Lambda based on existing code: basically, update the shard count based on an incoming-records alarm metric. During testing I noticed something odd when using AWS CLI commands to get the number of shards. describe-stream-summary says the OpenShardCount is one, which seems like the right answer, but describe-stream and list-shards report there are 4 shards. Which is correct? Why are they not consistent? Hope there is a Kinesis expert here who can explain what is going on, thanks
![Troy Taillefer avatar](https://secure.gravatar.com/avatar/c49655ddf3dd01cd93556a4ddb0c6f1d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
I think I understand: the shards are not yet expired, and are still readable but not writable, because of the retention period
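That matches how the APIs behave: list-shards and describe-stream keep returning closed parent shards until retention expires, while describe-stream-summary's OpenShardCount only counts shards whose SequenceNumberRange has no EndingSequenceNumber. A small sketch against a made-up list-shards response (field names follow the Kinesis API; the sample values are invented):

```python
def count_open_shards(shards):
    """A shard is open while its SequenceNumberRange has no
    EndingSequenceNumber; closed (re-sharded) parents keep an ending
    sequence number and stick around until retention expires."""
    return sum(
        1 for s in shards
        if "EndingSequenceNumber" not in s.get("SequenceNumberRange", {})
    )

# Sample shaped like `aws kinesis list-shards` output: one open shard
# plus three closed parents left over from previous re-sharding.
sample = [
    {"ShardId": "shardId-000000000000",
     "SequenceNumberRange": {"StartingSequenceNumber": "1",
                             "EndingSequenceNumber": "100"}},
    {"ShardId": "shardId-000000000001",
     "SequenceNumberRange": {"StartingSequenceNumber": "1",
                             "EndingSequenceNumber": "200"}},
    {"ShardId": "shardId-000000000002",
     "SequenceNumberRange": {"StartingSequenceNumber": "1",
                             "EndingSequenceNumber": "300"}},
    {"ShardId": "shardId-000000000003",
     "SequenceNumberRange": {"StartingSequenceNumber": "301"}},
]
print(count_open_shards(sample))  # prints 1, matching OpenShardCount
```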
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Right. Not all shards are open. Anyway, there are off-the-shelf solutions for auto-scaling Kinesis streams, I would highly recommend using them instead of writing your own: https://aws.amazon.com/blogs/big-data/scaling-amazon-kinesis-data-streams-with-aws-application-auto-scaling/
![attachment image](https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/10/31/kinesis-auto-scaling-1-699x630.gif)
Recently, AWS launched a new feature of AWS Application Auto Scaling that let you define scaling policies that automatically add and remove shards to an Amazon Kinesis Data Stream. For more detailed information about this feature, see the Application Auto Scaling GitHub repository. As your streaming information increases, you require a scaling solution to accommodate […]
![Troy Taillefer avatar](https://secure.gravatar.com/avatar/c49655ddf3dd01cd93556a4ddb0c6f1d.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
@Alex Jurkiewicz Thanks, I based my solution on that code https://github.com/aws-samples/aws-application-auto-scaling-kinesis from the article you linked to, but found issues with both the CloudFormation and the Python lambda code. So I am improving it to make it more production-ready.
Leveraging Amazon Application Auto Scaling you have now the possibility to interact to custom resources in order to automatically handle infrastructure or service resize. You will find a demo regar…
2021-03-03
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
i have CF with S3 origin, the origin has origin_path = "/build", CF has its first behavior as "/url/path/*".
I get a "The specified key does not exist" error, and the Key ends up being /build/url/path/index.html
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
I can access the files from root of the cdn but not from my path pattern
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
do i have to have the origin folder structure (s3) match my behavior path?
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Yes
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
this client's Jenkins S3 plugin does not allow me to do that
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
You could rewrite the path with a lambda@edge function
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
thats where im at now
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
still need a lambda, because if it's a folder request it has no idea what to do with index.html
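For reference, a Lambda@Edge origin-request handler along these lines can do both rewrites at once: strip the behavior's path pattern (so it isn't appended on top of origin_path) and fall back to index.html for folder-style URIs. The /url/path prefix is taken from the thread; everything else is a sketch, not a drop-in function:

```python
# Hypothetical origin-request handler. PREFIX matches the CloudFront
# behavior's path pattern; adjust it for your distribution.
PREFIX = "/url/path"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]
    # Strip the behavior prefix so S3 sees <origin_path>/<rest>,
    # not <origin_path>/url/path/<rest>.
    if uri.startswith(PREFIX):
        uri = uri[len(PREFIX):] or "/"
    # Folder-style requests need an explicit default document.
    if uri.endswith("/"):
        uri += "index.html"
    request["uri"] = uri
    return request
```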
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
it's fine, i'm done
2021-03-04
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I am a bit puzzled by a network issue
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Machine has two firewall groups assigned: outbound = all open, inbound = ssh open from my ip
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I can access ssh from my workstation
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Making an ssh connection FROM the instance does not work
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
when I tcpdump my traffic, I can see traffic going out of the machine
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
ha I found it, the configuration of the firewall group changed a bit
![Maycon Santos avatar](https://secure.gravatar.com/avatar/d24ab7fa13f0865ed3913fb2d69c57c4.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0004-72.png)
Have you checked with this tool?
https://aws.amazon.com/blogs/aws/new-vpc-insights-analyzes-reachability-and-visibility-in-vpcs/
![attachment image](https://d2908q01vomqb2.cloudfront.net/827bfc458708f0b442009c9c9836f7e4b65557fb/2020/06/03/Blog-Post_thumbnail.png)
With Amazon Virtual Private Cloud (VPC), you can launch a logically isolated customer-specific virtual network on the AWS Cloud. As customers expand their footprint on the cloud and deploy increasingly complex network architectures, it can take longer to resolve network connectivity issues caused by misconfiguration. Today, we are happy to announce VPC Reachability Analyzer, a […]
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
ha interesting
![Jonathan Le avatar](https://avatars.slack-edge.com/2022-06-30/3743020264469_11185ecccf85573f89bc_72.jpg)
I checked it out recently. Worked pretty well.
![Marcin Brański avatar](https://secure.gravatar.com/avatar/7f3c56304d6e3adb7658889af56cd171.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0001-72.png)
Woooohooo! So simple, and now it's there. I shouldn't be this happy about it, but every time I set up ELK on AWS (soo many times) I check if it's available, and here it is.
Amazon Elasticsearch Service now supports rollups, reducing storage costs for extended retention
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Can we use this instead of the lambda we have to purge old log indexes?
![Marcin Brański avatar](https://secure.gravatar.com/avatar/7f3c56304d6e3adb7658889af56cd171.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0001-72.png)
Hmm, rollups would be something different: aggregating old data into a new index with lower data resolution.
I think you mean the curator lambda. Recently they also introduced Index State Management (ISM). I haven't used it, but it seems possible with that, although it's not as robust as curator.
This policy from the docs removes the replicas after 7 days and deletes the index after 21 days:
```json
{
  "policy": {
    "description": "Changes replica count and deletes.",
    "schema_version": 1,
    "default_state": "current",
    "states": [
      {
        "name": "current",
        "actions": [],
        "transitions": [
          {
            "state_name": "old",
            "conditions": { "min_index_age": "7d" }
          }
        ]
      },
      {
        "name": "old",
        "actions": [
          { "replica_count": { "number_of_replicas": 0 } }
        ],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": { "min_index_age": "21d" }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          { "delete": {} }
        ],
        "transitions": []
      }
    ]
  }
}
```
2021-03-05
![Takan avatar](https://secure.gravatar.com/avatar/4e8a0cf5b5908e2d294170ac754b247c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
hi guys, does anyone know how to create "trusted advisor" in terraform?
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
see this https://github.com/aws/Trusted-Advisor-Tools and implement it using Terraform; you could also create a module and publish it
The sample functions provided help to automate AWS Trusted Advisor best practices using Amazon Cloudwatch events and AWS Lambda. - aws/Trusted-Advisor-Tools
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
you need to define these in Terraform
![Takan avatar](https://secure.gravatar.com/avatar/4e8a0cf5b5908e2d294170ac754b247c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
thanks a lot for your help bro!
![Takan avatar](https://secure.gravatar.com/avatar/4e8a0cf5b5908e2d294170ac754b247c.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
can we upgrade the version of CloudFront's security policy in terraform?
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Hi everyone, let's say that I have a terraform setup with an rds instance. After a while, I want to restore to a given point in time through a snapshot that I'm creating every day. Given that AWS limits you to restoring the snapshot to a NEW instance, how can I still control this new instance using terraform? What's the correct process to have here?
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
you can use the cloudposse terraform-aws-rds-cluster module and just create a clone cluster from a snapshot
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
or the rds instance module
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
and then you switch endpoints in your app, or use route 53 records that are CNAMEs to the real endpoints
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
or use rds proxy in front and change the endpoints of the proxy to point to the new instance/cluster
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
We have a task that saves the db to a secure S3 bucket so we don't have to rely on those rules. Once you set it up, it's really not that hard to maintain.
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
(in sql format)
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Restore manually and then import the resource
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Is there a guide online on maintaining such a database in a production environment?
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
I feel that saving to s3 can cause data failures, depending on what happened in the db during the backup/restore
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Like, let's say that you get 24/7 traffic consistently, hundreds of operations a second
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
How can I restore to a point in time without losing info?
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
it should do it in a transaction
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
and notify if it fails
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
i dunno, it's based on use case, and also you gotta weigh convenience (s3) against reliability (images)
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
we maintain a large production application for a very large car company and we save db backups to s3
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
never had issues
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
And in order to restore the db in place, you delete everything and pg_restore it?
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
have you tried a clone?
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
a 600 GB db takes about 5 min to clone
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
in Aurora
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
it is pretty fast; restoring from a snapshot takes longer
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
What I'm thinking about is the mutations that happen during this kind of backup. Where do they go? Assuming that I don't have a hot backup or some complicated setup
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
what do you mean? when a snapshot is issued any new transaction after the snapshot is not recorded in the snapshot
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Maybe I missed on the “clone” part. Is that a feature of rds?
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
The snapshot is a copy of an instant in time, from when the snapshot started. If your snapshot starts at 10am, it will be a copy of your database as of 10am. Even if the snapshot takes 15 mins to create, there will be no data from after 10am
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
@Alex Jurkiewicz that's also the case for a pg_dump?
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Sure but I wouldn’t use that for a real database. The restore time is unworkably slow
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
On the same note, is there a good reason to use aurora compatible with postgres over simply rds?
![Pavel avatar](https://secure.gravatar.com/avatar/ec6c73d48fc7a2cbb1852f0952efb1ec.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
i'm curious, what are you trying to guard against on a production database by having these backups run so often?
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Given a single-instance setup, if there's an issue that requires a restore of a backup, the data between the last snapshot and current time will be lost. The solution is obviously to use some sort of cluster, but I'm trying to see all options in advance
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
aurora storage layer is the magic behind aurora and is VERY fast
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
if you need replication of transactions then you need a cluster and a replica cluster
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Maybe aurora solves it for me.. seems like it stores data on s3 and enables in place restore to a point in time
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Most companies/products find the risk of losing some data due to periodic backups acceptable. I'm not saying your product is also like this. But if you are looking for higher availability/disaster recovery guarantees, it is going to cost you a lot, in both time and operational complexity. I suggest you consider carefully how important going above and beyond the standard tooling is for your product.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Also, if you are a company with these higher-than-normal requirements, you would have an Amazon account rep, and they would be very happy to organise many presentations for you from the RDS team about all the many ways to give them more money. You should take advantage of that
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
Might as well just use https://litestream.io huh
![attachment image](https://litestream.io/images/twitter-image.png)
Litestream is an open-source, real-time streaming replication tool that lets you safely run SQLite applications on a single node.
![Ofir Rabanian avatar](https://avatars.slack-edge.com/2020-12-08/1563832876004_d3d739393d834a14998a_72.png)
I love the simplicity behind it
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
shamelessly asking for upvotes here https://github.com/99designs/aws-vault/pull/740
tldr, we figured out a way to plist the aws-vault --server https://gist.github.com/nitrocode/cd864db74a29ea52c7b36977573d01cb
Closes #735 Thanks to @myoung34 for most of the help in adding the --no-daemonize switch. This allows the --server to be nohupped. $ make aws-vault-darwin-amd64 $ nohup ./aws-vault-darwin-amd64 \ …
![MrAtheist avatar](https://secure.gravatar.com/avatar/924a357229c69bbf33fea0c9f04505c9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0010-72.png)
Anyone know why AWS doesn't have a default iam policy for "ecs read only"? i have to create one just for this… ¯\_(ツ)_/¯
2021-03-07
![msharma24 avatar](https://avatars.slack-edge.com/2021-07-12/2274860926897_140ea0637d985071847a_72.jpg)
Hi Guys - Is there a way to get the AWS Organisation ID (unique identifier) via AWS CLI / API?
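On the CLI side, `aws organizations describe-organization --query Organization.Id --output text` returns the o-... identifier. Via boto3 it's the same call, just reading one field out of the response; a sketch with an invented response fragment shaped like the API's output:

```python
def organization_id(response):
    """Extract the o-... identifier from a DescribeOrganization response."""
    return response["Organization"]["Id"]

# With boto3 this would be:
#   import boto3
#   org_id = organization_id(boto3.client("organizations").describe_organization())
#
# Invented response fragment, shaped like the API's output:
sample = {
    "Organization": {
        "Id": "o-exampleorgid",
        "Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid",
        "MasterAccountId": "111111111111",
    }
}
print(organization_id(sample))  # prints o-exampleorgid
```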
2021-03-08
![michaelssingh avatar](https://secure.gravatar.com/avatar/b962c2c6665b86151f6cff2a5b0c34b1.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I am running into some difficulties with provisioning a Windows EC2 instance with a PowerShell script, which is passed into an aws_launch_configuration as such:
user_data_base64 = base64encode(data.template_file.powershell.rendered)
The script is also quite simple, it downloads a .exe from the internet and then starts a silent install with Start-Process:
```powershell
<powershell>
Start-BitsTransfer -Source ...
Start-Process ..
</powershell>
```
This is my first time working with Powershell and provisioning Windows EC2s so I may be missing something but when I RDP into the machine the executable is neither downloaded nor installed.
![michaelssingh avatar](https://secure.gravatar.com/avatar/b962c2c6665b86151f6cff2a5b0c34b1.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
If I paste the contents of the PowerShell script into PowerShell on the instance, however, it works as expected.
2021-03-10
![Patrick Jahns avatar](https://avatars.slack-edge.com/2021-01-07/1617422085682_1417104395b2c9f52fbe_72.png)
Does anyone have a list of common dns names for aws services? I am trying to get a feeling for their patterns
![Issif avatar](https://avatars.slack-edge.com/2019-12-02/848866457345_6b17c415c518a84814ce_72.png)
you mean service endpoints? https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html
See the service endpoints and default quotas (formerly known as limits) for AWS services.
![Patrick Jahns avatar](https://avatars.slack-edge.com/2021-01-07/1617422085682_1417104395b2c9f52fbe_72.png)
That's already quite useful, thank you! However, I was actually wondering about the DNS entries for the instances that customers receive, i.e. for RDS or MSK etc.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
the hostnames for resources are all highly service-specific. There are no patterns, really
![Patrick Jahns avatar](https://avatars.slack-edge.com/2021-01-07/1617422085682_1417104395b2c9f52fbe_72.png)
Do you know of an overview list in general? I'm working on some dns naming schemes for a service and was thinking of getting inspired by AWS
![Issif avatar](https://avatars.slack-edge.com/2019-12-02/848866457345_6b17c415c518a84814ce_72.png)
I don’t think they provide a list
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
can anyone help with this please? it's like RDS has only partially completed the upgrade from 5.6 to 5.7
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
is there a way to force the pending modifications now instead of waiting for the maintenance window?
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
yes, ‘apply immediately’
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
it’s an option you can pass when modifying an rds instance/cluster. Either via web console or api
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
i am trying to set that now
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
but it won't let me set it
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
i can see the pending modifications via the API but can’t seem to apply them
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
are you passing apply immediately and a change to the config? You can’t pass only ‘apply immediately’ with no changes
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
i don't want to make any changes though, i want it to apply the pending mods, e.g. upgrade to 5.7
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
you need to re-submit the pending modification with apply_immediately set to true
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
when you submit a change with that flag, all pending modifications are immediately applied
![Steve Wade (swade1987) avatar](https://avatars.slack-edge.com/2022-12-08/4499411930625_2768e5fdceec550e6669_72.jpg)
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: Current Parameter Group (de-prd-yellowfin-01-20210219152210698600000003) is non-default. You need to explicitly specify a new Parameter Group in this case (default or custom)
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Has anyone enforced IMDSv2 on their instances and had problems with cloud-init not starting?
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I think a co-worker just hit this
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
When working with instance user data, keep the following in mind:
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
There's a fix in, similar to this, I believe
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
the doc does not describe how cloud-init deals with generating the token, which is the problem
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
in my user data I can modify the script and add those calls
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
but when I was testing, it was cloud-init (without user-data) complaining about it
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
as you can see here:

```shell
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
  && curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/user-data
```
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
they call the api to get the user-data and the user data does not have the token call
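For anyone debugging the same thing, the token dance is two requests: a PUT to mint a session token, then a GET with the token header. A Python sketch of that flow (the base URL is parameterised only so it can be exercised off-instance; on EC2 the default applies):

```python
import urllib.request

def imds_get(path, base="http://169.254.169.254", ttl=21600):
    """Fetch a metadata path the IMDSv2 way: PUT for a session token,
    then GET with the X-aws-ec2-metadata-token header."""
    token_req = urllib.request.Request(
        f"{base}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    data_req = urllib.request.Request(
        f"{base}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(data_req, timeout=2).read().decode()

# On an instance: print(imds_get("/latest/user-data"))
```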
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
so it is pretty damn confusing
2021-03-11
2021-03-14
![Vlad Ionescu (he/him) avatar](https://avatars.slack-edge.com/2020-10-03/1417676895681_ea45b3f22e5fea04f2fc_72.png)
FYI, https://pages.awscloud.com/pi-week-2021 is happening! It will be a fun one: a bunch of S3, data in general, and some Serverless
2021-03-15
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
I want to host a single file over HTTPS with a custom domain. Is there a simpler solution than S3 bucket + CloudFront + ACM cert? Simpler meaning serverless, no ec2 + nginx in user-data solutions
![Vlad Ionescu (he/him) avatar](https://avatars.slack-edge.com/2020-10-03/1417676895681_ea45b3f22e5fea04f2fc_72.png)
Amplify Console which is basically S3+CF+ACM+CI/CD+others? It’s easier to manage, but no Terraform support yet
AWS Amplify offers a fully managed static web hosting service that accelerates your application release cycle by providing a simple CI/CD workflow for building and deploying web applications.
![pjaudiomv avatar](https://secure.gravatar.com/avatar/40f13c8f113a13f5b9730c8cd47ec9ee.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0013-72.png)
as far as AWS goes, it's probably s3 or ec2 like you mentioned
![pjaudiomv avatar](https://secure.gravatar.com/avatar/40f13c8f113a13f5b9730c8cd47ec9ee.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0013-72.png)
you could use a lambda with alb too
![roth.andy avatar](https://avatars.slack-edge.com/2019-09-18/753707271651_6f58c1cbab3c77754f58_72.jpg)
GitHub Pages is what I'd go for at this point. I've used Netlify as well; it worked really well and was free (but I still prefer GitHub Pages)
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
someone appears to have published and then retracted this post (but it popped up in my AWS News RSS), so I think we're going to see a Fargate exec tool soon! https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-exec[…]commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
![attachment image](https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2021/03/12/image-2021-03-12T040602.361-1260x432.png)
Today, we are announcing the ability for all Amazon ECS users including developers and operators to “exec” into a container running inside a task deployed on either Amazon EC2 or AWS Fargate. This new functionality, dubbed ECS Exec, allows users to either run an interactive shell or a single command against a container. This was one of […]
![Marcin Brański avatar](https://secure.gravatar.com/avatar/7f3c56304d6e3adb7658889af56cd171.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0001-72.png)
wow, that’s a neat feature for debugging ecs
2021-03-16
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Interesting, ended up git cloning requests when virtualenv didn’t work. Anyone encountered this before?
[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'requests'
END RequestId: 7ed24ad6-1b95-4600-9a35-d379726f6b47
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Your code package didn’t include requests
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
I did install it via pip in the virtualenv
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Followed by a pip freeze to generate the requirements file
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
How did you upload your code to lambda? Directly as a zip?
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
zip -r9 fuckingWork.zip .
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
aws s3 cp fuckingWork.zip s3://bucketName
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Like dat
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
Run zipinfo on the zip file and verify it contains requests
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
You can also extract the zip in a clean docker image for python and see if “import requests” works
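A self-contained version of the zipinfo check (a sketch; the archive path and module name below are placeholders): list the archive contents and confirm the dependency sits at the top level, which is where the Lambda runtime expects it.

```python
import zipfile

def package_has_module(zip_file, module):
    """Return True if `module` is present at the top level of the zip,
    either as module.py or as a module/ package directory."""
    with zipfile.ZipFile(zip_file) as zf:
        names = zf.namelist()
    return any(
        name == f"{module}.py" or name.startswith(f"{module}/")
        for name in names
    )

# e.g. package_has_module("function.zip", "requests")
```

If this returns False while pip freeze shows the package, the zip was probably built from the wrong directory, or wraps everything in an extra top-level folder, which Lambda also can’t import from.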
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
I’ll go with zipinfo, it’s cleaner
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
thanks my man!
![Maciek Strömich avatar](https://secure.gravatar.com/avatar/98de12365b633b063e208220100d4594.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0002-72.png)
FYI: previously requests was available via the ‘botocore.vendored’ package. It was deprecated in January and removed: https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/
![attachment image](https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2020/01/02/botocore-requests-2-1260x557.png)
Update (January 19, 2021): The deprecation date for the Lambda service to bundle the requests module in the AWS SDK is now March 31, 2021. Update (November 23, 2020): For customers using inline code in AWS CloudFormation templates that include the cfn-response module, we have recently removed this module’s dependency on botocore.requests. Customers will need […]
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Cheers man, I came across this whilst debugging. I’m not using this module in particular so dismissing it was easy
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Turned out it was just a recursive issue with the zipped file. Thx all.
2021-03-17
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. - salesforce/cloudsplaining
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
new manager with retail background I guess
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
does anyone write their lambdas such that they understand a common, fake “test event”? such that you can invoke it with that event just to validate that the package really has the imports it needs?
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
@loren what do you mean by fake event ?
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
You just need to create an empty test event/message
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
And use it as a test event
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
something like:
{
"test_event": "this is a test"
}
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Right…
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
yeah, i know how… i’m wondering if it’s a pattern others are using or contemplating
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I use that for cloud custodian lambdas
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
In my particular case, I prefer using real events/with real valid payloads
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
the lambda would check that key and if present run some simple test logic or just return
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
i also prefer real events, but we have some lambdas where the function makes an aws call using the value from the event. that value is dynamic, and not persistent. for example, at the organization level, a CreateAccountRequest event is generated when a new account is created. i can’t use a “real” event, or i end up doing “real” things to “real” accounts. and i can’t fake the CreateAccountRequest because then the lambda cannot actually get the CreateAccountRequest status
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
@loren Your lambda functions should be idempotent, meaning that if you execute the same lambda function several times with the same payload, you should get the same result
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
if only life were so simple
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
that CreateAccountRequest actually disappears after some time, so we can be idempotent for a while, but eventually the event itself becomes invalid
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
we do have a valid-ish payload, with just fake data, and currently we catch the exception in the test. if the lambda gets that far, we know the package is good. and we do unit tests on the code so we’re reasonably confident about the code behavior
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Ok… that makes sense
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
but having valid-ish payloads for every event is a real pain to discover and doesn’t scale to hundreds of functions, when the thing i most care about is just validating that the package is actually good
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
so i was thinking, if i modify every lambda to understand this “fake” test event, and use that to validate the package, i can apply the same test to every lambda
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
and i can enforce that the lambda understand the test event by running that test for every lambda in CI with localstack
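A sketch of what that handler convention could look like (the `test_event` key and the `process` function are illustrative names, not from any framework): the handler short-circuits on the synthetic event, so a successful invoke proves the package imported and loaded.

```python
def process(event):
    # Stand-in for the real business logic.
    return {"ok": True, "records": len(event.get("Records", []))}

def handler(event, context=None):
    # Synthetic CI smoke-test event: if execution reaches this point,
    # every top-level import in the package resolved, which is all the
    # packaging check needs to verify.
    if isinstance(event, dict) and event.get("test_event"):
        return {"ok": True, "smoke_test": True}
    return process(event)
```

Because the check lives inside the handler, the same `{"test_event": "..."}` payload works against every function, regardless of what real event shape it normally consumes.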
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
@RB i’m interested in hearing more about your experience with this pattern
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
i just use a generic test event. the json input doesn’t matter with cloud custodian lambdas since they trigger on a cloudwatch cron, so i just use any json to kick off the lambda and check the output to make sure it didn’t throw an error
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
I personally think you should take care of this in the build pipeline.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
@maarten can you expand? we are running the tests in the build pipeline…
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
right, i meant simply running node_with_version_x index.js, which would find bad imports and doesn’t execute anything. And otherwise I’m thinking of the serverless toolset to invoke locally. Or better even, https://www.serverless.com/blog/unit-testing-nodejs-serverless-jest
Create unit tests for Node.js using the Serverless Framework, run tests on CI, and check off our list of serverless testing best practices.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
yeah, this is terraform, so i’m using localstack to mock the aws endpoints… and configuring the provider to use the localstack endpoints
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
run terraform apply, invoke the lambda, inspect the result to determine pass/fail
![kalyan M avatar](https://avatars.slack-edge.com/2020-10-03/1398345450950_bc7201908657ee843c28_72.png)
Hi guys, what are some of the top/must-use tools for managing Kubernetes on AWS EKS or other clusters? Any recommendations on best practices?
2021-03-18
![Maciek Strömich avatar](https://secure.gravatar.com/avatar/98de12365b633b063e208220100d4594.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0002-72.png)
Anyone using Cognito’s MFA functionality? How do you block the ability to disable a previous MFA setup by calling AssociateSoftwareToken again and again? This call can be sent by anyone (it just requires a valid access token), and if it’s sent a second time it automatically overrides the previous setup and disables MFA on login.
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
let me know what support says:)
![Maciek Strömich avatar](https://secure.gravatar.com/avatar/98de12365b633b063e208220100d4594.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0002-72.png)
It’s not a bug, it’s a feature
![maarten avatar](https://avatars.slack-edge.com/2020-09-28/1393040065826_b0d13cfde15deff02026_72.png)
You should try to get min. TLS 1.2 on Cognito :’)
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
the author provided some feedback against it. if anyone is interested in daemonizing aws-vault using launchd, please leave some feedback.
aws-vault: Start metadata server without subshell (non-daemonized)
I am using the latest release of AWS Vault $ aws-vault --version I have provided my .aws/config (redacted if necessary) [profile sso_engineer] sso_role_name = snip_engineer sso_start_url = https://…
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
interesting point of view, a bit close-minded
![Ikana avatar](https://secure.gravatar.com/avatar/2d47fa3cccddfb814329e5d07bd1f228.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Is it possible to contract cloudposse’s services through an AWS marketplace private offer?
![Ikana avatar](https://secure.gravatar.com/avatar/2d47fa3cccddfb814329e5d07bd1f228.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Sorry for the spam but I feel this is relevant to you folk @Erik Osterman (Cloud Posse)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
That’s really interesting. We haven’t pursued it yet.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Hey folks — Is anyone using an external security SaaS product like Fugue or others to replace AWS Config / Security Hub? Our AWS account rep is suggesting we utilize https://www.fugue.co/ and I’d be interested in hearing folks’ thoughts.
Fugue puts engineers in command of cloud security and compliance.
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
I played with them before; they partnered with Sonatype to create an IaC offering to check TF code
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
Fugue created Regula, and they have some ML/engine to check policies and such, and offer IAM management too
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
recently I have been using Cloud Conformity from Trend Micro
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
they all feel similar in what they do and give reports on
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
do they have any value? I do not know. I do not think they add much, and over time Security Hub (Inspector, Config, GuardDuty) is going to eat them alive, I think
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
that is the amazon way
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Got it — Thanks for the perspective Pepe. I’m interested because it looks a bit daunting to implement all those tools: Inspector, Config, GD. And if I can skip that for a slight premium… then that’s of interest.
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
we’re using a managed/bundled version of Prisma Cloud, which is similar I guess to Fugue (from a cursory 5 second google)
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
primary annoyance is that their rules seem based around using AWS strictly as some sort of internal business network replacement and not running a product
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
one thing to keep in mind is that all the remediation rules/configs you will need to implement to resolve the findings will be 80% of the work of setting up Config/CloudWatch/GuardDuty etc. Don’t fool yourself into thinking it’s going to be less work
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
most of these products require Config enabled, etc.
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
you will have a warning that will say “Enable GuardDuty”…
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
haha yes one of the findings I keep suppressing is “enable config recording for all resources”
![this](/assets/images/custom_emojis/this.png)
![Yoni Leitersdorf (Indeni Cloudrail) avatar](https://avatars.slack-edge.com/2020-08-26/1310888406231_2dc8c60843ac09dc06bb_72.jpg)
I may be extreme in my opinion here, but I honestly think the majority of the focus should go towards IaC scanning. Whether it’s Fugue/checkov/tfsec/Cloudrail, the future is in IaC.
The reason is that even when you find something in your live environment, through Fugue, Prisma, Dome9 or AWS’s own tools, no one on your dev team will want to fix it. So you’ll have a nice JIRA ticket sitting there, not moving.
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
VCs are realizing that there can be billions in fines for bad code and bad security practices
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
remember, the Equifax fix was like 15 lines of code and one hour of work
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
(it could be even less lines I think)
![Yoni Leitersdorf (Indeni Cloudrail) avatar](https://avatars.slack-edge.com/2020-08-26/1310888406231_2dc8c60843ac09dc06bb_72.jpg)
If it’s caught during development, it’s one hour. If caught in a vuln scanning in prod…
![jose.amengual avatar](https://secure.gravatar.com/avatar/32f267b819eac9e0ea6a8324b53064a0.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0024-72.png)
exactly, that is why the sec scanning of code and infra should happen at build time (shift left)
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
We are trialling Lacework at the moment. It’s quite a heavy solution and very far “to the right”, e.g. it runs in your prod account and picks up errors post-deploy. But the coverage is very comprehensive. Not sure if I’d recommend it or not yet
![Or Azarzar avatar](https://avatars.slack-edge.com/2021-03-21/1882953126259_c878c6de33781c221069_72.jpg)
check us out, you can reach out in a dm if you want more info, i’m the CTO
![attachment image](https://img.pagecloud.com/xgO5J5MpJAmk90nbhw-serySfq8=/1300x0/filters:no_upscale()/lightspinio/Open_Grap_-_Home-a2bbb.jpg)
Lightspin is a contextual cloud security platform that continuously visualizes, detects, prioritizes, and prevents any threat to your cloud stack
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
How does everyone handle MFA for root credentials for your AWS accounts? Someone had the idea to just use an OTP device and store it in a safe, but that will take 2h+ for anyone local, and if you’re in another state then you’re screwed. A workaround would be to just open a case with Amazon to reset MFA, which we’re fine with. Search wasn’t super helpful… help, por favor!
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
We have the QR code for MFA stored in LastPass
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
That simple … a few people have access to that QR code
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
LOL of course something that simple would work…thanks
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
Seems like you’d love to involve the CIA/NSA/FBI and the S.H.I.E.L.D agents to safeguard the QR code
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
New to this team but I’ll be sure to find the tinfoil-hat guy. There’s always at least one.
![Santiago Campuzano avatar](https://secure.gravatar.com/avatar/f8f05f122df51440e3bd79dd0feb089b.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0000-72.png)
LOL
![Zach avatar](https://avatars.slack-edge.com/2020-07-21/1278358623280_e99d673db1471fc93095_72.jpg)
we have h/w tokens at the moment but due to the shift to remote are going to move them to ‘software tokens’ in a password store service
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
right on, we’re using hashicorp vault for the password part, trying to figure out the second factor https://aws.amazon.com/iam/features/mfa/?audit=2019q1
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
You can use 1password as an OTP generator by providing it a QR code. Then you can share that OTP generator with your teammates in a shared vault
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Ahhh, you can also just grab a physical code from AWS directly… Thanks guys. This was much simpler than I realized it would be. I didn’t know there would be so many options.
2021-03-19
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Anybody using API Gateway and figured out a way to consolidate its logs? Currently each request creates 28 log messages. Creating 28 million log messages per million requests is silly. Not a ghastly expense, but one that I’d like to mitigate.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I created a support request too. I’ll update the thread for those that might be interested.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
you happen to see this post already? https://www.alexdebrie.com/posts/api-gateway-access-logs/
Learn the what, why, and how of API Gateway access logs.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I have not, thank you
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
thanks @loren - I updated my API Gateway and have the desired result now in my Datadog Logs view
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
deployOptions: {
  loggingLevel: apigateway.MethodLoggingLevel.OFF,
  accessLogDestination: new apigateway.LogGroupLogDestination(logGroup),
  accessLogFormat: apigateway.AccessLogFormat.custom(`{"requestTime":"${apigateway.AccessLogField.contextRequestTime()}","requestId":"${apigateway.AccessLogField.contextRequestId()}","httpMethod":"${apigateway.AccessLogField.contextHttpMethod()}","path":"${apigateway.AccessLogField.contextPath()}","resourcePath":"${apigateway.AccessLogField.contextResourcePath()}","status":${apigateway.AccessLogField.contextStatus()},"responseLatency":${apigateway.AccessLogField.contextResponseLatency()},"traceId":"${apigateway.AccessLogField.contextXrayTraceId()}"}`),
  dataTraceEnabled: false,
  tracingEnabled: true,
  metricsEnabled: true,
}
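For reference, an access-log format like the one above makes each request emit a single structured line, along these lines (field values here are made up for illustration):

```json
{
  "requestTime": "19/Mar/2021:04:32:10 +0000",
  "requestId": "0f1e2d3c-4b5a-6978-8a9b-0c1d2e3f4a5b",
  "httpMethod": "GET",
  "path": "/prod/items",
  "resourcePath": "/items",
  "status": 200,
  "responseLatency": 42,
  "traceId": "Root=1-5e1b4151-5ac6c58f5dbcb48ed24ad6b4"
}
```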
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
very nice! yeah, alex debrie writes some of the best posts on this stuff. definitely my go-to when i’m scratching my head on how it works
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Just curious if anybody has tried to visualize AWS regions in something like PowerBI or Grafana (or the AWS analog, QuickSight?). PowerBI mentions ShapeMaps, but those need something called a shapefile or TopoJSON. Anybody tried this before?
![MattyB avatar](https://secure.gravatar.com/avatar/ff034363a31c46cbb9df6b6b2a8c82ae.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I’m not sure I’m following when you say visualizations of AWS regions - do you mean map out AWS resources for individual regions given a data set? I used CloudMapper over a year ago just to get an overview, I’m not sure if it meets your use case. https://github.com/duo-labs/cloudmapper
CloudMapper helps you analyze your Amazon Web Services (AWS) environments. - duo-labs/cloudmapper
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
actually simpler than that - I don’t really need resource listings at all. I already have a table of items + the regions they’re in (ap-east-1, us-east-1, ca-central-1, etc.), but if I plug those items into a PowerBI map visual, it doesn’t give out much useful information. I’m hoping somebody has gone ahead and generated a simplified globe with the various regions or zones out there.
![sheldonh avatar](https://secure.gravatar.com/avatar/b909e5a82474e9853ff6a6c6111cf0cf.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0020-72.png)
Pretty sure you’d need to get something like zip codes or similar to then map to a specific location in PowerBI, if the geographic stuff requires that. eu-west-1 would need to be mapped to something for PowerBI
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
Thanks, that makes sense. I can certainly go about setting this up - just was curious if there was already a mapShaper file out there that somebody’s already done. =]
2021-03-20
2021-03-21
2021-03-22
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Anyone experiencing DNS issues with the AWS Console today?
![Victor Grenu avatar](https://secure.gravatar.com/avatar/49acff62a9be5ac6709f724dab346909.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0010-72.png)
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Hahah try not to use it as well, except when you need screenshots for SOC2 compliance purposes.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
I’m oddly getting DNS_PROBE_FINISHED_NXDOMAIN for both signin.aws.amazon.com AND status.aws.amazon.com.
![Vlad Ionescu (he/him) avatar](https://avatars.slack-edge.com/2020-10-03/1417676895681_ea45b3f22e5fea04f2fc_72.png)
Same error here in Romania, using default ISP DNS, Google, and Cloudflare DNS
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah great — Thankful I’m not alone!
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
stupid CLI question: I’m creating an SSH key and want to tag my resources in the same command. I do this with the --tag-specifications flag.
aws ec2 create-key-pair \
--key-name bastion-ssh-key \
--tag-specifications 'ResourceType=key-pair,Tags=[{Key=deployment:environment,Value=sbx},{Key=business:steward,[email protected]},{Key=security:compliance,Value=none}]'
. . .
How can I split the tags into multiple lines per tag? I’ve tried a few different ways and the CLI keeps complaining to me. Seems you have to put this in a single line. Not opposed, but this just makes it unreadable.
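One workaround that keeps each tag on its own line (a sketch; the steward address below is a hypothetical stand-in for the redacted one) is to assemble the --tag-specifications value in a shell variable first and pass it as a single argument:

```shell
# Build the tag specification across several lines for readability.
# Tag keys/values mirror the example above; the steward email is hypothetical.
tags='ResourceType=key-pair,Tags=['
tags="${tags}{Key=deployment:environment,Value=sbx},"
tags="${tags}{Key=business:steward,Value=someone@example.com},"
tags="${tags}{Key=security:compliance,Value=none}"
tags="${tags}]"

echo "$tags"
# Then pass the assembled string as one argument:
# aws ec2 create-key-pair --key-name bastion-ssh-key --tag-specifications "$tags"
```

The CLI still receives the whole specification as a single token, so it parses exactly as the one-liner does.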
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
@mikesew you may want to consider creating a JSON template that you can import into the command; it will make things much neater!
$ aws ec2 create-key-pair --generate-cli-skeleton
{
    "KeyName": "",
    "DryRun": true,
    "TagSpecifications": [
        {
            "ResourceType": "snapshot",
            "Tags": [
                {
                    "Key": "",
                    "Value": ""
                }
            ]
        }
    ]
}
![mikesew avatar](https://secure.gravatar.com/avatar/735f27b55681e06ef0dcbc0ab146cd49.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
then:
aws ec2 create-key-pair --cli-input-json FILE
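A filled-in input file for this case might look like the following (values are hypothetical; note the skeleton’s default "ResourceType": "snapshot" needs changing to key-pair, and "DryRun": true should be removed or set to false):

```json
{
    "KeyName": "bastion-ssh-key",
    "TagSpecifications": [
        {
            "ResourceType": "key-pair",
            "Tags": [
                { "Key": "deployment:environment", "Value": "sbx" },
                { "Key": "security:compliance", "Value": "none" }
            ]
        }
    ]
}
```

The file path needs the file:// prefix, e.g. aws ec2 create-key-pair --cli-input-json file://create-key-pair.json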
2021-03-24
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Anybody have an example of a WAF v2 Rule that blocks requests made over the http protocol? I’m figuring I’m looking for SingleHeader, but not sure if I should be looking for protocol, http.protocol, or X-Forwarded-Proto, or if I’m totally off base
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Sorry to ask the dumb question when I’m sure you have already thought about it, but you can’t do that at your LB layer by redirecting or not opening up port 80?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I want it to redirect by default, but I want to drop non-secure requests to specifically an authorization endpoint
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
you’re good to ask the question, assume nothing
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Gotcha. I believe WAF is typically the first in the chain, so I would assume you wouldn’t want X-Forwarded-Proto.
This might be one where you need to set up rules for all 3 options and make them COUNT instead of BLOCK and then watch your metrics.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I mean the Rule itself is not that hard, it’s just figuring out if there is a header that I can use for the protocol.
Otherwise I’m just going to use Origin BEGINS_WITH http:// && Host EQUALS <xxx> && Path EQUALS /auth
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I put a support request in, but was hoping somebody might have run into this
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah gotcha. Yeah, I’m not 100% sure. Support should be able to figure that out for you or you can try a few things.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
yeah, it’s not a high priority issue so might as well give them a shot rather than keep guessing headers…which I’ll do if I have to
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Yeah
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
According to AWS Support, blocking by protocol can’t be done at the WAF and should be done at the ALB - so back to your original suggestion. Kinda sucks because of the rule limit, but makes sense.
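A sketch of that ALB-side approach in Terraform (resource names and the port-80 listener reference are hypothetical): on the HTTP listener, return a fixed 403 for the auth path while the listener’s default action keeps redirecting everything else to HTTPS:

```hcl
# Hypothetical rule on an assumed port-80 listener: plain-HTTP requests to
# /auth get a fixed 403 instead of the default redirect-to-HTTPS action.
resource "aws_lb_listener_rule" "block_http_auth" {
  listener_arn = aws_lb_listener.http.arn # assumed HTTP :80 listener
  priority     = 10

  action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/plain"
      message_body = "HTTPS required"
      status_code  = "403"
    }
  }

  condition {
    path_pattern {
      values = ["/auth"]
    }
  }
}
```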
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Ah that sucks, but at least you know the path forward.
2021-03-25
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
![attachment image](https://d2908q01vomqb2.cloudfront.net/827bfc458708f0b442009c9c9836f7e4b65557fb/2020/06/03/Blog-Post_thumbnail.png)
With the latest release, you can get connected with AWS SSO in the AWS Toolkit for VS Code. To get started you will need the following prerequisites: Configured single sign-on by enabling AWS SSO, managing your identity source, and assigning SSO access to AWS accounts. For more information, see Using AWS SSO Credentials docs as […]
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
With aws-vault’s metadata service, not sure how useful this toolkit is in this context
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
I love this tool for quickly checking CloudWatch logs from VSCode, a killer feature for me
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
I’ll give it a go then! Thanks for sharing
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
AWS Gurus: How can I read secrets from AWS Secrets Manager inside an EKS pod?
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
or parameter store
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
client wants the secrets to be encrypted with a KMS key; they don’t want to use Vault or SecretHub
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Why not use the external secrets operator?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Integrate external secret management systems with Kubernetes - external-secrets/kubernetes-external-secrets
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
(It was made by GoDaddy)
![Mohammed Yahya avatar](https://avatars.slack-edge.com/2020-12-17/1590276740676_9fdeb6c9ef89d13e6414_72.png)
Thanks, will take a look. Are the secrets created by this external operator encrypted?
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
yes, if you enable encryption at the EKS layer, which our module supports
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
(uses KMS)
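For reference, a minimal ExternalSecret manifest for the GoDaddy operator might look like this (a sketch based on the kubernetes-external-secrets project’s documented schema; the secret names are hypothetical):

```yaml
# Pulls a Secrets Manager value into a Kubernetes Secret named my-app-secret.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-app-secret
spec:
  backendType: secretsManager
  data:
    - key: prod/my-app/db-password   # Secrets Manager secret name (hypothetical)
      name: dbPassword               # key in the resulting Kubernetes Secret
```

The resulting Secret is then encrypted at rest like any other, i.e. with KMS if EKS envelope encryption is enabled.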
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
nice, this might make step functions rather a lot easier to define and use… https://aws.amazon.com/about-aws/whats-new/2021/03/aws-step-functions-adds-tooling-support-for-yaml/
![RB avatar](https://avatars.slack-edge.com/2020-02-26/958727689603_86844033e59114029b3c_72.png)
edit: nvm, now the link is working
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
anyone tried it yet? https://aws-workbench.github.io/
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
This isn’t actually made by AWS, correct? I’m confused on that point.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
yeah it looks like an independent tool
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Yeah — I can’t see using it. This goes into the same reasons why I wouldn’t build a mobile app using a “build your mobile app using this fancy UI” tool: Things break down once you want to make things unique for the product or business.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
yeah. i have seen the same for circuit design tools… basically you lay out the blocks and the connections and the tool generates the code. but then to really tweak things you have to tweak the code…which breaks the connection to the UI stuff.
![this](/assets/images/custom_emojis/this.png)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
i would rather go the other way: here’s my code, please diagram it
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Yeah, that’s the more feasible approach.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Regardless, I think we need to accept that machines don’t know enough about what we’re going to be building to ever do these types of jobs well enough beyond simple examples / the most boilerplate usage.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
i know there’s an AWS tool that kinda does that… can’t recall the name off the top but last i looked at it, you had to spin up some cloudformation and point it at the account to read the resources.
I guess there is also terraformer, but that is a resource -> code tool.
![Matt Gowie avatar](https://avatars.slack-edge.com/2023-02-06/4762019351860_44dadfaff89f62cba646_72.jpg)
Yeah, AWS has a CloudFormation builder tool that I’ve seen once that is somewhat UI driven, but then you’re dealing with CF and
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I think if your organization was going to stick with provided Solutions Constructs, something like this might be possible…but if your org was that simple, then there’s probably a SaaS solution out there for you
![Mahmoud avatar](https://avatars.slack-edge.com/2021-01-27/1680239851285_942a37f151b26ceec5ef_72.jpg)
Would anyone happen to have some CLI command for getting all load balancers with target groups with no listeners? We’re looking to clean up our dangling LBs, but I’m not super experienced with the AWS CLI. https://www.cloudconformity.com/knowledge-base/aws/ELBv2/unused-load-balancers.html# I’ve mostly been following the steps here, but I’m attempting to create some kind of command to get all the ones with no target instances
Identify unused Elastic Load Balancers (ELBv2) and delete them in order to reduce AWS costs.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
@Mahmoud this worked for me:
echo -e "LoadBalancer\tListenerCount"
for i in $(aws elbv2 describe-load-balancers --query="LoadBalancers[].LoadBalancerArn" --output=text);
do
echo -e "$(echo ${i} | cut -d/ -f 3)\t$(aws elbv2 describe-listeners --load-balancer-arn=${i} --query='length(Listeners[*])')"
done | tee report.txt
![Mahmoud avatar](https://avatars.slack-edge.com/2021-01-27/1680239851285_942a37f151b26ceec5ef_72.jpg)
Thank you, this is perfect!
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
ahh, reviewing your request: this counts the listeners… not the target groups. let me update in a sec
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
what you want is much easier! just one CLI call:
aws elbv2 describe-target-groups --query="TargetGroups[].{Name:TargetGroupName, LoadBalancerCount:length(LoadBalancerArns[*])}" --output=table
![Mahmoud avatar](https://avatars.slack-edge.com/2021-01-27/1680239851285_942a37f151b26ceec5ef_72.jpg)
This is really good and also outputs a bunch of resources we need to clean up, but I think I miswrote my initial question. I need all load balancers with target groups that have no registered targets. Basically, looking to clean up LBs that are in front of nothing.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
hmmm yeah you can extend to see what’s in the target group i think…
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I’ve tried this command
aws elbv2 describe-target-groups --target-group-arns "arrnnnnn"
and it does not show the targets
![Mahmoud avatar](https://avatars.slack-edge.com/2021-01-27/1680239851285_942a37f151b26ceec5ef_72.jpg)
I think you have to use aws elbv2 describe-target-health
and pass in the target group ARN to see the targets.
You can’t get it from describe-target-groups.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
try this one:
echo -e "TargetGroup\tAttachmentCount"
for i in $(aws elbv2 describe-target-groups --query="TargetGroups[].TargetGroupArn" --output=text);
do
echo -e "$(echo ${i} | cut -d/ -f 2)\t$(aws elbv2 describe-target-health --target-group-arn=${i} --query='length(TargetHealthDescriptions[*])')"
done | tee report.txt
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
i will leave it to the reader as an exercise to find the load balancer as well
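A sketch of that exercise (same caveats as the loops above: assumes the elbv2 CLI and working credentials): report each target group’s registered-target count together with the load balancer ARNs it is attached to, so empty groups can be traced back to their LBs:

```shell
# For each target group: name, registered-target count, and attached LB ARNs.
echo -e "TargetGroup\tTargetCount\tLoadBalancers"
for i in $(aws elbv2 describe-target-groups --query="TargetGroups[].TargetGroupArn" --output=text);
do
  count=$(aws elbv2 describe-target-health --target-group-arn="${i}" --query='length(TargetHealthDescriptions[*])')
  lbs=$(aws elbv2 describe-target-groups --target-group-arns "${i}" --query='TargetGroups[0].LoadBalancerArns' --output=text)
  echo -e "$(echo ${i} | cut -d/ -f 2)\t${count}\t${lbs}"
done | tee report.txt
```

Rows with a target count of 0 and a non-empty LB column point at the load balancers fronting nothing.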
![Mahmoud avatar](https://avatars.slack-edge.com/2021-01-27/1680239851285_942a37f151b26ceec5ef_72.jpg)
haha thanks for the assistance
2021-03-26
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Hi. We have a daily process running, of which some jobs started failing about two days ago. Does anybody have an idea what might cause this? We’ve already followed the steps at https://aws.amazon.com/premiumsupport/knowledge-center/batch-job-failure-disk-space/ The AWS Batch job is failing with the error message “CannotPullContainerError: failed to register layer..: no space left on device”. This happens for only some jobs, not all.
I have already created a launch template, given it 500G of storage, and in the user data have set:
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=300G"'
![Joe Hosteny avatar](https://secure.gravatar.com/avatar/851f2d21e357fbb172c3abfc9860d9c5.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
I assume you are attaching that to /dev/xvda?
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
let me double check
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
“/dev/xvdcz”
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
“our AMIs have root as /dev/xvda”
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
mh
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
one of our engs set that up
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
do you think that might be the issue?
![Joe Hosteny avatar](https://secure.gravatar.com/avatar/851f2d21e357fbb172c3abfc9860d9c5.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
It could be. IIRC, Batch used to have a default root partition size of 10 GB (it may have been bumped to 20 GB). If you have a large container (we have an unusually large one), it is possible you are running out of space on the root partition.
![Joe Hosteny avatar](https://secure.gravatar.com/avatar/851f2d21e357fbb172c3abfc9860d9c5.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0015-72.png)
We attach our larger disk directly to xvda
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
![Fabian avatar](https://secure.gravatar.com/avatar/308d00e65ca0f4bb1b90b70c591433aa.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Anyone?
![walicolc avatar](https://secure.gravatar.com/avatar/51411c6c528129b21fd44265ec260c01.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0006-72.png)
2021-03-29
![Ashish Modi avatar](https://avatars.slack-edge.com/2021-03-29/1900446955894_1f4adbb03f61a1880324_72.jpg)
Morning everyone!
I was wondering if it is possible to control the recently announced Auto-Tune feature for AWS Elasticsearch using Terraform? see here - https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/auto-tune.html
Learn how to use Auto-Tune for Amazon Elasticsearch Service to optimize your cluster performance.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
you can check the docs here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
generally, for high profile features it takes a few weeks for support to land. For less popular services, updates can take months or even longer
![Ashish Modi avatar](https://avatars.slack-edge.com/2021-03-29/1900446955894_1f4adbb03f61a1880324_72.jpg)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
@Tim Malone was first to open issue https://github.com/hashicorp/terraform-provider-aws/issues/18421
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Hi all, an external customer wants us to enable an S3 policy on a bucket:
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
{
  "Version": "2012-10-17",
  "Id": "Policy1472487163135",
  "Statement": [
    {
      "Sid": "Stmt1472487132172",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Sid": "Stmt1472487157700",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    }
  ]
}
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I am a bit worried that they could open up the whole bucket as public
![Bart Coddens avatar](https://secure.gravatar.com/avatar/2172a7ffce39295e04ea825a5bc9b0b6.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
but I am not sure, because this policy is locked to the specific principal
![Yoni Leitersdorf (Indeni Cloudrail) avatar](https://avatars.slack-edge.com/2020-08-26/1310888406231_2dc8c60843ac09dc06bb_72.jpg)
You are correct, this policy allows them to modify the policy on the bucket, thereby opening it to the public.
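If that’s a concern, one option (a sketch; trim the action list to whatever the customer actually needs) is to grant object-level and list actions only, instead of s3:*, so the principal cannot rewrite the bucket policy or ACLs:

```json
{
  "Sid": "ObjectAccessOnly",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::1234567:root"
  },
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::BUCKET_NAME",
    "arn:aws:s3:::BUCKET_NAME/*"
  ]
}
```

IAM matches each action against each resource, so s3:ListBucket applies to the bucket ARN and the object actions to the /* ARN; crucially, s3:PutBucketPolicy and s3:PutBucketAcl are no longer granted.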
2021-03-30
![steve360 avatar](https://secure.gravatar.com/avatar/bcecbe9cd057d1ac405be2ded7ae3aa9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
We’re seeing CloudWatch log group load times in the console fluctuate between 5 and 60+ seconds. This started in the last couple of weeks. What’s the most likely cause? A large number of log groups? Long retention? What level of performance can be expected from AWS for under 1k log groups? What can be done to optimize performance?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
just some random thoughts that might help :man-shrugging:
what do you mean by “load times”:
• Listing the Log groups – how many Log Groups do you have? # of metric filters, # of subscriptions?
• Listing the Log streams – how many streams are in the group?
• Opening a Log stream and seeing the messages
Did you have any spike in usage for any of the above? Aka, did you unintentionally create 1000s of new log groups?
Any chance you are closing in on, or at, a usage limit for your account for any of the above? Might need to submit a rate limit increase.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I just used the Network tab in the Developer Tools in Chrome - took me 35s to reload and I have 401 Log groups in said account – it’s probably been like that and I’ve never noticed.
![steve360 avatar](https://secure.gravatar.com/avatar/bcecbe9cd057d1ac405be2ded7ae3aa9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
We haven’t increased log groups significantly. We’re just trying to load a log stream. Performance is erratic: fast in one instance, then slow right after when you refresh. AWS support is working on it now. They say there’s an issue.
![steve360 avatar](https://secure.gravatar.com/avatar/bcecbe9cd057d1ac405be2ded7ae3aa9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
if they give you any meaningful response, please do share!
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I wouldn’t count on it though, probably just a generic “our bad”
![steve360 avatar](https://secure.gravatar.com/avatar/bcecbe9cd057d1ac405be2ded7ae3aa9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Will do. Thanks!
![steve360 avatar](https://secure.gravatar.com/avatar/bcecbe9cd057d1ac405be2ded7ae3aa9.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
no meaningful response as you predicted, but cloudwatch seems to be performing better now. who knows what the problem was.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
the typical MO is not to fully disclose…I think it comes from a paranoia of sharing some of the secret sauce
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Hey team! Question:
TLDR: is there a good way to connect to a private AMQ without SSL?
Details: I’m setting up an environment that uses Amazon MQ and I’d like to keep the service private (along with the rest of the resources). To that end, everything that needs to be private is sitting in a private subnet of the VPC: ECS, RDS, etc.
Because everything is private, I’m using the IP address to connect to the AMQ endpoint. However when i connect, the app fails with an SSL error:
cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs
This leads me to think that the connection is SSL and the cert that AWS is serving up doesn’t have the IPs in it, but rather the DNS name of the MQ instance.
Googling for a solution, I found one doc that recommended putting an NLB in front of the AMQ and connecting to that, but it seems (to me) that the connection might still fail; what about SSL validation between the ALB and the NLB? This solution also seems over-engineered and potentially expensive given the addition of the NLB on top of the AMQ instance(s). https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2019/09/10/Solution-overview.png
Anyway, just thought I would share this in the event someone has seen this type of issue before and knows a choose_one(good/reliable/affordable) solution for this problem. Cheers!
![attachment image](https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2019/09/10/Solution-overview.png)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I should add that unlike the diagram, my connection to AMQ is coming from inside the VPC from the same private subnet as the AMQ.
![attachment image](https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2019/09/10/Solution-overview.png)
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
I don’t use AMQ, but I am guessing it provides a DNS hostname as the endpoint, rather than a private IP address. Why not use that hostname?
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I’ve set the service to not be publicly accessible. Indeed there is a DNS name that comes with it and i was configuring the app to use that but it was timing out. i attributed that to the fact that MQ was not publicly accessible.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
this page suggests the hostname is like
<https://b-1234a5b6-78cd-901e-2fgh-3i45j6k178l9-1.mq.us-east-2.amazonaws.com:8162>
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
I am thinking now, though, that I might have to do some sort of private DNS. but then, if I do that, I’m not sure that the cert would still resolve.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
yes, my host name is just like that
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
ok, well you won’t need private DNS, since that will be a public DNS record. You should be able to resolve it from your PC, with host [blah.amazonaws.com](http://blah.amazonaws.com)
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
Private, using the DNS gives:
lookup b-7898a321-eac9-4db7-9d25-0ae2f020dabf.mq.us-west-2.amazonaws.com on 10.0.0.2:53: no such host
Private, using the IP gives:
cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
can you SSH to one of your client EC2 instances?
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
well the SGs are locked down to those in use by the services. I could do a bastion and connect from there. no EC2s at the moment. Just ECS.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
so, I think you need to verify the hostname you are using
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
yeah. at the moment I’m going to try opening up the AMQ just to see if i can get it connected. putting the MQ in a public subnet. i hate doing that but i just need a POC at the moment.
“I’ll lock it down later” - Famous last words
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
the fact it can’t resolve suggests a typo to me
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
because i’m 99% sure it will be a public hostname you can resolve from anywhere on the internet
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
no typos. i am reading the info from SSM. the TF code writes the MQ host’s DNS name into a parameter and the ECS service reads it from there.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
this is me trying out different methods:
```hcl
resource "aws_ssm_parameter" "amqp_host" {
  name        = "/${var.name}/${var.environment}/AMQP_HOST"
  description = "${var.name}-${var.environment} AMQP_HOST, set by the resource"
  # value     = "amqps://${aws_mq_broker.amq.id}.mq.${var.aws_region}.amazonaws.com:5671"
  value       = "amqps://${aws_mq_broker.amq.instances.0.ip_address}:5671"
  type        = "String"
  overwrite   = true
  ...
```
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
can you resolve the hostname yourself from your local machine?
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
no. not when this is set to false:
```hcl
resource "aws_mq_broker" "amq" {
  ...
  publicly_accessible = true # false
  ...
}
```
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
i’m going to try a config with it set to true. the MQ resources take 10-15 minutes to destroy and rebuild so it will be a few before i can report back
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
that’s interesting. I would have thought a hostname ending in amazonaws.com would always be resolvable. Maybe you can check the lookup with `dig +trace host.amazonaws.com`. I know that some DNS servers will refuse to resolve a public hostname that points to internal IP addresses, for security reasons. It might be that this is why you can’t resolve it from your laptop when `publicly_accessible = false`. But `dig +trace` is low-level enough to ignore that rule.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
yeah that might make sense. i didn’t check to see the IP that the hostname resolved to when i had it config’d as private.
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
ahh i did check it. it was a 10.x IP address, which is indeed not internet routable. so yeah, kinda confusing: I have the DNS name, it points to the right place (an internal IP), but the cert has the public name… which can’t be resolved?
I will check the app as well. there might be a way to get it working from the back end with everything private, if I can skip the SSL validation and just connect.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
One other thing the docs suggest:
To ensure that your broker is accessible within your VPC, you must enable the `enableDnsHostnames` and `enableDnsSupport` VPC attributes
Explains the workflows of creating and connecting to an ActiveMQ broker
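Those two attributes correspond to arguments on the VPC resource in Terraform. A minimal sketch, assuming the VPC is managed in the same config (resource name and CIDR are hypothetical):

```hcl
# Both flags must be true for the broker's DNS name to resolve inside the VPC.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true # enableDnsSupport
  enable_dns_hostnames = true # enableDnsHostnames
}
```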
![managedkaos avatar](https://secure.gravatar.com/avatar/f7d88a7a95990c984ab107b491b51b3f.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0025-72.png)
cool i will definitely look into that!
2021-03-31
![Alencar Junior avatar](https://avatars.slack-edge.com/2020-10-05/1407799829122_3931e85fd61a9272f913_72.jpg)
I’m providing authorization to an API Gateway (proxy integration) with Cognito, and I have a Lambda function (dockerized) requesting the API endpoint https://{id}.execute-api.{region}.amazonaws.com. I would like to know if it is possible to allow any resource within AWS, including my dockerized Lambda functions, to access the API without authentication? Currently getting the response `{"message":"Unauthorized"}`
Note: That’s a public API since I have external apps requesting it.
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
yes, you need to use the IAM authentication method: https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
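In outline, that approach means setting the method’s authorization to `AWS_IAM` (authorization is per method, so the routes used by external apps can stay on Cognito) and granting the Lambda’s execution role permission to invoke the API. A sketch in Terraform, where the role reference, account ID, and API ID are placeholders:

```hcl
# Lets the Lambda's execution role call any stage/method/path on the API.
# The Lambda must then SigV4-sign its requests to the endpoint.
resource "aws_iam_role_policy" "invoke_api" {
  name = "invoke-api-gateway"
  role = aws_iam_role.lambda_exec.id # hypothetical execution role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "execute-api:Invoke"
      Resource = "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*"
    }]
  })
}
```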
![Alencar Junior avatar](https://avatars.slack-edge.com/2020-10-05/1407799829122_3931e85fd61a9272f913_72.jpg)
Thanks @Alex Jurkiewicz!
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Does anybody know if there is an AWS-provided SSM parameter for the ELB account ID, similar to the SSM parameters they provide for AMI IDs?
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
I put a support case in too - I’ll update if I hear anything
![Alex Jurkiewicz avatar](https://avatars.slack-edge.com/2020-09-08/1346106958085_9b44ddacd6267cc803c8_72.jpg)
if you’re using terraform, it provides them as a data source
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
AWS Support confirmed that there is not currently an SSM Parameter that I could use for this.
My choices are to either create SSM Parameters (which I’m considering) or use a map in my CloudFormation templates.
![Darren Cunningham avatar](https://secure.gravatar.com/avatar/d0ea359c3ff6b8093ae53e57fbbe2570.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
That TF data source just maintains a map https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/data_source_aws_elb_service_account.go
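For anyone landing here, using that data source is one line. A sketch of wiring it into a bucket policy for ALB access logs (the bucket name is hypothetical):

```hcl
# Resolves to the ELB service account for the provider's current region.
data "aws_elb_service_account" "main" {}

data "aws_iam_policy_document" "lb_logs" {
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::my-lb-logs-bucket/*"] # hypothetical bucket

    principals {
      type        = "AWS"
      identifiers = [data.aws_elb_service_account.main.arn]
    }
  }
}
```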