#aws (2021-03)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2021-03-01
I am trying to build a Kinesis autoscaler Lambda based on existing code; basically it updates the shard count based on an incoming-records alarm metric. During testing I noticed something odd when using AWS CLI commands to get the number of shards. Basically describe-stream-summary says the OpenShardCount is one, which seems like the right answer, but describe-stream and list-shards report there are 4 shards. Which is correct? Why are they not consistent? Hope there is a Kinesis expert here who can explain what is going on, thanks
I think I understand: the extra shards are closed but not yet expired, so they are still readable but not writable because of the retention period
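For reference, a quick way to see the difference from the CLI (the stream name is a placeholder):
# OpenShardCount only counts shards that are still writable
aws kinesis describe-stream-summary --stream-name my-stream \
  --query 'StreamDescriptionSummary.OpenShardCount'
# list-shards (and describe-stream) also return closed shards that are still within
# the retention period; closed shards carry an EndingSequenceNumber
aws kinesis list-shards --stream-name my-stream --query 'length(Shards)'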
Right. Not all shards are open. Anyway, there are off-the-shelf solutions for auto-scaling Kinesis streams, I would highly recommend using them instead of writing your own: https://aws.amazon.com/blogs/big-data/scaling-amazon-kinesis-data-streams-with-aws-application-auto-scaling/
Recently, AWS launched a new feature of AWS Application Auto Scaling that let you define scaling policies that automatically add and remove shards to an Amazon Kinesis Data Stream. For more detailed information about this feature, see the Application Auto Scaling GitHub repository. As your streaming information increases, you require a scaling solution to accommodate […]
@Alex Jurkiewicz Thanks, I based my solution on that code https://github.com/aws-samples/aws-application-auto-scaling-kinesis from the article you linked to, but found issues with both the CloudFormation and the Python Lambda code. So I am improving it to make it more production-ready.
Leveraging Amazon Application Auto Scaling you have now the possibility to interact to custom resources in order to automatically handle infrastructure or service resize. You will find a demo regar…
2021-03-03
I have CF with an S3 origin, the origin has origin_path = “/build”, and CF has its first behavior as “/url/path/*”.
I get The specified key does not exist error, and the Key ends up being /build/url/path/index.html
I can access the files from the root of the CDN but not from my path pattern
Do I have to have the origin folder structure (S3) match my behavior path?
Yes
this client's Jenkins S3 plugin does not allow me to do that
You could rewrite the path with a lambda@edge function
that's where I'm at now
still need a lambda, because if it's in the folder it has no idea what to do with index.html
it's fine, I'm done
2021-03-04
I am a bit puzzled by a network issue
Machine has two firewall groups assigned: outbound = all open, inbound = SSH open from my IP
I can access ssh from my workstation
Making an SSH connection FROM the instance does not work
when I tcpdump my traffic, I can see traffic going out of the machine
ha I found it, the configuration of the firewall group changed a bit
Have you checked with this tool?
https://aws.amazon.com/blogs/aws/new-vpc-insights-analyzes-reachability-and-visibility-in-vpcs/
With Amazon Virtual Private Cloud (VPC), you can launch a logically isolated customer-specific virtual network on the AWS Cloud. As customers expand their footprint on the cloud and deploy increasingly complex network architectures, it can take longer to resolve network connectivity issues caused by misconfiguration. Today, we are happy to announce VPC Reachability Analyzer, a […]
ha interesting
I checked it out recently. Worked pretty well.
Woooohooo! So simple, and now it's finally there. I shouldn't be this happy about it, but every time I set up ELK on AWS (soo many times) I check if it's available, and here it is. Amazon Elasticsearch Service now supports rollups, reducing storage costs for extended retention*
Can we use this instead of the lambda we have to purge old log indexes?
Hmm, rollups would be something different. Aggregating old data into new index with lower data resolution.
I think you mean curator
lambda. Recently they also introduced Index State Management (ISM). I haven't used it, but it seems possible with that, although it's not as robust as curator.
This policy from the docs removes replicas after 7d and then deletes the index after 21d
{
  "policy": {
    "description": "Changes replica count and deletes.",
    "schema_version": 1,
    "default_state": "current",
    "states": [{
      "name": "current",
      "actions": [],
      "transitions": [{
        "state_name": "old",
        "conditions": {
          "min_index_age": "7d"
        }
      }]
    },
    {
      "name": "old",
      "actions": [{
        "replica_count": {
          "number_of_replicas": 0
        }
      }],
      "transitions": [{
        "state_name": "delete",
        "conditions": {
          "min_index_age": "21d"
        }
      }]
    },
    {
      "name": "delete",
      "actions": [{
        "delete": {}
      }],
      "transitions": []
    }
    ]
  }
}
2021-03-05
hi guys, does anyone know how to create “trusted advisor” in terraform?
see this https://github.com/aws/Trusted-Advisor-Tools and implement it using Terraform, you can create a module and publish it also
The sample functions provided help to automate AWS Trusted Advisor best practices using Amazon Cloudwatch events and AWS Lambda. - aws/Trusted-Advisor-Tools
you need to define these in Terraform
thanks a lot for your help bro!
can we upgrade the version of CloudFront's security policy in terraform?
Hi everyone, Let’s say that I have a terraform setup with an rds instance. After a while, I want to restore to a given point in time through a snapshot that i’m creating every day. Given that AWS limits to restoring the snapshot to a NEW instance, how can I still control this new instance using terraform? what’s the correct process to have here?
you can use the cloudposse terraform-aws-rds-cluster module and just create a clone cluster from a snapshot
or the rds instance module
and then you switch endpoints in your app, or use Route 53 records that are CNAMEs to the real endpoints
or use rds proxy in front and change the endpoints of the proxy to point to the new instance/cluster
We have a task that saves the db to a secure S3 bucket so we don’t have to rely on those rules. Once you set it up, its really not that hard to maintain.
(in sql format)
Restore manually and then import the resource
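For reference, a rough sketch of that flow with the CLI and Terraform; the instance identifiers, timestamp, and resource address are placeholders:
# AWS only restores to a new instance, so restore and then import it
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier mydb \
  --target-db-instance-identifier mydb-restored \
  --restore-time 2021-03-05T10:00:00Z
# once the new instance is available, bring it under Terraform management
terraform import aws_db_instance.mydb_restored mydb-restored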
Is there a guide online on maintaining such a database in a production environment?
I feel that saving to S3 can cause data failures, depending on what happened in the db during the backup/restore
Like, let's say that you get 24/7 traffic consistently, hundreds of operations a second
How can I restore to a point in time without losing info?
it should do it in a transaction
and notify if it fails
I dunno, it's based on use case, and also you gotta weigh convenience (S3) against reliability (images)
we maintain a large production application for a very large car company and we save db backups to s3
never had issues
And in order to restore the db in place, you delete everything and pg_restore it?
have you tried a clone?
a 600 GB db takes about 5 min to clone
in Aurora
it is pretty fast, now from a snapshot takes longer
What i’m thinking about is the mutations that happen during this kind of backup. Where do they go? Assuming that I dont have a hot backup or some complicated setup
what do you mean? when a snapshot is issued any new transaction after the snapshot is not recorded in the snapshot
Maybe I missed on the “clone” part. Is that a feature of rds?
The snapshot is an instant of time from when the snapshot started. If your snapshot starts at 10am it will be a copy of your database as of 10am. Even if the snapshot takes 15 mins to create there will be no data from after 10am
@Alex Jurkiewicz thats also the case for a pg_dump?
Sure but I wouldn’t use that for a real database. The restore time is unworkably slow
On the same note, is there a good reason to use aurora compatible with postgres over simply rds?
I'm curious, what are you trying to guard against on a production database by having these backups run so often?
Given a single-instance setup, if there's an issue that requires a restore of a backup, the data between the last snapshot and the current time will be lost. The solution is obviously to use some sort of cluster, but I'm trying to see all options in advance
aurora storage layer is the magic behind aurora and is VERY fast
if you need replication of transactions then you need a cluster and a replica cluster
Maybe aurora solves it for me.. seems like it stores data on s3 and enables in place restore to a point in time
Most companies/products find the trade-off of risk of data loss low enough that losing some data due to periodic backups to be acceptable. I’m not saying your product is also like this. But if you are looking for higher availability/disaster recovery guarantees, it is going to cost you a lot, in both time and operational complexity. I suggest you consider carefully how important going above and beyond the standard tooling is for your product.
Also, if you are a company with these higher than normal requirements, you would have an Amazon account rep, and they would be very happy to organise you many presentations from the RDS team about all the many ways to give them more money. You should take advantage of that
Might as well just use https://litestream.io huh
Litestream is an open-source, real-time streaming replication tool that lets you safely run SQLite applications on a single node.
I love the simplicity behind it
shamelessly asking for upvotes here https://github.com/99designs/aws-vault/pull/740
tldr, we figured out a way to plist the aws-vault --server https://gist.github.com/nitrocode/cd864db74a29ea52c7b36977573d01cb
Closes #735 Thanks to @myoung34 for most of the help in adding the --no-daemonize switch. This allows the --server to be nohupped. $ make aws-vault-darwin-amd64 $ nohup ./aws-vault-darwin-amd64 \ …
Anyone know why AWS doesn't have a default IAM policy for “ecs read only”? I have to create one just for this… ¯_(ツ)_/¯
2021-03-07
Hi Guys - Is there a way to get the AWS Organisation ID (unique identifier) via AWS CLI / API?
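For what it's worth, a sketch of one way to get it with the CLI, assuming the caller has the organizations:DescribeOrganization permission:
# returns the o-xxxxxxxxxx identifier of the organization the account belongs to
aws organizations describe-organization --query 'Organization.Id' --output text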
2021-03-08
I am running into some difficulties with provisioning a Windows EC2 instance with a PowerShell script, which is passed into an aws_launch_configuration as such:
user_data_base64 = base64encode(data.template_file.powershell.rendered)
The script is also quite simple, it downloads a .exe from the internet and then starts a silent install with Start-Process:
<powershell>
Start-BitsTransfer -Source ...
Start-Process ..
</powershell>
This is my first time working with PowerShell and provisioning Windows EC2s, so I may be missing something, but when I RDP into the machine the executable is neither downloaded nor installed.
If I paste the contents of the PowerShell script into PowerShell on the instance, however, it works as expected.
2021-03-10
Does anyone have a list of common dns names for aws services? I am trying to get a feeling for their patterns
you mean service endpoints? https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html
See the service endpoints and default quotas (formerly known as limits) for AWS services.
That's already quite useful, thank you! However, I was actually wondering about the DNS entries for the instances that customers receive, i.e. for RDS or MSK etc.
the hostnames for resources are all highly service-specific. There are no patterns, really
Do you know of an overview list in general? Working on some DNS naming schemes for a service and was thinking of getting inspired by AWS
I don’t think they provide a list
can anyone help with this please? It's like RDS has only partially completed the upgrade from 5.6 to 5.7
is there a way to force the pending modifications now instead of waiting for the maintenance window?
yes, ‘apply immediately’
it’s an option you can pass when modifying an rds instance/cluster. Either via web console or api
i am trying to set that now
but it won't let me set it
i can see the pending modifications via the API but can’t seem to apply them
are you passing apply immediately and a change to the config? You can’t pass only ‘apply immediately’ with no changes
I don't want to make any changes though, I want it to apply the pending mods, e.g. the upgrade to 5.7
you need to re-submit the pending modification with apply_immediately set to true
when you submit a change with that flag, all pending modifications are immediately applied
An error occurred (InvalidParameterCombination) when calling the ModifyDBInstance operation: Current Parameter Group (de-prd-yellowfin-01-20210219152210698600000003) is non-default. You need to explicitly specify a new Parameter Group in this case (default or custom)
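For reference, a rough sketch of re-submitting the pending change with the flag set; per that error you also have to name a 5.7-compatible parameter group explicitly (all identifiers below are placeholders):
aws rds modify-db-instance \
  --db-instance-identifier <your-instance> \
  --engine-version <target-5.7-version> \
  --db-parameter-group-name <your-mysql57-parameter-group> \
  --apply-immediately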
Has anyone enforced IMDSv2 on their instances and had problems with cloud-init not starting?
I think a co-worker just hit this
When working with instance user data, keep the following in mind:
Has a fix in similar to this I believe
the doc does not describe how cloud-init will deal with the generation of the token, which is the problem
in my user data I can modify the script and add those calls
but when I was testing it was cloud-init without user-data complaining about it
as you can see here :
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/user-data
they call the api to get the user-data and the user data does not have the token call
so it is pretty damn confusing
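For reference, IMDSv2 enforcement is toggled per instance like this (the instance ID is a placeholder); raising the hop limit to 2 is often needed so containerized workloads can still reach the token endpoint:
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 2 \
  --http-endpoint enabled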
2021-03-11
2021-03-14
FYI, https://pages.awscloud.com/pi-week-2021 is happening! It will be a fun one. A bunch of S3 + Data in general + some Serverless
Register Now
2021-03-15
I want to host a single file over HTTPS with a custom domain. Is there a simpler solution than S3 bucket + CloudFront + ACM cert? Simpler meaning serverless, no ec2 + nginx in user-data solutions
Amplify Console which is basically S3+CF+ACM+CI/CD+others? It’s easier to manage, but no Terraform support yet
AWS Amplify offers a fully managed static web hosting service that accelerates your application release cycle by providing a simple CI/CD workflow for building and deploying web applications.
as far as AWS, it's probably S3 or EC2 like you mentioned
you could use a lambda with alb too
GitHub Pages is what I'd go for at this point. I've used Netlify as well; it worked really well and was free (but I still prefer GitHub Pages)
someone appears to have published and then retracted this post (but it popped on my AWS News RSS), so I think we’re going to see Fargate exec tool soon! https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-exec[…]commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/
Today, we are announcing the ability for all Amazon ECS users including developers and operators to “exec” into a container running inside a task deployed on either Amazon EC2 or AWS Fargate. This new functionality, dubbed ECS Exec, allows users to either run an interactive shell or a single command against a container. This was one of […]
wow, that's a neat feature for debugging ECS
2021-03-16
Interesting, I ended up git cloning requests when virtualenv didn't work. Anyone encountered this before?
[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'requests'END RequestId: 7ed24ad6-1b95-4600-9a35-d379726f6b47
Your code package didn’t include requests
I did install it via pip in the virtualenv
Followed by a pip freeze to generate the requirements file
How did you upload your code to lambda? Directly as a zip?
zip -r9 fuckingWork.zip .
aws s3 cp fuckingWork.zip s3://bucketName
Like dat
Run zipinfo on the zip file and verify it contains requests
You can also extract the zip in a clean docker image for python and see if "import requests" works
I'll go with zipinfo, it's cleaner
thanks my man!
FYI previously requests was available via ‘botocore.vendored’ package. It was deprecated in january and removed https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/
Update (January 19, 2021): The deprecation date for the Lambda service to bundle the requests module in the AWS SDK is now March 31, 2021. Update (November 23, 2020): For customers using inline code in AWS CloudFormation templates that include the cfn-response module, we have recently removed this module’s dependency on botocore.requests. Customers will need […]
Cheers man, I came across this whilst debugging. I'm not using this module in particular so dismissing it was easy
Turned out it was just a recursive issue with the zipped file. Thx all.
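For reference, a minimal packaging flow that avoids the nested-folder problem, assuming lambda_function.py sits in the current directory (file and bucket names are placeholders):
pip install -r requirements.txt --target ./package
cd package && zip -r9 ../function.zip . && cd ..
zip -g function.zip lambda_function.py
# confirm requests/ is at the root of the archive, not nested inside a subfolder
zipinfo function.zip | grep requests
aws s3 cp function.zip s3://bucketName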
2021-03-17
Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. - salesforce/cloudsplaining
new manager with retail background I guess
does anyone write their lambdas such that they understand a common, fake “test event”? such that you can invoke it with that event just to validate that the package really has the imports it needs?
@loren what do you mean by fake event ?
You just need to create an empty test event/message
And use it as a test event
something like:
{
"test_event": "this is a test"
}
Right…
yeah, i know how… i’m wondering if it’s a pattern others are using or contemplating
I use that for cloud custodian lambdas
In my particular case, I prefer using real events/with real valid payloads
the lambda would check that key and if present run some simple test logic or just return
i also prefer real events, but we have some lambdas where the function makes an aws call using the value from the event. that value is dynamic, and not persistent. for example, at the organization level, a CreateAccountRequest event is generated when a new account is created. i can’t use a “real” event, or i end up doing “real” things to “real” accounts. and i can’t fake the CreateAccountRequest because then the lambda cannot actually get the CreateAccountRequest status
@loren Your lambda functions should be idempotent, meaning that if you execute the same lambda function several times with the same payload, you should get the same result
if only life were so simple
that CreateAccountRequest actually disappears after some time, so we can be idempotent for a while, but eventually the event itself becomes invalid
we do have a valid-ish payload, with just fake data, and currently we catch the exception in the test. if the lambda gets that far, we know the package is good. and we do unit tests on the code so we’re reasonably confident about the code behavior
Ok… that makes sense
but having valid-ish payloads for every event is a real pain to discover and doesn’t scale to hundreds of functions, when the thing i most care about is just validating that the package is actually good
so i was thinking, if i modify every lambda to understand this “fake” test event, and use that to validate the package, i can apply the same test to every lambda
and i can enforce that the lambda understand the test event by running that test for every lambda in CI with localstack
@RB i’m interested in hearing more about your experience with this pattern
i just use a generic test event. the json input doesn't matter with cloud custodian lambdas since they trigger on a cloudwatch cron. so i just use any json to kick off the lambda and see the output to make sure it didn't throw an error
I personally think you should take care of this in the build pipeline.
@maarten can you expand? we are running the tests in the build pipeline…
right, i meant simply running node_with_version_x index.js, that would find bad imports and doesn't execute anything. And otherwise I'm thinking of the serverless toolset to invoke locally, or better even, https://www.serverless.com/blog/unit-testing-nodejs-serverless-jest
Create unit tests for Node.js using the Serverless Framework, run tests on CI, and check off our list of serverless testing best practices.
yeah, this is terraform, so i’m using localstack to mock the aws endpoints… and configuring the provider to use the localstack endpoints
run terraform apply, invoke the lambda, inspect the result to determine pass/fail
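A rough sketch of that check, assuming localstack on port 4566, AWS CLI v2, and a function named my-function (all placeholders):
RESULT=$(aws lambda invoke \
  --endpoint-url http://localhost:4566 \
  --function-name my-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"test_event": "this is a test"}' \
  out.json)
# a broken package surfaces as FunctionError=Unhandled with an ImportModuleError payload
echo "$RESULT" | grep -q FunctionError && { echo "package check failed"; cat out.json; exit 1; }
echo "package check passed"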
Hi guys, what are some top/must-use tools for managing Kubernetes on AWS EKS or other clusters? Any recommendations on best practices?
2021-03-18
Anyone using Cognito's MFA functionality? How do you block the ability to disable a previous MFA setup by calling associate software token again and again? This call can be sent by anyone (it just requires a valid access token), and if it's sent a second time it automatically overrides the previous setup and disables MFA on login.
let me know what support says:)
It's not a bug, it's a feature
You should try to get min. TLS1.2 on cognito :’ -)
the author provided some feedback against it. if anyone is interested in daemonizing aws-vault using launchd, please leave some feedback.
aws-vault: Start metadata server without subshell (non-daemonized)
I am using the latest release of AWS Vault $ aws-vault --version I have provided my .aws/config (redacted if necessary) [profile sso_engineer] sso_role_name = snip_engineer sso_start_url = https://…
interesting point of view, a bit close minded
Is it possible to contract cloudposse’s services through an AWS marketplace private offer?
Sorry for the spam but I feel this is relevant to you folk @Erik Osterman (Cloud Posse)
That’s really interesting. We haven’t pursued it yet.
Hey folks — Is anyone using an external Security Saas product like Fugue or other to replace using AWS Config / SecurityHub? AWS account rep is suggesting we utilize https://www.fugue.co/ and I’d be interested in hearing folks thoughts.
Fugue puts engineers in command of cloud security and compliance.
I played with them before; they partnered with Sonatype to create an IaC offering to check TF code
Fugue created Regula, and they have some ML/engine to check policies and such, and offer IAM management too
recently I have been using Cloud Conformity from Trend Micro
they all feel similar in what they do and give reports on
do they have any value? I do not know, I do not think they add much, and over time Security Hub (Inspector, Config, GuardDuty) is going to eat them alive I think
that is the amazon way
Got it — Thanks for the perspective Pepe. I’m interested because it looks a bit daunting to implement all those tools: Inspector, Config, GD. And if I can skip that for a slight premium… then that’s of interest.
we’re using a managed/bundled version of Prisma Cloud, which is similar I guess to Fugue (from a cursory 5 second google)
primary annoyance is that their rules seem based around using AWS strictly as some sort of internal business network replacement and not running a product
one thing to keep in mind is that all the remediation rules/configs you will need to implement to solve the findings are going to be 80% of the work of setting up Config/CloudWatch/GuardDuty etc, don't fool yourself thinking it's going to be less work
most of these products require Config enabled etc
you will have a warning that will say “Enable GuardDuty”…..
haha yes one of the findings I keep suppressing is “enable config recording for all resources”
I may be extreme in my opinion here, but I honestly think the majority of the focus should go towards IaC scanning. Whether it's Fugue/checkov/tfsec/Cloudrail, the future is in IaC.
The reason is that even when you find something in your live environment, through Fugue, Prisma, Dome9 or AWS’s own tools, no one on your dev team will want to fix it. So you’ll have a nice JIRA ticket sitting there but not moving.
VCs are realizing that there can be billions in fines for bad code and bad security practices
remember Equifax fix was like 15 lines of code and one hour of work
(it could be even less lines I think)
If it’s caught during development, it’s one hour. If caught in a vuln scanning in prod…
exactly, that is why the security scanning of code and infra should happen at build time (shift left)
We are trialling Laceworks at the moment. It’s quite a heavy solution and very far “to the right”, eg it runs in your prod account and picks up errors post deploy. But the coverage is very comprehensive. Not sure if I’d recommend or not yet
check us out, you can reach out in a dm if you want more info, i’m the CTO
Lightspin is a contextual cloud security platform that continuously visualizes, detects, prioritizes, and prevents any threat to your cloud stack
How does everyone handle MFA for root credentials for your AWS accounts (or whatever). Someone had the idea to just use an OTP device and store it in the safe, but that will take 2h+ for anyone local, and if you’re in another state then you’re screwed. A workaround would be to just open a case with amazon to reset MFA which we’re fine with. Search wasn’t super helpful…help, por favor!
We have the QR code for MFA stored in LastPass
That simple … a few people have access to that QR code
LOL of course something that simple would work…thanks
Seems like you’d love to involve the CIA/NSA/FBI and the S.H.I.E.L.D agents to safeguard the QR code
New to this team but I’ll be sure to find the tinfoiled hat guy. There’s always at least one.
LOL
we have h/w tokens at the moment but due to the shift to remote are going to move them to ‘software tokens’ in a password store service
right on, we’re using hashicorp vault for the password part, trying to figure out the second factor https://aws.amazon.com/iam/features/mfa/?audit=2019q1
You can use 1password as an OTP generator by providing it a QR code. Then you can share that OTP generator with your teammates in a shared vault
Ahhh, you can also just grab a physical code from AWS directly… Thanks guys. This was much simpler than I realized it would be. I didn’t know there would be so many options.
2021-03-19
Anybody using API Gateway and figured out a way to consolidate its logs? Currently each request creates 28 log messages. Creating 28 million log messages per million requests is silly. Not a ghastly expense, but one that I'd like to mitigate.
I created a support request too. I’ll update the thread for those that might be interested.
you happen to see this post already? https://www.alexdebrie.com/posts/api-gateway-access-logs/
Learn the what, why, and how of API Gateway access logs.
I have not, thank you
thanks @loren - I updated my API Gateway and have the desired result now in my Datadog Logs view
deployOptions: {
  loggingLevel: apigateway.MethodLoggingLevel.OFF,
  accessLogDestination: new apigateway.LogGroupLogDestination(logGroup),
  accessLogFormat: apigateway.AccessLogFormat.custom(`{"requestTime":"${apigateway.AccessLogField.contextRequestTime()}","requestId":"${apigateway.AccessLogField.contextRequestId()}","httpMethod":"${apigateway.AccessLogField.contextHttpMethod()}","path":"${apigateway.AccessLogField.contextPath()}","resourcePath":"${apigateway.AccessLogField.contextResourcePath()}","status":${apigateway.AccessLogField.contextStatus()},"responseLatency":${apigateway.AccessLogField.contextResponseLatency()}, "traceId": "${apigateway.AccessLogField.contextXrayTraceId()}"}`),
  dataTraceEnabled: false,
  tracingEnabled: true,
  metricsEnabled: true,
}
very nice! yeah, alex debrie writes some of the best posts on this stuff. definitely my goto when i’m scratching my head on how it works
Just curious if anybody has tried to do visualizations of AWS regions in something like PowerBI or Grafana (or the AWS analog, QuickSight?). PowerBI mentions ShapeMaps, but they need something called a shapefile or TopoJSON… anybody tried this before?
I’m not sure I’m following when you say visualizations of AWS regions - do you mean map out AWS resources for individual regions given a data set? I used CloudMapper over a year ago just to get an overview, I’m not sure if it meets your use case. https://github.com/duo-labs/cloudmapper
CloudMapper helps you analyze your Amazon Web Services (AWS) environments. - duo-labs/cloudmapper
actually simpler than that - I don't really need resource listings at all. I already have a table of items + the regions they're in (ap-east-1, us-east-1, ca-central-1, etc.), but if I plug those items into a PowerBI map visual, it doesn't give out much useful information. I'm hoping somebody has gone ahead and generated a simplified globe with the various regions or zones out there.
Pretty sure you’d need to get something like zip codes or similar to then map to a specific location on powerbi if the geographic stuff requires that. Eu-west-1 would need to be mapped to something for powerbi
Thanks, that makes sense. I can certainly go about setting this up - just was curious if there was already a mapShaper file out there that somebody’s already done. =]
2021-03-20
2021-03-21
2021-03-22
Anyone experiencing DNS issues with the AWS Console today?
Hahah try not to use it as well, except when you need screenshots for SOC2 compliance purposes.
I’m oddly getting DNS_PROBE_FINISHED_NXDOMAIN
for both signin.aws.amazon.com AND status.aws.amazon.com.
Same error here in Romania, using default ISP DNS, Google, and Cloudflare DNS
Ah great — Thankful I’m not alone!
stupid CLI question: I'm creating an SSH key and want to tag my resources in the same command. I do this with the --tag-specifications flag.
aws ec2 create-key-pair \
--key-name bastion-ssh-key \
--tag-specifications 'ResourceType=key-pair,Tags=[{Key=deployment:environment,Value=sbx},{Key=business:steward,[email protected]},{Key=security:compliance,Value=none}]'
. . .
How can I split the tags into multiple lines per tag? I’ve tried a few different ways and the CLI keeps complaining to me. Seems you have to put this in a single line. Not opposed, but this just makes it unreadable.
@mikesew you may want to consider creating a json template that you can import into the command, it will make things much neater!
$ aws ec2 create-key-pair --generate-cli-skeleton
{
  "KeyName": "",
  "DryRun": true,
  "TagSpecifications": [
    {
      "ResourceType": "snapshot",
      "Tags": [
        {
          "Key": "",
          "Value": ""
        }
      ]
    }
  ]
}
then:
aws ec2 create-key-pair --cli-input-json FILE
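A sketch of the filled-in version, with one tag per line as you wanted; the skeleton's placeholder ResourceType is changed to key-pair, DryRun is dropped, and the file name is arbitrary (the remaining tags follow the same pattern):
cat > key-pair.json <<'EOF'
{
  "KeyName": "bastion-ssh-key",
  "TagSpecifications": [
    {
      "ResourceType": "key-pair",
      "Tags": [
        { "Key": "deployment:environment", "Value": "sbx" },
        { "Key": "security:compliance", "Value": "none" }
      ]
    }
  ]
}
EOF
aws ec2 create-key-pair --cli-input-json file://key-pair.json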
2021-03-24
Anybody have an example of a WAF v2 rule that blocks requests with an http protocol? I'm figuring that I'm looking for SingleHeader, but not sure if I should be looking for protocol, http.protocol, or X-Forwarded-Proto, or if I'm totally off base
Sorry to ask the dumb question when I’m sure you have already thought about it, but you can’t do that at your LB layer by redirecting or not opening up port 80?
I want it to redirect by default, but I want to drop non-secure requests to specifically an authorization endpoint
you’re good to ask the question, assume nothing
Gotcha. I believe WAF is typically the first in the chain, so I would assume you wouldn’t want X-Forwarded-Proto.
This might be one where you need to setup rules for all 3 options and make them COUNT instead of BLOCK and then watch your metrics.
I mean the Rule itself is not that hard, it’s just figuring out if there is a header that I can use for protocol
otherwise I’m just going to use Origin BEGINS_WITH http:// && Host EQUALS <xxx> && Path EQUALS /auth
I put a support request in, but was hoping somebody might have ran into this
Ah gotcha. Yeah, I’m not 100% sure. Support should be able to figure that out for you or you can try a few things.
yeah, it’s not a high priority issue so might as well give them a shot rather than keep guessing headers…which I’ll do if I have to
Yeah
According to AWS Support blocking by protocol can’t be done at the WAF and should be done at the ALB - so back to your original suggestion. Kinda sucks because of the rule limit, but makes sense.
Ah that sucks, but at least you know the path forward.
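For reference, a rough sketch of doing it at the ALB: keep the port-80 listener's default action as a redirect, and add a higher-priority rule that drops plain-HTTP requests to the auth path with a fixed response (the listener ARN and path are placeholders):
aws elbv2 create-rule \
  --listener-arn <http-listener-arn> \
  --priority 1 \
  --conditions '[{"Field":"path-pattern","Values":["/auth*"]}]' \
  --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"403","ContentType":"text/plain","MessageBody":"HTTPS required"}}]'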
2021-03-25
With the latest release, you can get connected with AWS SSO in the AWS Toolkit for VS Code. To get started you will need the following prerequisites: Configured single sign-on by enabling AWS SSO, managing your identity source, and assigning SSO access to AWS accounts. For more, information see Using AWS SSO Credentials docs as […]
With aws-vault's metadata service, I'm not sure how useful this toolkit is in this context
I love this tool to quickly check CloudWatch logs from VSCode, a killer feature for me
I'll give it a go then! Thanks for sharing
AWS gurus: how can I read secrets from AWS Secrets Manager inside an EKS pod?
or Parameter Store
The client wants the secrets to be encrypted with a KMS key; they don't want to use Vault or SecretHub
Why not use the external secrets operator?
Integrate external secret management systems with Kubernetes - external-secrets/kubernetes-external-secrets
(It was made by GoDaddy)
Thanks, will take a look. Are the secrets created by this external operator encrypted?
yes, if you enable encryption at the EKS layer, which our module supports
(uses KMS)
nice, this might make step functions rather a lot easier to define and use… https://aws.amazon.com/about-aws/whats-new/2021/03/aws-step-functions-adds-tooling-support-for-yaml/
edit: nvm, now the link is working
anyone tried it yet? https://aws-workbench.github.io/
This isn't actually made by AWS, correct? I'm confused on that point.
yeah it looks like an independent tool
Yeah — I can’t see using it. This goes into the same reasons why I wouldn’t build a mobile app using a “build your mobile app using this fancy UI” tool: Things break down once you want to make things unique for the product or business.
yeah. i have seen the same for circuit design tools… basically you lay out the blocks and the connections and the tool generates the code. but then to really tweak things you have to tweak the code… which breaks the connection to the UI stuff.
i would rather go the other way: here’s my code, please diagram it
Yeah, that's the more feasible approach.
Regardless, I think we need to accept that machines don’t know enough about what we’re going to be building to ever do these types of jobs well enough beyond simple examples / the most boilerplate usage.
i know there’s an AWS tool that kinda does that… can’t recall the name off the top but last i looked at it, you had to spin up some cloudformation and point it at the account to read the resources.
I guess there is also terraformer, but that is a resource -> code tool.
Yeah, AWS has a CloudFormation builder tool that I’ve seen once that is somewhat UI driven, but then you’re dealing with CF and
I think if your organization was going to stick with provided solutions constructs something like this might be possible…but if your org was that simple then there’s probably a SaaS solution out there for you
Would anyone happen to have some CLI command for getting all load balancers with target groups with no listeners? We're looking to clean up our dangling LBs, but I'm not super experienced with the AWS CLI https://www.cloudconformity.com/knowledge-base/aws/ELBv2/unused-load-balancers.html# I've mostly been following the steps here, but I'm attempting to create some kind of command to get all the ones with no target instances
Identify unused Elastic Load Balancers (ELBv2) and delete them in order to reduce AWS costs.
@Mahmoud this worked for me:
echo -e "LoadBalancer\tListenerCount"
for i in $(aws elbv2 describe-load-balancers --query="LoadBalancers[].LoadBalancerArn" --output=text);
do
echo -e "$(echo ${i} | cut -d/ -f 3)\t$(aws elbv2 describe-listeners --load-balancer-arn=${i} --query='length(Listeners[*])')"
done | tee report.txt
Thank you, this is perfect!
ahh reviewing your request and this counts the listeners… not the target groups. let me update in a sec
what you want is much easier! just one CLI call:
aws elbv2 describe-target-groups --query="TargetGroups[].{Name:TargetGroupName, LoadBalancerCount:length(LoadBalancerArns[*])}" --output=table
This is really good and also outputs a bunch of resources we need to clean up, but I think I miswrote my initial question. I need all load balancers with target groups that have no registered targets. Basically, looking to clean up LBs that are in front of nothing
hmmm yeah you can extend to see what’s in the target group i think…
I’ve tried this command
aws elbv2 describe-target-groups --target-group-arn "arrnnnnn"
and it does not show the targets
I think you have to use aws elbv2 describe-target-health and pass in the target group ARN to see the targets
You can’t get it from describe-target-groups
try this one:
echo -e "TargetGroup\tAttachmentCount"
for i in $(aws elbv2 describe-target-groups --query="TargetGroups[].TargetGroupArn" --output=text);
do
echo -e "$(echo ${i} | cut -d/ -f 2)\t$(aws elbv2 describe-target-health --target-group-arn=${i} --query='length(TargetHealthDescriptions[*])')"
done | tee report.txt
i will leave it to the reader as an exercise to find the load balancer as well
haha thanks for the assistance
2021-03-26
Hi. We have a daily process running, of which some jobs started failing about two days ago. Does anybody have an idea what might cause this? We've already followed the steps in https://aws.amazon.com/premiumsupport/knowledge-center/batch-job-failure-disk-space/ The AWS Batch job is failing with the error message “CannotPullContainerError: failed to register layer..: no space left on device”. This happens for only some jobs, not all.
I have already created a launch template, given it 500G of storage, and in the user data have set:
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=300G"'
I assume you are attaching that to /dev/xvda?
let me double check
“/dev/xvdcz”
“our AMIs have root as /dev/xvda”
mh
one of our engs set that up
do you think that might be the issue?
It could be. IIRC, Batch used to have a default root partition size of 10 GB (it may have been bumped to 20 GB). If you have a large container (we have an unusually large one), it is possible you are running out of space on the root partition.
We attach our larger disk directly to xvda
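For reference, a rough sketch of putting the large volume on the root device instead, assuming an ECS-optimized Amazon Linux 2 AMI whose root device is /dev/xvda (the launch template name, version, and size are placeholders):
aws ec2 create-launch-template-version \
  --launch-template-name batch-compute-lt \
  --source-version 1 \
  --launch-template-data '{
    "BlockDeviceMappings": [
      {
        "DeviceName": "/dev/xvda",
        "Ebs": { "VolumeSize": 500, "VolumeType": "gp3", "DeleteOnTermination": true }
      }
    ]
  }'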
Anyone?
2021-03-29
Morning everyone!
I was wondering if it is possible to control the recently announced Auto-Tune feature for AWS Elasticsearch using terraform? see here - https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/auto-tune.html
Learn how to use Auto-Tune for Amazon Elasticsearch Service to optimize your cluster performance.
you can check the docs here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticsearch_domain
generally, for high profile features it takes a few weeks for support to land. For less popular services, updates can take months or even longer
@Tim Malone was the first to open an issue https://github.com/hashicorp/terraform-provider-aws/issues/18421
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
Hi all, an external customer wants us to enable an S3 policy on a bucket:
{
  "Version": "2012-10-17",
  "Id": "Policy1472487163135",
  "Statement": [
    {
      "Sid": "Stmt1472487132172",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Sid": "Stmt1472487157700",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    }
  ]
}
I am a bit worried that they could open up the whole bucket as public
but I am not sure, because this policy is locked to the specific principal
You are correct, this policy allows them to modify the policy on the bucket, thereby opening it to the public.
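For reference, a sketch of a narrower grant, assuming the customer only needs object read/write and listing; dropping s3:* keeps them from calling s3:PutBucketPolicy / s3:PutBucketAcl and opening the bucket up (the bucket name and account ID are kept from the example above):
cat > customer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234567:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234567:root" },
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket BUCKET_NAME --policy file://customer-policy.json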
2021-03-30
We're seeing CloudWatch log group load times in the console fluctuate between 5-60+ secs. This started in the last couple of weeks. What's the most likely cause? Large number of log groups? Long retention? What level of performance can be expected from AWS for sub-1k log groups? What can be done to optimize performance?
just some random thoughts that might help :man-shrugging:
what do you mean by “load times”:
• Listing the Log groups
– how many Log Groups do you have? # of metric filters, # of subscriptions
• Listing the Log streams
– how many streams are in the group?
• Opening a Log stream and seeing the messages
Did you have any spike in usage for any of the above? Aka, did you unintentionally create 1000s of new logs groups?
any chance you are closing in or at a Usage limit for your account for any of the above? Might need to submit a rate limit increase
I just used the Network tab in the Developer Tools in Chrome - took me 35s to reload and I have 401 Log groups in said account – it’s probably been like that and I’ve never noticed.
We haven't increased log groups significantly. We're just trying to load a log stream. Performance is erratic. Fast in one instance and slow right after when you refresh. AWS support is working on it now. They say there's an issue.
if they give you any meaningful response, please do share!
I wouldn’t count on it though, probably just a generic “our bad”
Will do. Thanks!
no meaningful response as you predicted, but cloudwatch seems to be performing better now. who knows what the problem was.
the typical MO is not to fully disclose…I think it comes from a paranoia of sharing some of the secret sauce
Hey team! Question:
TLDR: is there a good way to connect to a private AMQ without SSL?
Details: I’m setting up an environment that uses Amazon MQ and I’d like to keep the service private (along with the rest of the resources). To that end, everything that needs to be private is sitting in a private subnet of the VPC: ECS, RDS, etc.
Because everything is private, I’m using the IP address to connect to the AMQ endpoint. However when i connect, the app fails with an SSL error:
cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs
This leads me to think that the connection is SSL and the cert that AWS is serving up doesn't have the IPs in it, but rather the DNS name of the MQ instance.
Googling for a solution, I found one doc that recommended putting an NLB in front of the AMQ and connecting to that but it seems (to me) that the connection might still fail; what about SSL validation between the ALB and the NLB? This solution also seems over engineered and potentially expensive given the addition of the NLB on top of the AMQ instance(s). https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2019/09/10/Solution-overview.png
Anyway, just thought I would share this in the event someone has seen this type of issue before and knows a choose_one(good/reliable/affordable) solution for this problem. Cheers!
I should add that unlike the diagram, my connection to AMQ is coming from inside the VPC from the same private subnet as the AMQ.
I don’t use AMQ, but I am guessing it provides a DNS hostname as the endpoint, rather than a private IP address. Why not use that hostname?
I’ve set the service to not be publicly accessible. Indeed there is a DNS name that comes with it and i was configuring the app to use that but it was timing out. i attributed that to the fact that MQ was not publicly accessible.
this page suggests the hostname is like
https://b-1234a5b6-78cd-901e-2fgh-3i45j6k178l9-1.mq.us-east-2.amazonaws.com:8162
I am thinking now, though, that I might have to do some sort of private DNS. but then, if I do that, I’m not sure that the cert would still resolve.
yes, my host name is just like that
ok, well you won't need private DNS, since that will be a public DNS record. You should be able to resolve it from your PC, with host blah.amazonaws.com
Private, using the DNS gives:
lookup b-7898a321-eac9-4db7-9d25-0ae2f020dabf.mq.us-west-2.amazonaws.com on 10.0.0.2:53: no such host
Private, using the IP gives:
cannot validate certificate for 10.0.2.29 because it doesn't contain any IP SANs
can you SSH to one of your client EC2 instances?
well the SGs are locked down to those in use by the services. I could do a bastion and connect from there. no EC2s at the moment. Just ECS.
so, I think you need to verify the hostname you are using
yeah. at the moment I’m going to try opening up the AMQ just to see if i can get it connected. putting the MQ in a public subnet. i hate doing that but i just need a POC at the moment.
“I’ll lock it down later” - Famous last words
the fact it can’t resolve suggests a typo to me
because i’m 99% sure it will be a public hostname you can resolve from anywhere on the internet
no typos. I am reading the info from SSM. The TF code writes the MQ host's DNS name into a variable and ECS reads it from there.
this is me trying out different methods:
resource "aws_ssm_parameter" "amqp_host" {
name = "/${var.name}/${var.environment}/AMQP_HOST"
description = "${var.name}-${var.environment} AMQP_HOST, set by the resource"
value#value = "amqps://${aws_mq_broker.amq.id}.mq.${var.aws_region}.amazonaws.com:5671"
value = "amqps://${aws_mq_broker.amq.instances.0.ip_address}:5671"
type = "String"
overwrite = true
....
can you resolve the hostname yourself from your local machine?
no. not when this is set to false:
resource "aws_mq_broker" "amq" {
...
publicly_accessible = true # false
....
}
i’m going to try a config with it set to true. the MQ resources take 10-15 minutes to destroy and rebuild so it will be a few before i can report back
that's interesting. I would have thought a hostname ending in amazonaws.com would always be accessible. Maybe you can check the lookup with dig +trace host.amazonaws.com.
I know that some DNS servers will refuse to resolve a public hostname that points to internal IP addresses, for security reasons. It might be that this is why you can't resolve it from your laptop when publicly_accessible = false. But dig +trace is low-level enough to ignore that rule.
yeah that might make sense. i didn’t check to see the IP that the hostname resolved to when i had it config’d as private.
ahh I did check it. It was a 10. IP address, which is indeed not internet routable. so yeah, kinda confusing. I have the DNS name, it points to the right place (internal IP), but the cert has the public name…. which can't be resolved?
I will check the app as well. there might be a way to get it working from the back end with everything private if I can ignore the SSL connection and just connect.
One other thing the docs suggest:
To ensure that your broker is accessible within your VPC, you must enable the enableDnsHostnames and enableDnsSupport VPC attributes
Explains the workflows of creating and connecting to an ActiveMQ broker
cool i will definitely look into that!
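For reference, those are the two attributes; modify-vpc-attribute only accepts one attribute per call (the VPC ID is a placeholder):
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value":true}'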
2021-03-31
I'm providing authorization to an API Gateway (proxy integration) with Cognito, and I have a Lambda function (dockerized) requesting the API endpoint https://{id}.execute-api.{region}.amazonaws.com. I would like to know if it is possible to allow any resource within AWS, including my dockerized Lambda functions, to access the API without authentication? Currently getting the response {"message":"Unauthorized"}
Note: That’s a public API since I have external apps requesting it.
yes, you need to use the IAM authentication method https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
Thanks @Alex Jurkiewicz!
Does anybody know if there is an AWS provided SSM parameter for the elb-account-id like how they provide SSM parameters for AMI IDs?
I put a support case in too - I’ll update if I hear anything
if you’re using terraform, it provides them as a data source
AWS Support confirmed that there is not currently an SSM Parameter that I could use for this.
My choices are to either create SSM Parameters (which I’m considering) or use a map in my CloudFormation templates.
That TF data source just maintains a map https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/data_source_aws_elb_service_account.go