#aws (2020-02)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2020-02-03
Hey, has anyone seen boto3 behaviour where the 1st request fails with an “unable to find credentials” exception? I’ve recently started to observe this behaviour on a multi-docker Elastic Beanstalk environment (which until ~Thursday was running flawlessly). For me it happens only when a new instance is brought up by autoscaling and the worker starts to send records to Firehose. What’s even stranger is that when debug was enabled, to get a little more verbose boto output in the logs, the problem disappeared.
BTW firehose credentials are supposed to be obtained from task IAM role
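If it helps anyone debugging the same thing: multi-docker Elastic Beanstalk runs on ECS, where the SDK fetches task-role credentials from the container metadata endpoint (botocore reads AWS_CONTAINER_CREDENTIALS_RELATIVE_URI). A quick check from inside the container that the endpoint is serving credentials yet:
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
If that 404s or hangs right after boot, the first boto3 call can race the credential provider, which would match the “only on freshly autoscaled instances” symptom; retrying the first call is a common workaround.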
2020-02-10
For those who use CircleCI, how do you manage rotating the IAM User credentials you supply to your CI workflows?
Right now I just use the CircleCI Environment Variables for the repo
Gotcha, thanks! I’m not sure that would work for my use case as we don’t want devs to be able to get the key values ever, but it’s good to know all solutions that exist
I have thoughts on this - but haven’t implemented it
What I’ve wanted to do is set up a cron job (in Codefresh parlance) that calls the STS API to get short-lived credentials, then updates the shared secrets on Codefresh. That way if credentials leak, their validity is limited.
I assume something similar could be done on circle
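A rough sketch (untested) of that cron idea against CircleCI’s v2 API; CIRCLE_TOKEN and the gh/my-org/my-repo project slug are placeholders:
creds=$(aws sts get-session-token --duration-seconds 3600)
for pair in AWS_ACCESS_KEY_ID:AccessKeyId AWS_SECRET_ACCESS_KEY:SecretAccessKey AWS_SESSION_TOKEN:SessionToken; do
  name=${pair%%:*}
  value=$(echo "$creds" | jq -r ".Credentials.${pair##*:}")
  # create (or replace) the project-level env var of the same name
  curl -s -X POST "https://circleci.com/api/v2/project/gh/my-org/my-repo/envvar" \
    -H "Circle-Token: $CIRCLE_TOKEN" -H "Content-Type: application/json" \
    -d "{\"name\": \"$name\", \"value\": \"$value\"}"
done
Though as noted further down the thread, Circle’s API covers standard project env vars, not ones managed via Contexts.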
Of course, the ideal way is to have something like a runner that runs on-prem
in codefresh, this is venona
If you have the runners on-prem, those can run as a pod and assume a temporary role with credentials
@David When you store an env var in CircleCI it’s a write-only operation, so you can’t see the variables once they’re set.
My understanding was that devs could ssh onto CircleCI servers and run env to see env values. Or they could make a workflow that prints env values to the Circle output
ah yes, that’s true. But there’s nothing stopping them from doing that in the code either, right?
well if you use CircleCI Contexts, it ensures that devs can’t see the values.
But the sad part is that Circle has an API for updating standard env vars, but not env vars managed using contexts
That sucks
GitHub Actions also doesn’t support setting secrets via API. We were frustrated by that since we want to programmatically update all of our repos.
Hello.. Would you be open to using AWS Secrets Manager or the Parameter Store? There’s an orb that CircleCI provides. I haven’t spiked it out yet but it’s on my list :)
I think the problem though we’re trying to solve here is how to have “short lived credentials” for AWS users
I guess one could write short lived credentials to ASM and SSM
but then you still need long lived credentials in Circle to access the short lived ones
I don’t think it’s optimal
I agree.. you are right
2020-02-11
I am trying to replace a bastion host that was used for port-forwarding an RDS db to localhost with an EC2 instance that has SSM permissions. The command we used before was ssh -i ${sshPrivateKeyPath} -L ${localPort}:${remoteDbUri} -Nf ${publicBastionUri}
remoteDbUri would be something like db.private:5432, with both a private DNS name and a port.
In SSM, I found the AWS-StartPortForwardingSession document, but that won’t let me specify the db.private part I need.
Anyone know how I can do this?
I battled with the same problem and created a terraform project + scripts for it. Take a look here: https://github.com/Flaconi/terraform-aws-bastion-ssm-iam
AWS Bastion server which can reside in the private subnet utilizing Systems Manager Sessions - Flaconi/terraform-aws-bastion-ssm-iam
Incredible, thank you! Your module (plus looking at https://www.reddit.com/r/aws/comments/df6uip/ssm_tunnelling_ec2_what_about_rds/fhcm3e1/?context=3) got me to where I needed.
Thank you so much @maarten!
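For anyone landing here later, one approach (untested here, roughly what that thread describes) is SSH-over-SSM: tunnel SSH itself through Session Manager, then use plain ssh -L for the DB hop. Instance id and key path are placeholders:
# ~/.ssh/config
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

# then the old command works almost unchanged:
ssh -i ~/.ssh/key.pem -L 5432:db.private:5432 -Nf ec2-user@i-0123456789abcdef0
This still needs sshd on the instance, but no inbound security-group rules or public IP.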
We’re excited to announce the v2.0.0 GA release of the AWS CLI version 2 (v2). AWS CLI v2 builds on AWS CLI v1 and includes a number of features and enhancements based on community feedback. New Features The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS Single […]
Finally, an updated CLI with binary distribution and built-in SSO support
Well, AWS SSO support… At least there is credential_process for others I guess
for when your builds break after upgrading: https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-ecr-get-login
This topic describes the changes in behavior between AWS CLI version 1 and AWS CLI version 2. It covers some backward-compatibility concerns and other items that might require script changes.
ecr get-login has been removed and replaced with ecr get-login-password
or actually
RTFM: “The older aws ecr get-login command is still available in the AWS CLI version 1 for backward compatibility.”
ignore me
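For reference, the v2 login flow looks like this (account ID and region are placeholders):
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com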
2020-02-12
Hello, I am trying to attach a local CSV file to an email as an attachment using the AWS CLI (SES). First I tried:
cat <<EOF > ./message.json
{
"Data": "From: [email protected]\nTo: [email protected]\nSubject: Report\nMIME-Version: 1.0\nContent-type: Multipart/Mixed; boundary=\"NextPart\"\n\n--NextPart\nContent-Type: text/plain\n\nReports: report\n\n--NextPart\nContent-Type: text/csv;\nContent-Disposition: attachment; filename=\"report.csv\";\npath=\"report.csv\"\n;Content-Transfer-Encoding: base64;\n--NextPart--"
}
EOF
cat message.json
aws ses send-raw-email --raw-message file://message.json
I also tried modifying it to:
{
"Data": "From: [email protected]\nTo: [email protected]\nSubject: [Subject]\nMIME-Version: 1.0\nContent-type: Multipart/Mixed; boundary=\"NextPart\"\n\n--NextPart\nContent-Type: text/plain\n\n[Body]\n\n--NextPart\nContent-Type: text/comma-separated-values;\nContent-Disposition: attachment;\nContent-Transfer-Encoding: base64; filename=\"report.csv\";\npath=\"report.csv\";--NextPart--"
}
Neither method worked for me, and I’m not sure what to modify next to achieve what I’m trying to do.
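A sketch of a version that should be closer to valid MIME, assuming report.csv sits in the working directory: each part needs a blank line (\n\n) between its headers and its body, and the base64 of the file has to actually be embedded in the message (the unquoted heredoc lets the shell substitute it in):
cat <<EOF > ./message.json
{
"Data": "From: [email protected]\nTo: [email protected]\nSubject: Report\nMIME-Version: 1.0\nContent-Type: multipart/mixed; boundary=\"NextPart\"\n\n--NextPart\nContent-Type: text/plain\n\nReports: report\n\n--NextPart\nContent-Type: text/csv\nContent-Disposition: attachment; filename=\"report.csv\"\nContent-Transfer-Encoding: base64\n\n$(base64 report.csv | tr -d '\n')\n\n--NextPart--"
}
EOF
aws ses send-raw-email --raw-message file://message.json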
@Erik Osterman (Cloud Posse) does the maintainer of ssm-diff live in this slack? rofl
https://github.com/runtheops/ssm-diff/pull/27
This PR lets you run the command like ssm-diff --overwrite false apply to prevent overwrites in special cases.
Doesn’t look like it!
I’ll be happy to reach out to him and invite to slack, if you can DM me his email
I was trying to find his email, but then google actually pointed me back to the setup.py
lol
(ok, gonna delete that to avoid spam)
invite sent!
2020-02-13
2020-02-14
Looking for some free tools/options/advice to generate reports based on CPU utilization of EC2 instances, primarily to check if they cross above 80% or fall below 10% during a certain period? Thoughts?
Does CloudWatch+Excel not meet your use case?
looking to make it automated by sending reports in an email
Does it have to be a report? Are the alerts generated from CloudWatch not sufficient?
2020-02-18
2020-02-19
Hey, is anyone already using “Provisioned Concurrency” for Lambda to avoid cold starts? I still see some requests with “Initialization” segments for Lambda in X-Ray… The article: https://aws.amazon.com/fr/blogs/aws/new-provisioned-concurrency-for-lambda-functions/
It’s really true that time flies, especially when you don’t have to think about servers: AWS Lambda just turned 5 years old and the team is always looking for new ways to help customers build and run applications in an easier way. As more mission critical applications move to serverless, customers need more control over the performance […]
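If it helps: provisioned concurrency has to target a published version or an alias (not $LATEST), and any burst above the provisioned level still runs on-demand, so some “Initialization” segments in X-Ray can be expected. A minimal CLI example, with placeholder function and alias names:
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier live \
  --provisioned-concurrent-executions 10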
2020-02-21
@scorebot help keep tabs!
@scorebot has joined the channel
Thanks for adding me! Emojis used in this channel are now worth points.
Wondering what I can do? try @scorebot help
2020-02-22
February 21, 2020: We fixed a missing comma in a policy example. When you perform certain actions in AWS, the service you called sometimes takes additional actions in other AWS services on your behalf. AWS Identity and Access Management (IAM) now includes condition keys to make it easier to grant only the minimum level of […]
2020-02-24
sorry for the rant but why does it suck so hard to deploy an eks cluster on aws with terraform? This is the most unusable provider I have seen so far.
Please elaborate
I think the right answer would be, “Please vent more”
It takes 18 different resources to deploy a single EKS cluster… With DigitalOcean I get the same in 11 lines https://github.com/helm-notifier/Terraform-Infrastructure/blob/master/01-base/digitalOceanK8s.tf and with Azure I can get the same in 30 lines… I do not need autoscaling groups to get started.
Contribute to helm-notifier/Terraform-Infrastructure development by creating an account on GitHub.
Like I get that AWS is hard to get into but this is just .. not fun?
example is taken from here; https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.
Pierre, SaaS is always more fun than IaaS, you just have to select your poison
puuuuuuuuuuuuh.
Well anyways I got it working..
but it took me like 6 hours
2020-02-25
Are you in a Startup? As of today, AWS has launched the Activate Founders package for Startups! This package unlocks a new set of benefits. If your startup isn’t affiliated with a venture capital firm, accelerator, or incubator, then your startup can now apply to receive $1,000 in AWS Activate Credits (valid for 2 years) […]
if you’re part of APN you can get funding via Innovation Sandbox: https://aws.amazon.com/partners/funding/
looking for strategies to use in bash scripting (hope it’s fine asking here as it uses aws-cli)
I got this far using jq from the json output of aws cloudwatch get-metric-statistics
Instance-ABC
19.514049550078127
12.721997782508938
13.318820949213313
15.994192991030545
18.13096421299414
Instance-BCD
19.5140495
12.7219977
13.3188209
15.9941929
18.1309642
13.3188209
15.9941929
18.1309642
I want to achieve
Instances above 70%
Instance-ABC
Instance-BCD
Instances below 20%
Instance-EFG
Instance-HIJ
let’s see your current query
for i in $(aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId' | sort -n); do echo "Instance $i"; aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time 2019-02-20T15:00:00 --end-time 2019-02-20T18:00:00 --period 60 --namespace AWS/EC2 --extended-statistics p80 --dimensions Name=InstanceId,Value=$i | jq '.Datapoints[].ExtendedStatistics[]'; done
maybe something like this?
jq '.Datapoints[] | select(.ExtendedStatistics | CPUUtilization>=70)' |
sorry, not currently on aws so cannot test easily
Np, thanks for your input
did it work?
(or was I at least close? lol)
it errored, trying to figure out how to use it
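In case it’s useful later, a rough (untested) take on the bucketing, using the p80 extended statistic requested above and looping per instance:
for i in $(aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text); do
  bucket=$(aws cloudwatch get-metric-statistics \
      --metric-name CPUUtilization --namespace AWS/EC2 \
      --start-time 2019-02-20T15:00:00 --end-time 2019-02-20T18:00:00 \
      --period 60 --extended-statistics p80 \
      --dimensions Name=InstanceId,Value="$i" |
    jq -r '[.Datapoints[].ExtendedStatistics.p80] | (max // 0)
           | if . >= 70 then "above 70%" elif . < 20 then "below 20%" else "in between" end')
  echo "$i: $bucket"
done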
2020-02-26
Does anyone have experience setting up CodePipeline for all branches? I want to run a default pipeline for any branch that code is pushed to, or when a new branch is created.
Hey folks — Just realizing having 4 NAT Gateways (one for each private subnet) across 2 VPCs is costing a client about 130 bucks a month ($0.045/hour * 24 * 30 * 4 = $129.60). That’s almost a third of their bill at the moment, as this is a small application that is still in development…
That seems outrageous, but I do understand it’s nice to not have to manage a NAT instance. For those of you that have to deal with cost a lot — do you just eat the cost, or is there any cost mitigation tactic on that front?
You can just run 1 NAT gateway and have all the subnets use it. The point of running extras is for high availability.
Exactly. Moreover, do they have EIPs assigned to the NAT gateways as well? Because those will be costing extra too
The 2 VPCs thing is a bit of a curveball. Maybe run 1 NAT gateway per VPC?
Hm yeah, routing both private subnet outgoing traffic through one seems like a reasonable change.
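A sketch of the change with placeholder IDs; point each private route table’s default route at the one shared NAT gateway (within the same VPC):
aws ec2 replace-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0
Then the redundant gateways (and their EIPs) can be deleted.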
If you are using the cloudposse modules we support a flag to use NAT instances instead which are a tiny fraction of the cost
I’ll have to look up what the EIP being assigned to each NAT is costing me. Didn’t notice that when looking through billing.
Yea that cost is additional
@Erik Osterman (Cloud Posse) Yeah — I am using the cp modules. Was wondering if that would be recommended.
For Dev / Stage environments… that should be fine. Don’t need the costly high availability of the Gateway.
Thanks for weighing in folks!
Ya worth doing in dev/staging if cost is a concern
2020-02-27
Good day, community. Are there any best practices for naming conventions for internal hosted zones in the AWS world with Route53, taking into account ACM certificates (using public domains to validate private names) and environments like dev or prod, regions, and multiple AWS accounts/clouds?
Hey guys, I have a pretty annoying problem with ECS. I’m using AWS Batch, which manages an ECS cluster that uses regular EC2 instances. Once a day I run ~50 batch jobs (i.e. ECS tasks) in parallel. Everything is fine, except when the container exits the task fails with CannotInspectContainerError: Could not transition to inspecting; timed out after waiting 30s. My research has led me to believe it may be related to exhausted IOPS, but after increasing IOPS the errors keep coming in. Has anybody experienced the same?
have you considered running those jobs on a larger number of smaller instances?
Not yet, but that’s a good direction to explore, thanks!
I have seen that error
and I think it is some sort of hardware exhaustion
resource exhaustion
in my case it was the memory soft setting in my task def
in ECS + EC2
Thanks @jose.amengual , I’ll check that, too!
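For anyone hitting the same thing: the soft limit lives per container definition in the task def. A fragment with illustrative values (not a recommendation):
{
  "name": "batch-worker",
  "memory": 2048,
  "memoryReservation": 1024
}
memoryReservation is the soft limit used for placement; memory is the hard limit above which the container is killed.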
Since we are on the topic of ECS, I got a confirmation from AWS Support today that Fargate performance is not guaranteed. We have noticed significant differences in cpu performance between regions, and Fargate also seems to be frequently outperformed by T2 instances. AWS recommendation was to switch to using EC2-backed ECS setup.
Good to keep in mind, especially since now Fargate is available for EKS as well
does anyone know what’s the best way to develop, test and deploy AWS Lambda functions?
i enjoy developing in python cause it’s easy and i do it in vscode. i don’t have to worry about memory or fast execution.
I don’t know about “best” but I’m a fan of https://github.com/aws/chalice - makes it easy.
Python Serverless Microframework for AWS. Contribute to aws/chalice development by creating an account on GitHub.
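If anyone wants to kick the tires, the basic chalice workflow is just this (Python-only; the project name is a placeholder):
pip install chalice
chalice new-project hello-app
cd hello-app
chalice deploy
It scaffolds the IAM role and API Gateway wiring for you, and chalice delete tears it down.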
i am planning to develop a nodejs app
I also use terraform with the claranet module. I think it’s way more powerful and less magical than serverless
You can use whatever language you want, the claranet module should work with anything lambda supports, but it has some special packaging logic built-in for python. You can provide your own packaging script to the module though
Or you can hit the easy button and just commit everything in node_modules, which is what I did here… https://github.com/plus3it/terraform-aws-slack-notifier
Terraform module that builds and deploys a lambda function for the aws-to-slack package. - plus3it/terraform-aws-slack-notifier
Cool, I wrote most of that Claranet module. I’m glad people like it. I made this one more recently which I think is better for many cases. https://github.com/raymondbutcher/terraform-aws-lambda-builder
Terraform module to build Lambda functions in Lambda - raymondbutcher/terraform-aws-lambda-builder
It can pip/npm install remotely inside another lambda function to build your lambda package, so less setup is required for the machine running terraform.
(Look at the nodejs and numpy tests to see how)
if the build is inside the lambda, doesn’t that take longer for the lambda to execute, which then increases your lambda costs?
It builds it once (until you make changes)
It’s magic
Nifty! Didn’t realize you were in this slack @randomy!
what does “npm install remotely inside another lambda function” mean ?
@rohit It’s explained here https://github.com/raymondbutcher/terraform-aws-lambda-builder#lambda-build-mode The module creates a 2nd “builder” lambda func using the same runtime as the one you’ve specified, then runs your build script (npm install) inside there to make your final lambda package, and stores that in s3. It then makes the actual intended lambda func using the zip in s3 that the builder func made.
Terraform module to build Lambda functions in Lambda - raymondbutcher/terraform-aws-lambda-builder
@randomy Thanks.
If i have my lambda function in a separate repository on github, how can i use your module ?
I am new to Lambda functions so i don’t understand the complete picture
There’s not really any way to pull in the Lambda source code from an external repo. If you can, turn that repo itself into a terraform module (put .tf files in the root dir of it).
Example using the claranet lambda module, same approach works for the lambda builder module too. You don’t have to have the source in a subdir like this if you don’t want to.
https://github.com/claranet/terraform-aws-asg-instance-replacement
Terraform module for AWS ASG instance replacement. Contribute to claranet/terraform-aws-asg-instance-replacement development by creating an account on GitHub.
Anyone have any experience setting up cross-account CloudWatch logging in AWS? Like sending CloudWatch logs from one account to another
i don’t suppose anyone knows of a magical tool to inspect CloudFormation templates and then spit out an IAM policy for creating/updating/deleting those resources?
I’d seen a cli/script at one point that did this
I think it worked by looking at the cloudtrail logs
2020-02-28
As in finding out what permissions a CFN template is gonna require to allow you to run it? No; let me know if you find one
Anyone know when AWS is going to add K8s 1.15 to EKS?
nvm im just slow on the go
Mad slow. :( They must have hit a serious problem updating clusters in place. Any tooling improvements to update in future would surely have been time boxed to get 1.15 out. 1.15 is almost EOL too.
Tell us about your request: Support for Kubernetes 1.15 in Amazon EKS
Hey folks — new to CloudFront usage here… I have a Django application that has 3 concerns in regards to CDN caching:
- CMS Uploaded Media Files — All Stored in a S3 bucket.
- Javascript / CSS Static Files — Served by the Django application via whitenoise (CDN cache management package).
- Basic HTML caching. Will have a blacklist for HTML paths that shouldn’t be cached.
Now for the actual question at hand: Should I have two CloudFront distributions or one?
The two-CDN option would work where all S3 content is served by one CF distribution, and the other CF distribution has the application as its origin and serves the static + HTML files.
The one CDN option would serve content from both the Application and the S3 bucket.
I think one CDN would be ideal as it’s less to manage, but I’m actually just confused on if this is possible since CF has such a wide footprint in terms of configuration / usage.
Any thoughts / suggestions on a path forward here? Gonna start reading more into CF, but figured someone here would have a quick: “Yes do the second option” or “No that’s not possible, you’ll need two”.
it could be (and we usually do it like this) one CF distribution with two origins (one for the load balancer in front of the Python app, the other for the S3 bucket), and a few behaviors: e.g. one behavior for the S3 origin that caches everything, another for the load balancer origin for the static files (with caching), another for HTML files with possibly different caching rules, and a behavior for the blacklisted paths with no caching
@Andriy Knysh (Cloud Posse) Cool — Yeah digging around and I’m starting to understand that’s possible and likely the correct route to go down. Thanks for weighing in!