#aws (2019-05)
Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2019-05-01

Apple spends more than $30m a month on AWS, though this number is falling. Slack’s AWS spend is a contractual $50M/year minimum.


… And I thought our bill of ~$20k/year was terrifying


Lyft has signed up to pay Amazon Web Services at least $80 million per year for the next three years, totaling at least $300 million.

Better to sell shovels than mine gold during a gold rush, eh.

Does anyone know if it is possible to add tags to S3 object (node sdk) when using presigned url ?

maybe this is not a right place to ask this question


Hrmmmmm good question. Don’t think it’s possible natively, but anything is possible with lambdas

that’s true but that’s not something i want to do in my scenario

Yea, I wouldn’t want to either

let me explain my scenario so that you can better understand

when I want to upload something from my app, I make a request to my backend service (Node.js) which returns a presigned URL; then I use that in my frontend to upload the object directly from the browser

i was following the documentation https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property

it says
Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request.

so i tried sending the tags in the ajax request with presigned url and i get invalid tag error
2019-05-02

It’s the Python SDK, but it could help: https://stackoverflow.com/questions/52593556/any-way-to-use-presigned-url-uploads-and-enforce-tagging
Is there any way to issue a presigned URL to a client to upload a file to S3, and ensure that the uploaded file has certain tags? Using the Python SDK here as an example, this generates a URL as de…

To clarify, I need to use {'tagging': '...'} with the verbose XML tag set syntax for both fields and conditions, and that seems to work as required.
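For reference, a minimal sketch of that approach with boto3 (bucket, key, and tag values here are hypothetical): the verbose XML tag set goes into both the form fields and the policy conditions of a presigned POST.

import boto3

s3 = boto3.client("s3")

# The tag set has to be the full XML <Tagging> document, and it must appear
# both in the form fields and in the policy conditions.
tagging = (
    "<Tagging><TagSet>"
    "<Tag><Key>source</Key><Value>browser-upload</Value></Tag>"
    "</TagSet></Tagging>"
)

post = s3.generate_presigned_post(
    Bucket="my-bucket",            # hypothetical bucket
    Key="uploads/example.txt",     # hypothetical object key
    Fields={"tagging": tagging},
    Conditions=[{"tagging": tagging}],
    ExpiresIn=3600,
)

# post["url"] and post["fields"] are then used to build the multipart form
# POST from the browser.
print(post["url"], post["fields"])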

The Golang SDK (which I know much better) says:

// The tag-set for the object. The tag-set must be encoded as URL Query parameters.
// (For example, "Key1=Value1")
Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"`

@Issif thanks. Unfortunately, I did not have any luck.
2019-05-06

I think just recently I saw a note about a better tool/ui to look at your AWS Parameter Store Secrets, but for the life of me I can’t find it. The searching is terrible in the AWS console. All I really want is fuzzy search and proper pagination on searches >.< Anyone have something they know about?

Was it this one? https://github.com/smblee/parameter-store-manager
Saw this a few days ago
A cross platform desktop application that provides an UI to easily view and manage AWS SSM parameters. - smblee/parameter-store-manager

Parameter Store is under AWS Systems Manager (or in the EC2 console, scroll down the left panel)

@Harry H that was it i believe. Thanks!

anyone here used lambda as a compiler ?

I want to send the code entered in a code editor inside a web application to Lambda, compile it, and get the results back

let me know if i am crazy

i’m sure someone’s done it! ppl have done all sorts of awesomely crazy things in lambda

sounds like a fun project. you can shell out to anything so i suspect if you packaged a compiler with your code (or downloaded it from s3 during function invocation) then you could run it
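As a rough sketch of that idea (a sketch only, assuming a gcc binary has been packaged with the function or pulled from S3 into /tmp beforehand): write the submitted source to /tmp, shell out to the compiler, then run the result.

import os
import subprocess
import tempfile

def handler(event, context):
    source = event["source"]                     # code submitted from the web app
    workdir = tempfile.mkdtemp(dir="/tmp")
    src_path = os.path.join(workdir, "main.c")
    bin_path = os.path.join(workdir, "a.out")
    with open(src_path, "w") as f:
        f.write(source)

    # Compile; assumes gcc is available on PATH inside the function package/layer.
    compile_proc = subprocess.run(
        ["gcc", src_path, "-o", bin_path],
        capture_output=True, text=True, timeout=30,
    )
    if compile_proc.returncode != 0:
        return {"status": "compile_error", "stderr": compile_proc.stderr}

    # Run the compiled binary and return its output.
    run_proc = subprocess.run(
        [bin_path], capture_output=True, text=True, timeout=10,
    )
    return {"status": "ok", "stdout": run_proc.stdout, "stderr": run_proc.stderr}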

I want to build some sort of endpoint which accepts code sent from my web app and compiles

i am trying to figure out how companies like hackerrank,repl.it do it
2019-05-07

I’m told by one of our devs that you should see if there is a way for your code to compile and execute code, because Lambda is simply a code container.

the alternative is to find a way to compile the incoming code, get the compiled file (a .jar or .zip file, for example), upload it to a temporary Lambda, and then execute that Lambda

i did not find similar usecases online

anyone ever get this error DNS_PROBE_FINISHED_NXDOMAIN
intermittently with route 53

accessing a web app

Is this under k8s?

Check the logs for kube-dns. Had lots of problems like this in the past, but that was way back on Kubernetes 1.7 or earlier
2019-05-08

AWS London today.. anyone attending?

Good array of breakout sessions available

@Erik Osterman (Cloud Posse) it can’t be the cluster if I do an nslookup on the hostname and it doesn’t resolve the ip addresses. At that point is it a route 53 issue? (Happens intermittently for a few minutes)

is it on a delegated zone?

I’ve had the problem where one of the NS servers in the delegation was wrong

so in a round robin fashion some requests would fail

same goes for the TLD

if the nameservers are off

ah no i just double checked them. this intermittent issue has been happening only for a week, but ive had the same route53 record for over a year

is it only from your office or home?

e.g. maybe switching your own NS to 1.1.1.1 may help

no multiple customers

also we use uptimerobot and that will report some downtime too when it happens


for very brief period of time

this might help, when I have that brief period of downtime, it doesn’t resolve the ip addresses during nslookup:
$ nslookup blah.example.com
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
blah.example.com canonical name = app.example.com.

healthy:
$ nslookup blah.example.com
Server: 192.168.1.1
Address: 192.168.1.10#53
Non-authoritative answer:
blah.example.com canonical name = app.example.com.
Name: app.example.com
Address: 54.191.49.21
Name: app.example.com
Address: 54.203.171.148
Name: app.example.com
Address: 54.212.199.41

Also only started happening since Saturday but 3 times now

But yes on k8s and I don’t see anything out of the ordinary in the kube-dns logs
2019-05-09


Hi, can anyone help with why this deployment is failing?

here are the details of what I can see: “Invalid action configuration The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: codepipeline-eu-central-1-516857284380. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.”

but the bucket exists and there is no lifecycle policy either

@vishnu.shukla without seeing the code, it’s difficult to say anything. Make sure you configured the input and output artifacts for all stages correctly (the output from one stage should go to the input to the next stage), for example https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L233
Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

also make sure you setup all the required permissions to access the bucket https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L101
Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Sure Aknysh, thanks a lot
2019-05-10

anyone else have issues with route53 not resolving dns for you lately?

how “lately” do you mean? it’s been fine for us and we rely on it heavily for inter-service connections

like the past week

been having intermittent issues where DNS doesn’t get resolved for a few minutes

We haven’t noticed anything, but, they are reporting a problem right now: https://status.aws.amazon.com/

But that only affects modifications/new record sets, we don’t change our entries often

yeah

ok

i was just gonna say, today i have a cname set up for 45 minutes now and it’s still not resolving

looks like that above issue might be why

yeah thanks man, i was just assuming it had to do with the other intermittent issues ive been seeing this past week

What’s the domain or TLD at least? It might not be AWS. We occasionally have issues with the io domain because the root name servers can be flakey. :)

com @Lee Skillen

If anyone started to experience issues with MySQL RDS it’s because of today’s route53 outage

because of how the connection is established with MySQL, you can see a lot of rows like
| 23653344 | unauthenticated user | ip:port | NULL | Connect | NULL | login | NULL |
in show full processlist;

you can set skip-name-resolve to 1 in your parameter group to fix the issue

sadly it’s a partial fix ;-/
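For anyone who wants to apply it programmatically, a small sketch with boto3 (the parameter group name is hypothetical, and it assumes the parameter is modifiable for your engine; it is static, so it only takes effect after a reboot):

import boto3

rds = boto3.client("rds")
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mysql-params",    # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "skip_name_resolve",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",   # static parameter, needs a reboot
        }
    ],
)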
2019-05-11

how do you set --dns-opt in AWS ECS? It’s not available via the DNS settings in the ECS agent. I know that I can update resolv.conf via a custom entrypoint.sh script, but I wonder if there’s a better/easier way
2019-05-13

Anyone here used https://aws.amazon.com/solutions/centralized-logging/? I’m considering it but at the same time hesitating due to costs (their cheapest cluster starts from 35 USD/day - https://docs.aws.amazon.com/solutions/latest/centralized-logging/considerations.html#custom-sizing) as well as complexity (logs are first collected in CW Logs, then get to ES)
Regional deployment considerations

I’d much rather prefer them being sent directly to an ES cluster via an Interface VPC Endpoint of course

Haven’t used that solution, but most AWS-vended logs end up in either CloudWatch or S3 (there’s no native ability to send to ES) so unfortunately there’s not much way around the complexity. For logs on instance, though, I would recommend something like Filebeat rather than going via CW

We use kinesis firehose to send logs directly to es

And simple lambda to send the ones that end up in cloud watch logs

Much simpler than what AWS proposed in this doc
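A minimal sketch of that kind of Lambda, assuming a Firehose delivery stream (name hypothetical) with an Elasticsearch destination; it unpacks the gzipped, base64-encoded CloudWatch Logs subscription payload and forwards the events:

import base64
import gzip
import json
import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    # CloudWatch Logs delivers subscription data gzipped and base64-encoded.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    data = json.loads(payload)

    records = [
        {"Data": (json.dumps({
            "log_group": data["logGroup"],
            "log_stream": data["logStream"],
            "timestamp": e["timestamp"],
            "message": e["message"],
        }) + "\n").encode()}
        for e in data["logEvents"]
    ]
    # put_record_batch accepts at most 500 records; chunk for bigger batches.
    if records:
        firehose.put_record_batch(
            DeliveryStreamName="logs-to-es",   # hypothetical delivery stream
            Records=records,
        )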
2019-05-15

I wrote a blog post about this https://joshmyers.io/blog/aws-logging-pipeline.html
The problem space Back in the day, a logging pipeline was a pretty manual thing to setup and manage. Yes, configuration management tools like Chef/Puppet made this somewhat easier, but you still had to run your own ELK stack (OK, it didn’t have to be ELK, but it probably was, right?) and somehow get logs in there in a robust way. You’d probably be using some kind of buffer for the logs between source and your ELK stack.

An IAM user has AmazonS3FullAccess, yet she still fails to upload and download files

any clue why?

@joshmyers
A CloudWatch Log Group can subscribe directly to ElasticSearch, so why bother with Kinesis and Lambda? Flexibility and ElasticSearch lock-in
Subscribing to ES from CW Logs requires a lambda function that translates the gzipped format of CW Logs into ES. If someone did not automate it and just clicked subscribe, the lambda is created automatically, but automating it means maintaining that lambda function.

Ah, good to know. This was going back some years now, not sure what may have changed too

Am working on something similar now, but significantly more complex for a global client

~15TB a day of log data

only app logs or os logs as well?

- flowlogs + elb + s3 + ALL THE LOGS

ah

yeah that can be complex to maintain

Across multiple (read: hundreds) of AWS accounts and back to a Splunk cluster backed by tin on-prem

You can guess where the bottleneck is

depends on the on-prem part


spoiler: It is terrible

If you use fargate, it can log directly to splunk now (small good news for you :)

What happens when Splunk is having issues?

Not sure. I haven’t looked into it in any detail (we don’t use Splunk). Just noticed it a week ago when we were looking for alternatives for getting logs from Fargate containers. But I’d suspect things getting dropped. Cloud to on-prem for real-time logging is a bad idea in general unless you can manage the uptime and bandwidth of cloud providers

I guess that even our office connectivity which is ~1Gbps would be a problem in such a setup

also awslogs can stream to splunk directly afair

the awslogs agent?

That is news to me

This particular engagement is more complex because Splunk is on-prem

Splunk was added to fargate logger about 2 weeks ago. Not sure when it was added to awslogs agent.

I can’t find any info on awslogs agent and Splunk

awslogs is specific for logging to CloudWatch. Was surprised when @Maciek Strömich said it could log to splunk. I can’t find anything either. Just the standard stream from CloudWatch to splunk via lambda, etc

ah. it’s not awslogs but docker logging driver that logs to splunk

my bad

i thought that defining logging in Dockerrun.aws.json configures awslogs to ship logs to any of the available loggers
2019-05-17

A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! - localstack/localstack
2019-05-22

trying my luck here as well
hey everyone! Is there an easy way to also store/export/save apply outputs to SSM Parameter Store? The main reason being so that they can be consumed by other tools/frameworks which are non-Terraform?

SSM Parameter Store has a few purposes, one of which is not exposing things like secrets. The correct way to do it is to integrate the other tools with SSM Parameter Store, not to expose the values via Terraform

(and yes I can understand that it’s not always possible)

thanks @Maciek Strömich - I’m not using Terraform to expose them, but to provision infrastructure. Once provisioned successfully the ARNs, IDs and names of those resources are stored in the JSON-like state file. If i’d like to reference them from another framework like Serverless or CDK I need to use HCL and the remote state datasource.
The reason for using SSM (Parameter Store), which also has String and StringList types, is to allow others to get the IDs/ARNs/etc. of the resources built with Terraform

ah, that makes more sense

I’m not even a terraform noob so can’t help with that.

I misread your message

sorry for making noise

@Bogdan we do it all the time - store TF outputs in SSM for later consumption from other modules or apps

for example, https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L190 - here we save a bunch of fields into SSM
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Then we use chamber to consume them

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

You can use any other means to read them, terraform or SDK

@Andriy Knysh (Cloud Posse) I saw that, but unfortunately not all modules are built like yours: meaning that your solution is module dependent, so if I haven’t written the module myself just using one from the registry I can’t go and create N+1 PRs to add SSM params to all the modules that are open-source

@Andriy Knysh (Cloud Posse) I could however use the module outputs and do them outside

You can wrap any module in your module and then add SSM write for the outputs

which is what I’ll try to do, but it’s still a suboptimal solution as I have to do it everytime I create a resource/module I’d like in SSM

That’s what you have to do anyway since not all modules will need SSM

I started building something that iterates through terraform state list, then calls terraform state show on a particular type of resource - like VPCs, subnet_ids, etc.

so I don’t have to do it at module init or handle it on a per-module basis

it’s just a pity that terraform state show doesn’t return JSON, so I have to combine head and awk to get the value that interests me, which I then have to aws ssm put-parameter with
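Roughly what that looks like as a script (a sketch only: the resource filter and SSM path are hypothetical, and it assumes the pre-0.12 terraform state show output of key = value lines):

import subprocess
import boto3

ssm = boto3.client("ssm")

def state_show(address):
    # Parse `terraform state show <address>` output into a dict of attributes.
    out = subprocess.run(
        ["terraform", "state", "show", address],
        check=True, capture_output=True, text=True,
    ).stdout
    attrs = {}
    for line in out.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            attrs[key.strip()] = value.strip()
    return attrs

addresses = subprocess.run(
    ["terraform", "state", "list"],
    check=True, capture_output=True, text=True,
).stdout.split()

for address in addresses:
    if address.startswith("aws_vpc."):          # hypothetical resource filter
        vpc_id = state_show(address).get("id")
        if vpc_id:
            ssm.put_parameter(
                Name="/terraform/{}/id".format(address),   # hypothetical SSM path
                Value=vpc_id,
                Type="String",
                Overwrite=True,
            )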

Hmm, that should work, but looks complicated. Why not just assemble low level modules into a top level module and then write just those outputs that you really need to SSM

I’m actually considering doing that since it’s faster

there is a separate module for writing and reading to/from SSM https://github.com/cloudposse/terraform-aws-ssm-parameter-store
Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

@Andriy Knysh (Cloud Posse) did you also get `error creating SSM parameter: TooManyUpdates`?

I have definitely run into this problem

It’s just another terraformism


it appears to be a known issue: https://github.com/terraform-providers/terraform-provider-aws/issues/1082
Terraform Version 0.9.11 Affected Resource(s) aws_ssm_parameter Terraform Configuration Files variable "configs" { description = "Key/value pairs to create in the SSM Parameter Store…

how many params are you writing at the same time?

10-15

but the error went away after a subsequent apply

hmm seems like AWS rate limiting

I wonder if it’s a safety mechanism so they can preserve the history of changes since SSM parameter store does keep a version history of changes… if it’s eventually consistent like most AWS resources, this is my guess on why this limitation is there
2019-05-27

I’m looking for a solution to manage SSH access to internal hosts via a bastion box. Teleport is way more than what we need.

my team all use SSH keys on physical tokens. so far we bake the ssh keys into the AMIs

but revoking a users access requires rebuilding the amis, which isn’t ideal.

Hoping to find a simple system to dynamically add/remove keys

in the past I’ve put public keys in s3, and baked a script on each box that is set to run every 15 minutes which adds/removes the keys on a host based on what’s in s3

then if you want to revoke a key, you just delete it from s3

kinda low tech, but does the job
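A low-tech sketch of that approach, run from cron every 15 minutes (bucket, prefix, and target user are hypothetical): rebuild authorized_keys from whatever public keys are in the S3 prefix, so deleting a key from S3 revokes access on the next cycle.

import boto3

BUCKET = "example-ssh-public-keys"             # hypothetical bucket
PREFIX = "team/"                               # hypothetical prefix
AUTHORIZED_KEYS = "/home/ec2-user/.ssh/authorized_keys"

s3 = boto3.client("s3")

keys = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        keys.append(body.decode().strip())

# Rewriting the whole file each run means a key deleted from S3 disappears
# from the host on the next cycle.
with open(AUTHORIZED_KEYS, "w") as f:
    f.write("\n".join(keys) + "\n")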

that is a simple solution!

Maybe via Secrets Manager? And load the keys on startup via a common tag or something like that. When you want to revoke access, just delete the Secrets Manager entry.
Haven’t tried that, just suggesting things

a bastion is easier in the sense that you revoke users’ keys only on the bastion host(s)

and then you can just open ssh from that specific host

so you enable ssh without authentication from the bastion to the internal host?

Yes

Ohhh wait

You mean without having to copy the keys?

It’s probably a good idea to have the keys on both the bastion and the internal host… one can also then use proxycommand to jump. You can still revoke access by just removing the key on the bastion, and then removing from internal hosts at a more leisurely pace

yes, you will have to have all the keys copied over somehow, with their home dirs and authorized_keys in .ssh, unless you want to share a single key on the bastion, but that is far more insecure

Hi everyone! I’d like to invite you to the next chapter of our webinar series next Thursday 04/30, where we’ll talk about how to create and administer a productive environment reaching operational excellence, and how to incorporate these processes into your workspace.
It’s a great learning opportunity no matter what role you have, as long as your business relies on IT workloads.
See you there!
https://www.eventbrite.com.ar/e/alcanzando-la-excelencia-operacional-tickets-62208718953
Metrics, tools, and best practices for monitoring your cloud environments. What will you see? The importance of operational excellence in the cloud and how to approach it. Preparing your environment to operate in production. Saving the operations team trouble and sleepless nights. Building your ecosystem of tools, metrics, and alarms to get proactive and predictive monitoring in production

Hey @Juan Cruz Diaz where are you from?

Hi Agus. I’m from Argentina. So, if you want we can talk in spanish

Not sure everyone will catch on haha, but still good to have a local guy

I’m Chilean…..

please, let’s not fight

maybe we should create a terraform-es channel

Haha, we won’t fight for any reason

@Erik Osterman (Cloud Posse) would you mind if we created a #terraform-es channel? the guys here would like that

Sure!

Awesome, we should all commit to passing on to English anything significant or relevant to all

I have created the #terraform-es channel

Thanks Erik
2019-05-28

I’m thinking of trying out the Systems Manager Session Manager feature of AWS and doing away with a bastion entirely. We only need shell access for debugging, so accessing it through Session Manager will get us auditing and remove the need for key management.

Ansible should straight away help instead of startup scripts / AMI builds.

Session Manager is great - love being able to use IAM to control ‘SSH’ access. The only thing it’s missing is if you need to SCP stuff - we’ve written a quick wrapper for aws s3 cp to make that feel a bit more native (basically using S3 as a proxy of sorts, so you have to run it both locally and remotely to push/pull the file you want).
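Not their wrapper, but a tiny sketch of the same “S3 as a proxy” idea (bucket name is hypothetical); the same script runs on both ends, one side pushes and the other pulls through a staging bucket.

import subprocess
import sys

BUCKET = "example-session-manager-scp"         # hypothetical staging bucket

def push(local_path, name):
    subprocess.run(
        ["aws", "s3", "cp", local_path, "s3://{}/{}".format(BUCKET, name)],
        check=True,
    )

def pull(name, local_path):
    subprocess.run(
        ["aws", "s3", "cp", "s3://{}/{}".format(BUCKET, name), local_path],
        check=True,
    )

if __name__ == "__main__":
    # e.g. `python scp_wrapper.py push ./dump.sql dump.sql` locally,
    # then `python scp_wrapper.py pull dump.sql /tmp/dump.sql` on the instance.
    action, src, dst = sys.argv[1:4]
    push(src, dst) if action == "push" else pull(src, dst)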
2019-05-29

anyone encountered The AWS Access Key Id you provided does not exist in our records after aws sts assume-role and exporting all the output into ENV vars?

and I couldn’t fix it with https://aws.amazon.com/premiumsupport/knowledge-center/access-key-does-not-exist/
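For comparison, a minimal sketch of the assume-role-then-export flow with boto3 (role ARN and session name are placeholders); one common cause of that error is the session token not being exported alongside the temporary access key:

import os
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example",   # hypothetical role
    RoleSessionName="debug-session",
)["Credentials"]

# All three values have to be exported together; a stale AWS_SESSION_TOKEN or
# AWS_SECURITY_TOKEN left in the environment also breaks things.
os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]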

We use SSM too; I wrote a small Go program that lets us select an instance quickly


you can select your instance with arrows, or filter by typing something, and Enter connects you to the instance directly

IAM manages who can access
2019-05-30



so testing session manager has been going well. our team likes it.

we don’t copy files, so that hasn’t been an issue

however it turns out we did use ssh tunnels to access RDS postgres instances for running ad hoc analytics/queries

thinking about how best to manage that now

@Abel Luck same trouble for us, we haven’t figured out yet how to manage tunnels to access RDS

we do copy files, but we use S3 for that, even with presigned URL to PUT/GET

the most important gain for my part is that I’m able to quickly connect to any instance when an on-call alert triggers, without waiting for the VPN to be up

I’m thinking about how to integrate the websocket connection directly into my Golang app, with no extra dependency

as several of you seem interested, I’ll try to release it tomorrow (need to find a better name)

we use metabase for doing adhoc queries to share with other folks and it works quite well, gotta get devs/sysadmins to use it too now

The fastest, easiest way to share data and analytics inside your company. An open source Business Intelligence server you can install in 5 minutes that connects to MySQL, PostgreSQL, MongoDB and more! Anyone can use it to build charts, dashboards and nightly email reports.

though it’s more for read-only querying.

maybe an instance with pgadmin accessed via vpn would suffice too

is this a kind of phpMyAdmin or Adminer?

pgadmin is like phpmyadmin but for postgres (and way better). Metabase isn’t like any of those.. it really stands alone. great tool.

a tool I wrote and would like to make FOSS is this:

yea

ok, will give it a try

but a lot of our customers (we’re a managed services provider) use MySQL benchmark or other apps on their laptops

and more mysql than postgr (sic)

yea, then some sort of tunnel will be needed

one could always configure ssh such that it allows database connections but not shell access

sure, but you still need a kind of bastion

yea indeed

lambda + spot instance could provide us a “BaaS”

bastion as a Service

you call an API Gateway with your credentials, it creates an EC2 instance with a proper SG, opened only to your IP, and tadaaa

if the lambda detects there’s no more traffic for a while, we terminate it
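A very rough sketch of that idea, assuming an API Gateway proxy integration in front of the Lambda (AMI, VPC, subnet, and key pair are hypothetical, and the idle-detection/termination part is left out):

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    caller_ip = event["requestContext"]["identity"]["sourceIp"]

    # Security group that only allows SSH from the caller's IP.
    sg = ec2.create_security_group(
        GroupName="bastion-{}".format(context.aws_request_id),
        Description="Temporary bastion SG",
        VpcId="vpc-0123456789abcdef0",            # hypothetical VPC
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpProtocol="tcp", FromPort=22, ToPort=22,
        CidrIp="{}/32".format(caller_ip),
    )

    instance = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",          # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        KeyName="bastion-key",                    # hypothetical key pair
        SecurityGroupIds=[sg["GroupId"]],
        SubnetId="subnet-0123456789abcdef0",      # hypothetical public subnet
        InstanceMarketOptions={"MarketType": "spot"},   # spot, as suggested above
    )
    return {"instanceId": instance["Instances"][0]["InstanceId"]}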

yea nice

Match User rds-only-user
AllowTcpForwarding yes
X11Forwarding no
PermitTunnel no
GatewayPorts no
AllowAgentForwarding no
PermitOpen your-rds-hostname:5432
ForceCommand echo 'No shell access.'

that sshd config, i think, is all you need to allow port forwarding only

thanks


explanation: a fully Golang app which lists all your AWS resources with high granularity and stores all the resources, with the links between them, in a graph DB

you can query and get a lot of facts

easy to find out which ec2 have access to rds

which EC2 instances are open to the world on port 22

ec2 without snapshots

etc etc

super cool!

with new UI, it looks like that


after a big clean up and some doc, i will share it for sure

That looks great! Would love to take it for a spin

can you not give metabase the RDS hostname? or is it in a different VPC?

exactly, deploy metabase inside the vpc, route through a LB.
2019-05-31

@Tim Malone @Daniel Lin https://github.com/claranet/sshm
Easy connect on EC2 instances thanks to AWS System Manager Agent. Just use your ~/.aws/profile to easily select the instance you want to connect to - claranet/sshm

need a binary?

i’m planning to use goreleaser soon

doesn’t support aws-vault

@atom it seems you are the maintainer at Oxalide

sorry don’t know what you mean - must be a different Thomas

me I guess

it would be great if we could keep the credentials in a safe vault instead of plain clear text

we don’t have secrets in our aws/config

we use our ADFS to connect to our main account and then switch roles

can you tell me more about what you want please