#aws (2019-05)

aws Discussion related to Amazon Web Services (AWS)

Archive: https://archive.sweetops.com/aws/

2019-05-01

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Apple spends more than $30m a month on AWS, though this number is falling. Slack’s AWS spend is a contractual $50M/year minimum.

Lee Skillen avatar
Lee Skillen

… And I thought our bill of ~$20k/year was terrifying

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Lyft plans to spend $300 million on Amazon Web Services through 2021

Lyft has signed up to pay Amazon Web Services at least $80 million per year for the next three years, totaling at least $300 million.

Lee Skillen avatar
Lee Skillen

Better to sell shovels than mine gold during a gold rush, eh.

rohit avatar

Does anyone know if it is possible to add tags to S3 object (node sdk) when using presigned url ?

rohit avatar

maybe this is not the right place to ask this question

rohit avatar

but i just did

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmmmm good question. Don’t think it’s possible natively, but anything is possible with lambdas

rohit avatar

that’s true but that’s not something i want to do in my scenario

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I wouldn’t want to either

rohit avatar

let me explain my scenario so that you can better understand

rohit avatar

when i want to upload something from my app, i am making a request to my backend service(nodejs) which returns presigned url then i use that in my frontend to directly upload the object from the browser

rohit avatar

it says

Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request.
rohit avatar

so i tried sending the tags in the ajax request with presigned url and i get invalid tag error

2019-05-02

Issif avatar
Any way to use presigned URL uploads and enforce tagging?

Is there any way to issue a presigned URL to a client to upload a file to S3, and ensure that the uploaded file has certain tags? Using the Python SDK here as an example, this generates a URL as de…

Issif avatar
To clarify, I need to use {'tagging': '...'} with the verbose XML tag set syntax for both fields and conditions, and that seems to work as required.
Issif avatar

Golang SDK (I know it much better) says:

Issif avatar
// The tag-set for the object. The tag-set must be encoded as URL Query parameters.
// (For example, "Key1=Value1")
Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"`

rohit avatar

@Issif thanks. Unfortunately, i did not have any luck

2019-05-06

Alex Siegman avatar
Alex Siegman

I think just recently I saw a note about a better tool/ui to look at your AWS Parameter Store Secrets, but for the life of me I can’t find it. The searching is terrible in the AWS console. All I really want is fuzzy search and proper pagination on searches >.< Anyone have something they know about?

Harry H avatar
Harry H

Was it this one? https://github.com/smblee/parameter-store-manager

Saw this a few days ago

smblee/parameter-store-manager

A cross platform desktop application that provides an UI to easily view and manage AWS SSM parameters. - smblee/parameter-store-manager

Issif avatar

Parameter Store is under AWS Systems Manager (or in the EC2 console; scroll down the left panel)

Alex Siegman avatar
Alex Siegman

@Harry H that was it i believe. Thanks!

rohit avatar

anyone here used lambda as a compiler ?

rohit avatar

I want to send the code entered in a code editor inside a web application to lambda, compile it, and get the results back

rohit avatar

let me know if i am crazy

Tim Malone avatar
Tim Malone

i’m sure someone’s done it! ppl have done all sorts of awesomely crazy things in lambda

Tim Malone avatar
Tim Malone

sounds like a fun project. you can shell out to anything so i suspect if you packaged a compiler with your code (or downloaded it from s3 during function invocation) then you could run it

rohit avatar

I want to build some sort of endpoint which accepts code sent from my web app and compiles

rohit avatar

i am trying to figure out how companies like hackerrank,repl.it do it
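
One way this is typically done (a sketch under assumptions, not how hackerrank or repl.it actually work): the handler writes the submitted source to /tmp, the only writable path in a Lambda sandbox, and shells out to a compiler bundled in a layer or fetched from S3 at cold start. The event shape and compiler command here are hypothetical:

```python
import subprocess
import tempfile

def handler(event, context=None, compiler=("gcc", "-o", "/tmp/a.out")):
    # /tmp is the only writable filesystem inside a Lambda sandbox.
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as src:
        src.write(event["source"])
        path = src.name
    # Run the bundled compiler against the submitted source and capture output.
    proc = subprocess.run(
        [*compiler, path], capture_output=True, text=True, timeout=30
    )
    return {"exit_code": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}
```

Untrusted code still deserves sandboxing beyond what Lambda gives you by default (tight timeouts, a minimal execution role, no outbound network).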

2019-05-07

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I’m told by one of our devs that you should see if there is a way for your code to compile and execute code, because lambda is simply a code container.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

the alternative is to find a way to compile the incoming code, get the compiled file (a .jar or .zip file, for example), upload it to a temporary lambda, and then execute that lambda

rohit avatar

i did not find similar use cases online

btai avatar

anyone ever get this error DNS_PROBE_FINISHED_NXDOMAIN intermittently with route 53

btai avatar

accessing a web app

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is this under k8s?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Check the logs for kube-dns. Had lots of problems like this in the past, but that was way back on Kubernetes 1.7 or earlier

2019-05-08

oscarsullivan_old avatar
oscarsullivan_old

AWS London today.. anyone attending?

oscarsullivan_old avatar
oscarsullivan_old

Good array of breakout sessions avlb

btai avatar

@Erik Osterman (Cloud Posse) it can’t be the cluster if I do an nslookup on the hostname and it doesn’t resolve the ip addresses. At that point is it a route 53 issue? (Happens intermittently for a few minutes)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is it on a delegated zone?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve had the problem where one of the NS servers in the delegation was wrong

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so in a round robin fashion some requests would fail

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

same goes for the TLD

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if the nameservers are off

btai avatar

ah no i just double checked them. this intermittent issue has been happening only for a week, but ive had the same route53 record for over a year

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is it only from your office or home?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. maybe switching your own NS to 1.1.1.1 may help

btai avatar

no multiple customers

btai avatar

also we use uptimerobot and that will report some downtime too when it happens

btai avatar

for very brief period of time

btai avatar

this might help, when I have that brief period of downtime, it doesn’t resolve the ip addresses during nslookup:

$ nslookup blah.example.com
Server:        192.168.1.1
Address:    192.168.1.1#53

Non-authoritative answer:
blah.example.com    canonical name = app.example.com.
btai avatar

healthy:

$ nslookup blah.example.com
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
blah.example.com	canonical name = app.example.com.
Name:	app.example.com
Address: 54.191.49.21
Name:	app.example.com
Address: 54.203.171.148
Name:	app.example.com
Address: 54.212.199.41
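
A quick way to catch these windows from several vantage points at once is a small resolution probe (stdlib only; the hostname and interval are placeholders). If every vantage point fails at the same time, the problem is the zone or its delegation rather than any one local resolver:

```python
import socket
import time

def probe(hostname, attempts=5, interval=1.0):
    """Resolve `hostname` repeatedly, returning the attempts that failed."""
    failures = []
    for i in range(attempts):
        try:
            # Collect the unique A/AAAA addresses returned for this attempt.
            addrs = sorted({info[4][0] for info in socket.getaddrinfo(hostname, 443)})
            print(f"{i}: {hostname} -> {addrs}")
        except socket.gaierror as err:
            failures.append((i, str(err)))
            print(f"{i}: {hostname} FAILED: {err}")
        if i < attempts - 1:
            time.sleep(interval)
    return failures
```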
btai avatar

Also only started happening since Saturday but 3 times now

btai avatar

But yes on k8s and I don’t see anything out of the ordinary in the kube-dns logs

2019-05-09

vishnu.shukla avatar
vishnu.shukla

Hi, can anyone help with why the deployment is failing?

vishnu.shukla avatar
vishnu.shukla

here are the details of what i can see: “Invalid action configuration The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: codepipeline-eu-central-1-516857284380. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.”

vishnu.shukla avatar
vishnu.shukla

but the bucket exists and there is no lifecycle policy either

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@vishnu.shukla without seeing the code, it’s difficult to say anything. Make sure you configured the input and output artifacts for all stages correctly (the output from one stage should go to the input to the next stage), for example https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L233

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also make sure you setup all the required permissions to access the bucket https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L101

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

vishnu.shukla avatar
vishnu.shukla

Sure Aknysh, thanks a lot

2019-05-10

btai avatar

anyone else have issues with route53 not resolving dns for you lately?

Alex Siegman avatar
Alex Siegman

how “lately” do you mean? it’s been fine for us and we rely on it heavily for inter-service connections

btai avatar

like the past week

btai avatar

been having intermittent issues where dns doesnt get resolved for a few minutes

Alex Siegman avatar
Alex Siegman

We haven’t noticed anything, but, they are reporting a problem right now: https://status.aws.amazon.com/

Alex Siegman avatar
Alex Siegman

But that only affects modifications/new record sets, we don’t change our entries often

btai avatar

yeah

btai avatar

i was just gonna say, today i have a cname set up for 45 minutes now and its still not resolving

Alex Siegman avatar
Alex Siegman

looks like that above issue might be why

btai avatar

yeah thanks man, i was just assuming it had to do with the other intermittent issues ive been seeing this past week

Lee Skillen avatar
Lee Skillen

What’s the domain or TLD at least? It might not be AWS. We occasionally have issues with the io domain because the root name servers can be flakey. :)

btai avatar

com @Lee Skillen

Maciek Strömich avatar
Maciek Strömich

If anyone started to experience issues with MySQL RDS it’s because of today’s route53 outage

Maciek Strömich avatar
Maciek Strömich

because of how the connection is established with mysql, you can see a lot of

| 23653344 | unauthenticated user | ip:port  | NULL    | Connect | NULL | login            | NULL                  |

in show full processlist;

Maciek Strömich avatar
Maciek Strömich

you can set skip-name-resolve to 1 in your parameter group to fix the issue

Maciek Strömich avatar
Maciek Strömich

sadly it’s a partial fix ;-/

2019-05-11

Maciek Strömich avatar
Maciek Strömich

how do you set --dns-opt in AWS ECS? it’s not available via dns settings in the ecs agent. i know that I can update resolv.conf via a custom entrypoint.sh script but I wonder if there’s a better/easier way

2019-05-13

Bogdan avatar

Anyone here used https://aws.amazon.com/solutions/centralized-logging/? I’m considering it, but at the same time hesitating due to costs (their cheapest cluster starts at 35 USD/day - https://docs.aws.amazon.com/solutions/latest/centralized-logging/considerations.html#custom-sizing) as well as complexity (logs are first collected in CW Logs, then go to ES)

Design Considerations - Centralized Logging on AWS

Regional deployment considerations

Bogdan avatar

I’d much rather prefer them being sent directly to an ES cluster via an Interface VPC Endpoint of course

Tim Malone avatar
Tim Malone

Haven’t used that solution, but most AWS-vended logs end up in either CloudWatch or S3 (there’s no native ability to send to ES) so unfortunately there’s not much way around the complexity. For logs on instance, though, I would recommend something like Filebeat rather than going via CW

Maciek Strömich avatar
Maciek Strömich

We use kinesis firehose to send logs directly to es

Maciek Strömich avatar
Maciek Strömich

And simple lambda to send the ones that end up in cloud watch logs

Maciek Strömich avatar
Maciek Strömich

Much simpler than what AWS proposed in this doc

2019-05-15

joshmyers avatar
joshmyers
Automated AWS logging pipeline • Josh Myers

The problem space Back in the day, a logging pipeline was a pretty manual thing to setup and manage. Yes, configuration management tools like Chef/Puppet made this somewhat easier, but you still had to run your own ELK stack (OK, it didn’t have to be ELK, but it probably was, right?) and somehow get logs in there in a robust way. You’d probably be using some kind of buffer for the logs between source and your ELK stack.

vishnu.shukla avatar
vishnu.shukla

An IAM user has AmazonS3FullAccess, but she still fails to upload and download the file

vishnu.shukla avatar
vishnu.shukla

any clue why?

Maciek Strömich avatar
Maciek Strömich

@joshmyers

A CloudWatch Log Group can subscribe directly to ElasticSearch, so why bother with Kinesis and Lambda? Flexibility and ElasticSearch lock-in

Subscribing to ES from CW logs requires a lambda function that will translate gzipped format of CW Logs into ES. If someone did not automate it and just clicked subscribe then the lambda will be created automatically but automation requires maintaining this lambda function.
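
The gzipped format mentioned here: CloudWatch Logs delivers subscription events base64-encoded and gzip-compressed, so any hand-rolled forwarding Lambda has to unpack them first. A minimal sketch of the decode step; the Elasticsearch indexing call itself is left out:

```python
import base64
import gzip
import json

def decode_cw_event(event):
    """Unpack a CloudWatch Logs subscription event into its log messages."""
    payload = base64.b64decode(event["awslogs"]["data"])
    body = json.loads(gzip.decompress(payload))
    # body["logEvents"] is a list of {"id", "timestamp", "message"} records;
    # body also carries logGroup/logStream metadata useful for index naming.
    return [e["message"] for e in body.get("logEvents", [])]
```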

joshmyers avatar
joshmyers

Ah, good to know. This was going back some years now, not sure what may have changed too

joshmyers avatar
joshmyers

Am working on something similar now, but significantly more complex for a global client

joshmyers avatar
joshmyers

~15TB a day of log data

Maciek Strömich avatar
Maciek Strömich

only app logs or os logs as well?

joshmyers avatar
joshmyers
  • flowlogs + elb + s3 + ALL THE LOGS
Maciek Strömich avatar
Maciek Strömich

ah

Maciek Strömich avatar
Maciek Strömich

yeah that can be complex to maintain

joshmyers avatar
joshmyers

Across multiple (read: hundreds) of AWS accounts and back to a Splunk cluster backed by tin on-prem

joshmyers avatar
joshmyers

You can guess where the bottleneck is

Maciek Strömich avatar
Maciek Strömich

depends on the on-prem part

joshmyers avatar
joshmyers

spoiler: It is terrible

Steven avatar

If you use fargate, it can log directly to splunk now (small good news for you :)

joshmyers avatar
joshmyers

What happens when Splunk is having issues?

Steven avatar

Not sure. I haven’t looked into it in any detail (we don’t use splunk). Just noticed a week ago when we were looking for alternatives for getting logs from fargate containers. But I’d suspect things getting dropped. Cloud to on-prem for real time logging is a bad idea in general unless you can manage the uptime and bandwidth of cloud providers

Maciek Strömich avatar
Maciek Strömich

I guess that even our office connectivity which is ~1Gbps would be a problem in such a setup

Maciek Strömich avatar
Maciek Strömich

also awslogs can stream to splunk directly afair

joshmyers avatar
joshmyers

the awslogs agent?

joshmyers avatar
joshmyers

That is news to me

joshmyers avatar
joshmyers

This particular engagement is more complex because Splunk is on-prem

Steven avatar

Splunk was added to fargate logger about 2 weeks ago. Not sure when it was added to awslogs agent.

joshmyers avatar
joshmyers

I can’t find any info on awslogs agent and Splunk

Steven avatar

awslogs is specific for logging to CloudWatch. Was surprised when @Maciek Strömich said it could log to splunk. I can’t find anything either. Just the standard stream from CloudWatch to splunk via lambda, etc

Maciek Strömich avatar
Maciek Strömich

ah. it’s not awslogs but docker logging driver that logs to splunk

Maciek Strömich avatar
Maciek Strömich

my bad

Maciek Strömich avatar
Maciek Strömich

i thought that defining logging in Dockerrun.aws.json configures awslogs to ship logs to any of the available loggers

2019-05-17

Maciek Strömich avatar
Maciek Strömich
localstack/localstack

A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! - localstack/localstack

2019-05-22

Bogdan avatar
Bogdan
12:03:20 PM

trying my luck here as well

hey everyone! Is there an easy way to also store/export/save apply outputs to SSM Parameter Store? The main reason being so that they’re consumed by other tools frameworks which are non-Terraform?

Maciek Strömich avatar
Maciek Strömich

ssm parameter store has a few purposes, one of which is not exposing e.g. secrets. the correct way is to integrate the other tools with ssm parameter store, not expose them via terraform

Maciek Strömich avatar
Maciek Strömich

(and yes I can understand that it’s not always possible)

Bogdan avatar

thanks @Maciek Strömich - I’m not using Terraform to expose them, but to provision infrastructure. Once provisioned successfully the ARNs, IDs and names of those resources are stored in the JSON-like state file. If i’d like to reference them from another framework like Serverless or CDK I need to use HCL and the remote state datasource. The reason for using SSM (Parameter Store) which also has String and StringList types is to allow others to get the IDs/ARNs/etc of the resources built with Terraform

Maciek Strömich avatar
Maciek Strömich

ah, that makes more sense

Maciek Strömich avatar
Maciek Strömich

I’m not even a terraform noob so can’t help with that.

Maciek Strömich avatar
Maciek Strömich

I misread your message

Maciek Strömich avatar
Maciek Strömich

sorry for making noise

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Bogdan we do it all the time - store TF outputs in SSM for later consumption from other modules or apps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L190 - here we save a bunch of fields into SSM

cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Then we use chamber to consume them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/geodesic

Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can use any other means to read them, terraform or SDK

Bogdan avatar

@Andriy Knysh (Cloud Posse) I saw that, but unfortunately not all modules are built like yours: meaning that your solution is module dependent, so if I haven’t written the module myself just using one from the registry I can’t go and create N+1 PRs to add SSM params to all the modules that are open-source

Bogdan avatar

@Andriy Knysh (Cloud Posse) I could however use the module outputs and do them outside

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can wrap any module in your module and then add SSM write for the outputs

Bogdan avatar

which is what I’ll try to do, but it’s still a suboptimal solution as I have to do it every time I create a resource/module I’d like in SSM

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

That’s what you have to do anyway since not all modules will need SSM

Bogdan avatar

I started building something that iterates through terraform state list, then calls terraform state show on a particular type of resource - like VPCs, subnet_ids, etc

Bogdan avatar

so I don’t have to do it at module init or handle it on a per-module basis

Bogdan avatar

it’s just a pity that terraform state show doesn’t return JSON

Bogdan avatar

and I have to combine head and awk for getting the value that interests me, which then I have to aws ssm put-parameter with
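
As an aside, `terraform output -json` (unlike `terraform state show`) does emit JSON, which sidesteps the head/awk step for anything the root module exposes as an output. A sketch that maps that JSON onto `aws ssm put-parameter` arguments; the `/tf` prefix is a placeholder:

```python
import json

def outputs_to_parameters(output_json, prefix="/tf"):
    """Map `terraform output -json` to (name, value, type) tuples for
    `aws ssm put-parameter`; list outputs become StringList values."""
    params = []
    for name, meta in json.loads(output_json).items():
        # 0.11-era output -json wraps each output as {"type", "value", ...}.
        value = meta.get("value", meta) if isinstance(meta, dict) else meta
        if isinstance(value, list):
            params.append((f"{prefix}/{name}", ",".join(map(str, value)), "StringList"))
        else:
            params.append((f"{prefix}/{name}", str(value), "String"))
    return params

# for name, value, ptype in outputs_to_parameters(raw):
#     ssm.put_parameter(Name=name, Value=value, Type=ptype, Overwrite=True)
```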

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Hmm, that should work, but looks complicated. Why not just assemble low level modules into a top level module and then write just those outputs that you really need to SSM

Bogdan avatar

I’m actually considering doing that since it’s faster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is a separate module for writing and reading to/from SSM https://github.com/cloudposse/terraform-aws-ssm-parameter-store

cloudposse/terraform-aws-ssm-parameter-store

Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store

Bogdan avatar

@Andriy Knysh (Cloud Posse) did you also get `error creating SSM parameter: TooManyUpdates`?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have definitely run into this problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s just another terraformism

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No way to work around it other than to rerun

Bogdan avatar
aws_ssm_parameter TooManyUpdates error · Issue #1082 · terraform-providers/terraform-provider-aws

Terraform Version 0.9.11 Affected Resource(s) aws_ssm_parameter Terraform Configuration Files variable "configs" { description = "Key/value pairs to create in the SSM Parameter Store…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how many params are you writing at the same time?

Bogdan avatar

10-15

Bogdan avatar

but the error went away after a subsequent apply

sarkis avatar

hmm seems like AWS rate limiting

sarkis avatar

I wonder if it’s a safety mechanism so they can preserve the history of changes since SSM parameter store does keep a version history of changes… if it’s eventually consistent like most AWS resources, this is my guess on why this limitation is there
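
If the writes are scripted rather than driven by Terraform, the error is retryable with backoff; `put` below is a stand-in for a boto3 `ssm.put_parameter` call, so this is a sketch rather than a fix for the provider itself:

```python
import time

def put_with_backoff(put, name, value, retries=5, base_delay=0.1):
    """Retry an SSM write while the API answers TooManyUpdates."""
    for attempt in range(retries):
        try:
            return put(Name=name, Value=value, Type="String", Overwrite=True)
        except Exception as err:
            # boto3 surfaces this as a ClientError whose message
            # contains the TooManyUpdates error code.
            if "TooManyUpdates" not in str(err) or attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```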

2019-05-27

Abel Luck avatar
Abel Luck

I’m looking for a solution to manage ssh access to internal hosts via a bastion box. Teleport is way more than what we need.

Abel Luck avatar
Abel Luck

my team all use SSH keys on physical tokens. so far we bake the ssh keys into the AMIs

Abel Luck avatar
Abel Luck

but revoking a user’s access requires rebuilding the amis, which isn’t ideal.

Abel Luck avatar
Abel Luck

Hoping to find a simple system to dynamically add/remove keys

Fizz avatar

in the past I’ve put public keys in s3, and baked a script on each box that is set to run every 15 minutes which adds/removes the keys on a host based on what’s in s3

Fizz avatar

then if you want to revoke a key, you just delete it from s3

Fizz avatar

kinda low tech, but does the job

Abel Luck avatar
Abel Luck

that is a simple solution!

Rice Bowl Junior avatar
Rice Bowl Junior

Maybe via Secrets Manager? And load the keys on startup via a common tag or something like that. When you want to revoke access, just delete the Secrets Manager entry.

Haven’t tried that, just suggesting things

jose.amengual avatar
jose.amengual

bastion is easier in the sense that you revoke users’ keys only on the bastion host(s)

jose.amengual avatar
jose.amengual

and then you can just open ssh from that specific host

Abel Luck avatar
Abel Luck

so you enable ssh without authentication from the bastion to the internal host?

jose.amengual avatar
jose.amengual

Yes

jose.amengual avatar
jose.amengual

Ohhh wait

jose.amengual avatar
jose.amengual

You mean without having to copy the keys?

Tim Malone avatar
Tim Malone

It’s probably a good idea to have the keys on both the bastion and the internal host… one can also then use proxycommand to jump. You can still revoke access by just removing the key on the bastion, and then removing from internal hosts at a more leisurely pace

jose.amengual avatar
jose.amengual

yes, you will have to have all the keys copied over somehow, with their home dirs and authorized_keys in .ssh, unless you want to share a single key on the bastion, but that is far more insecure

Juan Cruz Diaz avatar
Juan Cruz Diaz

Hi everyone! I’d like to invite you to the next chapter of our webinar series next Thursday 04/30, where we’ll talk about how to create and administer a production environment, reaching operational excellence, and how to incorporate these processes into your workplace.

It’s a great learning opportunity no matter what role you have, as long as your business relies on IT workloads.

See you there!

https://www.eventbrite.com.ar/e/alcanzando-la-excelencia-operacional-tickets-62208718953

Alcanzando la excelencia operacional [Reaching operational excellence]

Metrics, tools and best practices to monitor your cloud environments. What you’ll see: the importance of operational excellence in the cloud and how to approach it; preparing your environment to operate in production; sparing the operations team problems and sleepless nights; building your ecosystem of tools, metrics and alarms for proactive and predictive monitoring in production

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Hey @Juan Cruz Diaz where are you from?

Juan Cruz Diaz avatar
Juan Cruz Diaz

Hi Agus. I’m from Argentina. So, if you want we can talk in spanish

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Not sure everyone will catch on to it haha, but still good to have a local guy

jose.amengual avatar
jose.amengual

I’m Chilean…..

jose.amengual avatar
jose.amengual

please, let’s not fight

jose.amengual avatar
jose.amengual

maybe we should create a terraform-es channel

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Hahaha we won’t fight for any reason

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

@Erik Osterman (Cloud Posse) would you mind if we created a #terraform-es channel? the guys here would like that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sure!

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Awesome, we should all commit to passing on in english anything significant or relevant to all

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I have created the #terraform-es channel

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks Erik

2019-05-28

Abel Luck avatar
Abel Luck

I’m thinking of trying out the system manager sesssion manager feature of aws, and do away with a bastion entirely. we only need shell access for debugging, so accessing it through the session manager will get us auditing and remove the need for key management.

Suresh avatar

Ansible should help straight away, instead of startup scripts / AMI builds.

Tim Malone avatar
Tim Malone

Session manager is great - love being able to use IAM to control ‘SSH’ access. Only thing it’s missing is if you need to SCP stuff - we’ve written a quick wrapper for aws s3 cp to make that feel a bit more native (basically using S3 as a proxy of sorts, so you have to run it both locally and remotely to push/pull the file you want).

2019-05-29

Bogdan avatar

anyone encountered The AWS Access Key Id you provided does not exist in our records after aws sts assume-role and exporting all the output into ENV vars?
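
That error is usually an incomplete or mixed export: temporary credentials only work when AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN all come from the same assume-role response, and a stale token left over from an earlier session produces exactly this message. A small helper (a sketch, not an official tool) that turns the `aws sts assume-role` JSON into the three export lines:

```python
import json

def to_exports(assume_role_json):
    """Build the shell exports from an `aws sts assume-role` response."""
    creds = json.loads(assume_role_json)["Credentials"]
    return [
        f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={creds['SessionToken']}",
    ]
```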

Issif avatar

We use SSM too, I wrote a small Go program that lets us select an instance quickly

Issif avatar

you can select your instance with arrows, or filter by typing something, and enter connects you to the instance directly

Issif avatar

IAM manages who can access

2019-05-30

Tim Malone avatar
Tim Malone

that looks nice @Issif! i don’t suppose… you’ve open sourced that?

Issif avatar

I want to, still discussing it

Abel Luck avatar
Abel Luck

so testing session manager has been going well. our team likes it.

Abel Luck avatar
Abel Luck

we don’t copy files, so that hasn’t been an issue

Abel Luck avatar
Abel Luck

however it turns out we did use ssh tunnels to access RDS postgres instances for running ad hoc analytics/queries

Abel Luck avatar
Abel Luck

thinking about how best to manage that now

Issif avatar

@Abel Luck same trouble for us, haven’t figured out yet how to manage tunnels to access RDS

Issif avatar

we do copy files, but we use S3 for that, even with presigned URL to PUT/GET

Issif avatar

the most important gain for my part is that I’m able to quickly connect to any instance when an on-call alert triggers, without waiting for the VPN to be up

Issif avatar

I’m thinking about how to integrate the websocket connection directly in my golang app, no extra dependency

Issif avatar

as several of you seem interested, I will try to release it tomorrow (need to find a better name)

Abel Luck avatar
Abel Luck

we use metabase for doing adhoc queries to share with other folks and it works quite well, gotta get devs/sysadmins to use it too now

Abel Luck avatar
Abel Luck
Metabase

The fastest, easiest way to share data and analytics inside your company. An open source Business Intelligence server you can install in 5 minutes that connects to MySQL, PostgreSQL, MongoDB and more! Anyone can use it to build charts, dashboards and nightly email reports.

Abel Luck avatar
Abel Luck

though it’s more for read-only querying.

Abel Luck avatar
Abel Luck

maybe an instance with pgadmin accessed via vpn would suffice too

Issif avatar

is this a kind of phpmyadmin or adminer?

Abel Luck avatar
Abel Luck

pgadmin is like phpmyadmin but for postgres (and way better). Metabase isn’t like any of those.. it really stands alone. great tool.

Issif avatar

a tool I wrote and would like to make FOSS is this:

Abel Luck avatar
Abel Luck

yea

Issif avatar

ok, will give it a try

Issif avatar

but a lot of our customers (we’re a managed services provider) use mysql benchmark or other apps on their laptops

Issif avatar

and more mysql than postgr (sic)

Abel Luck avatar
Abel Luck

yea, then some sort of tunnel will be needed

Abel Luck avatar
Abel Luck

one could always configure ssh such that it allows database connections but not shell access

Issif avatar

sure, but you still need a kind of bastion

Abel Luck avatar
Abel Luck

yea indeed

Issif avatar

lambda + spot instance could provide us a “BaaS”

Issif avatar

bastion as a Service

Issif avatar

you call an API Gateway with your credentials, it creates an EC2 with a good SG, only opened to your IP, and tadaaa

Issif avatar

if the lambda detects there’s no more traffic for a while, we terminate it

Abel Luck avatar
Abel Luck

yea nice

Abel Luck avatar
Abel Luck
Match User rds-only-user
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   AllowAgentForwarding no
   PermitOpen your-rds-hostname:5432
   ForceCommand echo 'No shell access.'
Abel Luck avatar
Abel Luck

that sshd config, i think, is all you need to allow port forwarding only

Issif avatar

thanks

Issif avatar

explanation: a fully golang app which lists all your aws resources with high granularity and stores all resources, with the links between them, in a graph DB

Issif avatar

you can query and get a lot of facts

Issif avatar

easy to find out which ec2 have access to rds

Issif avatar

which ec2 are open to the world on port 22

Issif avatar

ec2 without snapshots

Issif avatar

etc etc

Abel Luck avatar
Abel Luck

super cool!

Issif avatar

with new UI, it looks like that

Issif avatar

after a big clean up and some doc, i will share it for sure

Tim Malone avatar
Tim Malone

That looks great! Would love to take it for a spin

tomv avatar

can you not give metabase the RDS hostname? or is it in a different VPC?

Abel Luck avatar
Abel Luck

exactly, deploy metabase inside the vpc, route through a LB.

2019-05-31

Issif avatar

@Tim Malone @Daniel Lin https://github.com/claranet/sshm

claranet/sshm

Easy connect on EC2 instances thanks to AWS System Manager Agent. Just use your ~/.aws/profile to easily select the instance you want to connect on - claranet/sshm

Issif avatar

need a binary?

Issif avatar

i’m planning to use goreleaser soon

Meb avatar

doesn’t support aws-vault

Meb avatar

@atom it seems you are the maintainer at Oxalide

atom avatar

sorry don’t know what you mean - must be a different Thomas

Issif avatar

me I guess

Meb avatar

would be great if we could keep the credentials in a safe vault instead of plain clear text

Issif avatar

we don’t have secrets in our aws/config

Issif avatar

we use our adfs to connect on our main account and then switch role

Issif avatar

can you tell me more about what you want please
