#aws (2019-01)

aws Discussion related to Amazon Web Services (AWS) Archive: https://archive.sweetops.com/aws/

2019-01-31

Bogdan avatar
Bogdan
09:52:09 AM

how do you guys handle the ordered_placement_strategy in an ECS service module, from an input perspective (passing a list of maps or a map) when passing several strategies? I couldn’t find an example in the cloudposse ECS service modules
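For anyone finding this later: one way to model that input is a list of maps rendered with Terraform 0.12+ `dynamic` blocks (not available in the 0.11-era modules this thread refers to). This is a hedged sketch; variable and resource names are illustrative:

```hcl
variable "ordered_placement_strategy" {
  type = list(object({
    type  = string
    field = string
  }))
  default = [
    { type = "spread",  field = "attribute:ecs.availability-zone" },
    { type = "binpack", field = "memory" },
  ]
}

resource "aws_ecs_service" "default" {
  # ... name, cluster, task_definition, etc.

  # Emit one ordered_placement_strategy block per list element, in order
  dynamic "ordered_placement_strategy" {
    for_each = var.ordered_placement_strategy
    content {
      type  = ordered_placement_strategy.value.type
      field = ordered_placement_strategy.value.field
    }
  }
}
```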

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t think we handle that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, this is related to #terraform ?

Bogdan avatar
Bogdan

Thanks @Erik Osterman (Cloud Posse) - I’ll ask in #terraform

2019-01-29

Maciek Strömich avatar
Maciek Strömich
UpdateReplacePolicy Attribute - AWS CloudFormation

Specify how to handle resource replacement during stack update operations in AWS CloudFormation by using the UpdateReplacePolicy attribute.
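A minimal sketch of the attribute on a resource whose replacement would otherwise destroy data (resource name and type are illustrative; valid values are Delete, Retain, and Snapshot):

```yaml
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot       # covers stack deletion
    UpdateReplacePolicy: Snapshot  # covers replacement during stack updates
    Properties:
      # ...
```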

Maciek Strömich avatar
Maciek Strömich

CloudFormation now has an attribute protecting you from accidental data loss upon update replacement

Maciek Strömich avatar
Maciek Strömich

and also EKS became ISO and PCI compliant

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh great!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I didn’t see that

Maciek Strömich avatar
Maciek Strömich

yeah, last week

Maciek Strömich avatar
Maciek Strömich

and also last week AWS introduced pulling from private container registries with Secrets Manager integration
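That feature works by pointing a container definition at a Secrets Manager secret. A hedged task-definition fragment — ARN, names, and image are placeholders, and note that ECR itself still authenticates via IAM rather than this mechanism:

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "registry.example.com/team/app:latest",
      "repositoryCredentials": {
        "credentialsParameter": "arn:aws:secretsmanager:us-east-1:123456789012:secret:registry-creds"
      }
    }
  ]
}
```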

2019-01-28

kritonas.prod avatar
kritonas.prod

Hi all. An ELB/ALB with a public IP can serve instances/target groups with private IPs, correct?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, provided routes and security groups are setup correctly

kritonas.prod avatar
kritonas.prod

That won’t depend on an IGW or NAT gateway, right?

kritonas.prod avatar
kritonas.prod

And the instances won’t be able to access the public internet beyond serving through the ELB/ALB

kritonas.prod avatar
kritonas.prod

If one of the above isn’t present

kritonas.prod avatar
kritonas.prod

(Well, if the NATgw isn’t present, since they don’t have public IPs)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So if there is no NGW they won’t be able to egress directly to the public

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But the ALB can still send/receive requests to the instance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The public subnet will need an IGW

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

And the ALB will need to be on the public subnet
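A sketch of that layout in Terraform terms, assuming subnet and security group IDs are supplied elsewhere (all names here are illustrative):

```hcl
# Internet-facing ALB in public subnets (whose route table has an IGW route)
resource "aws_lb" "public" {
  name               = "public-alb"
  internal           = false
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
  security_groups    = [aws_security_group.alb.id]
}

# Targets register with private IPs; no NAT gateway is required for the
# ALB-to-instance path, only correct routes and security group rules
resource "aws_lb_target_group" "app" {
  name     = "app"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}
```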

kritonas.prod avatar
kritonas.prod

Ah of course, LB wouldn’t be able to get out without IGW.

kritonas.prod avatar
kritonas.prod

Thanks again Erik

Gabe avatar

a useful little plugin for AWS if you have many roles https://github.com/tilfin/aws-extend-switch-roles

tilfin/aws-extend-switch-roles

Extend your AWS IAM switching roles by Chrome extension or Firefox add-on - tilfin/aws-extend-switch-roles

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s slick

2019-01-27

imiltchman avatar
imiltchman

Is it possible to store a CodeDeploy package in a different region? The s3:// protocol syntax does not seem to provide the ability to specify a region.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@imiltchman

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

S3 bucket names are globally unique

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when you create a bucket, you specify a region where it will be hosted

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so a bucket always belongs to a region

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you specify a bucket hosted in diff region in CodeDeploy, it will use it (if all the permissions are in place)

imiltchman avatar
imiltchman

@Andriy Knysh (Cloud Posse) That’s what I thought, but I got the following error message:

The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
imiltchman avatar
imiltchman

Or is this a permissions error?

imiltchman avatar
imiltchman

The instance has the AmazonEC2RoleforAWSCodeDeploy policy which allows s3:GetObject on *

imiltchman avatar
imiltchman

The revision is using the following location: s3://{bucket_name}/{key_prefix}/{key}.zip

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe you need to use S3 regional endpoints https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

AWS Regions and Endpoints - Amazon Web Services

See the regions and endpoints available for AWS services.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

imiltchman avatar
imiltchman

Thanks for the suggestion. I couldn’t get it to work with the s3:// syntax that aws deploy push requires. I will look at it later, or just go with a bucket in the same region.
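The error quoted above usually means the request went to the wrong regional S3 endpoint; CodeDeploy also expects the revision bucket to be in the same region as the deployment, so the pragmatic fix is to copy the bundle into a same-region bucket first. A hedged CLI sketch — bucket names, regions, and application names are all placeholders:

```shell
# Copy the bundle into a bucket in the deployment's region
aws s3 cp s3://source-bucket/app.zip s3://dest-bucket-us-east-1/app.zip \
  --source-region eu-west-1 --region us-east-1

# Deploy from the same-region bucket
aws deploy create-deployment \
  --region us-east-1 \
  --application-name my-app \
  --deployment-group-name my-dg \
  --s3-location bucket=dest-bucket-us-east-1,key=app.zip,bundleType=zip
```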


2019-01-25

joshmyers avatar
joshmyers

Do you need it?

btai avatar

when does it make sense?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

when you look at the IO charts in cloudwatch and see that IO is a bottleneck (pegged)

joshmyers avatar
joshmyers

Note that if you blow through your EBS credits, no bueno

joshmyers avatar
joshmyers

Your instance basically becomes unusable and CPU spikes as things queue up waiting for IO
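For gp2 volumes, the remaining burst credits are exposed as the BurstBalance CloudWatch metric, so you can alert well before it hits zero. A sketch of the query (volume ID and times are placeholders):

```shell
aws cloudwatch get-metric-statistics \
  --namespace AWS/EBS \
  --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2019-01-25T00:00:00Z \
  --end-time 2019-01-25T12:00:00Z \
  --period 300 \
  --statistics Average
```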

joshmyers avatar
joshmyers

bad bad bad

johnbeans avatar
johnbeans

if all my services use fargate, is there any reason why i would not just place them all into a single ECS cluster? what are some reasons for having a separate cluster for each service?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I think it comes down to a few things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Mostly the security architecture

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You might want to run in multiple AWS accounts (recommended)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. Dev, Staging, and Prod.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You could also think of each ECS Fargate cluster as a namespace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

There’s no extra cost to run more than one fargate cluster, so it can be a way to logically isolate resources
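Since Fargate clusters carry no instances, the namespace idea can be as simple as one cluster resource per logical grouping (names illustrative):

```hcl
# One cluster per logical namespace; with Fargate there are no container
# instances to pay for, so extra clusters cost nothing by themselves
resource "aws_ecs_cluster" "frontend" {
  name = "prod-frontend"
}

resource "aws_ecs_cluster" "backend" {
  name = "prod-backend"
}
```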

johnbeans avatar
johnbeans

thanks erik! makes sense

Bogdan avatar
Bogdan

Finally!


2019-01-24

btai avatar

thoughts on ebs optimized?

2019-01-22

Maciek Strömich avatar
Maciek Strömich

@github140 if you’re using files to store logs then most probably you’re already mounting logs dirs from host to your containers. in that case start a container with awslogs and point it to the main logs directory

Nikola Velkovski avatar
Nikola Velkovski

Hey people. I’ve a question. How do you tackle Datadog with ECS in awsvpc mode. e.g. If I have all tasks running in awsvpc mode then I would need service discovery to be able to get to the ip:port of the DD daemon.

Nikola Velkovski avatar
Nikola Velkovski

If I use bridge mode I have the same problem, I need the ip of the instance

Nikola Velkovski avatar
Nikola Velkovski

if I stick the DD docker in the task , then I have repetition in the task definitions for every app.

Nikola Velkovski avatar
Nikola Velkovski

wat do ?

maarten avatar
maarten

run it as a task with replication mode DAEMON

Nikola Velkovski avatar
Nikola Velkovski

that’s all fine

Nikola Velkovski avatar
Nikola Velkovski

but afaik the app needs to push metrics to its endpoint

Nikola Velkovski avatar
Nikola Velkovski

which is an ip:port combo

maarten avatar
maarten

afaik the docker host always has a bridge ip, like 172.17.0.1

maarten avatar
maarten

using rep mode daemon, the port is the same everywhere

Nikola Velkovski avatar
Nikola Velkovski

so if the apps that run in awsvpc mode can see this I am good to go

maarten avatar
maarten

you’d need to test that, haven’t tried myself

Nikola Velkovski avatar
Nikola Velkovski

it doesn’t work

Nikola Velkovski avatar
Nikola Velkovski
11:52:45 AM
maarten avatar
maarten

and what about ping $(curl http://169.254.169.254/latest/meta-data/local-ipv4)

Nikola Velkovski avatar
Nikola Velkovski


curl: (7) Couldn’t connect to server

maarten avatar
maarten

ok, but you allow the ec2 metadata from your tasks ?

Nikola Velkovski avatar
Nikola Velkovski

well that’s what I am thinking now

Nikola Velkovski avatar
Nikola Velkovski

why this is not working

Nikola Velkovski avatar
Nikola Velkovski

give me a sec

Nikola Velkovski avatar
Nikola Velkovski

wait I am drunk

Nikola Velkovski avatar
Nikola Velkovski

it works

Nikola Velkovski avatar
Nikola Velkovski

yup it gives the ip of the instance

Nikola Velkovski avatar
Nikola Velkovski

allright

Nikola Velkovski avatar
Nikola Velkovski

that’s one way to do it thanks!
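The working approach from this thread, sketched as a container entrypoint fragment. DD_AGENT_HOST and DD_DOGSTATSD_PORT are the environment variables Datadog client libraries conventionally read — verify for your library version; 8125/udp is the DogStatsD default:

```shell
# Resolve the container instance's private IP from the EC2 metadata
# service, then point the app's DogStatsD client at the agent running
# as a DAEMON-scheduled task on the host
export DD_AGENT_HOST=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
export DD_DOGSTATSD_PORT=8125
exec "$@"
```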

github140 avatar
github140

@Maciek Strömich I don’t have access to the host, nor persistent storage. Do you know if awslogs could be set up inside the container?

Maciek Strömich avatar
Maciek Strömich

@github140 yeah it can, but that would kind of break the concept of single-purpose containers because you would need some process supervisor to run as PID 1

pecigonzalo avatar
pecigonzalo

Maybe you can use a sidecar container and a shared volume

pecigonzalo avatar
pecigonzalo

otherwise, why not use the docker log drivers?

pecigonzalo avatar
pecigonzalo
Configure logging drivers

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which…
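For example, the awslogs driver can be set daemon-wide in /etc/docker/daemon.json (region, group name, and options here are illustrative; the same options can be passed per container with docker run --log-driver=awslogs):

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-west-1",
    "awslogs-group": "docker-containers",
    "awslogs-create-group": "true"
  }
}
```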

joshmyers avatar
joshmyers

Get the logs into CloudWatchLogs and then you can pump into Datadog?

2019-01-21

Maciek Strömich avatar
Maciek Strömich

Hey folks, is anyone here using ses configuration-set to track open/click events? general configuration works fine but I’m trying to figure out a more fine grained solution where I can graph individual link clicks in cloudwatch.

Maciek Strömich avatar
Maciek Strömich

I’m trying with different ses:tags in email links but it seems that regardless of the configuration they are always categorised as a general click event under the configuration-set name in CloudWatch metrics
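For context, SES link tagging is done with a ses:tags attribute on anchors in the HTML body, roughly like this (tag keys and values are illustrative; whether they surface as separate CloudWatch dimensions depends on how the configuration set’s event destination is configured):

```html
<a href="https://example.com/pricing"
   ses:tags="campaign:january;link:pricing_cta">See pricing</a>
```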

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh interesting… haven’t ever looked into doing that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe some one in #terraform has seen something

Maciek Strömich avatar
Maciek Strömich

it’s easy with sns/firehose because you can either save the object in s3 or you can trigger lambda function which will then put it in whatever service there is

Maciek Strömich avatar
Maciek Strömich

but there’s also direct cloudwatch destination which seems perfect

Maciek Strömich avatar
Maciek Strömich

especially if you’re interested only in aggregations

Maciek Strömich avatar
Maciek Strömich

and graphs

Maciek Strömich avatar
Maciek Strömich

but it seems it doesn’t work as expected

Maciek Strömich avatar
Maciek Strömich

or maybe I expect too much from ses -> cloudwatch integration ;D

imiltchman avatar
imiltchman

Is there an MFA solution for Windows bastion hosts (for RDP)?

github140 avatar
github140

Anybody know a tool to forward logs from a k8s kind pod/container (minikube or such) into CloudWatch?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have this https://github.com/cloudposse/prometheus-to-cloudwatch, it forwards metrics from Prometheus to CloudWatch (you need to have a Prometheus endpoint to scrape)

cloudposse/prometheus-to-cloudwatch

Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch - cloudposse/prometheus-to-cloudwatch

2019-01-19

Daren avatar
Daren

@Erik Osterman (Cloud Posse) do you know if increasing IOPS has an impact on performance while it is being applied?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t…

2019-01-18

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Is DocumentDB Really PostgreSQL? attachment image

On Thursday I wrote about the new MongoDB compatible DocumentDB from AWS and its potential impact on MongoDB going forward.

2019-01-17

pecigonzalo avatar
pecigonzalo

freaking finally!

pecigonzalo avatar
pecigonzalo

not in our region

Nikola Velkovski avatar
Nikola Velkovski

which region is that ?

pecigonzalo avatar
pecigonzalo

Frankfurt

Nikola Velkovski avatar
Nikola Velkovski

oh, yeah they are always behind

btai avatar

does each rds instance type have a max num of connections it can possibly have?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not sure about hard limits

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s mostly constrained by instance type. The only chart I have seen is of default values by instance type.

btai avatar

im trying to find concrete documentation in aws about connection limits for instance types but i cant really find it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s because there’s no one answer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s based on the calculus of all settings

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
btai avatar

@Erik Osterman (Cloud Posse) thanks that makes sense, so RDS will let you set an absurd max connection limit of 10000 on a t2 db instance but you can expect a very degraded db in terms of performance?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, or at least expect to not be able to achieve it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@btai connect to MySQL server and execute show variables like 'max_connections'

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is the real number above which you could not go
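The reason there is no single documented number per instance type is that the default comes from a parameter-group formula: for RDS MySQL, the default parameter group sets max_connections to {DBInstanceClassMemory/12582880}. A small sketch of that arithmetic — note DBInstanceClassMemory is somewhat less than the instance's nominal RAM (some is reserved for the OS), so treat results as upper bounds:

```python
def default_max_connections(db_instance_class_memory: int) -> int:
    """RDS MySQL default parameter group: {DBInstanceClassMemory/12582880}."""
    return db_instance_class_memory // 12582880

# Roughly a db.t2.micro with 1 GiB of nominal RAM
print(default_max_connections(1 * 1024**3))  # → 85
```

This is why a small burstable instance caps out at under a hundred connections by default unless you override the parameter group.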

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
10:27:55 PM
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it’s not big

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the app needs to use a connection pool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

another option is to use a connection proxy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, avoid using persistent connections

Andrew Jeffree avatar
Andrew Jeffree

I’m super excited for a backup solution for EFS, assuming it’s sane.

Andrew Jeffree avatar
Andrew Jeffree

As I had to write one previously

Andrew Jeffree avatar
Andrew Jeffree

and it works fine, but it’s just annoying.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, we were using datapipelines to call s3 sync

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-efs-backup

Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup

Andrew Jeffree avatar
Andrew Jeffree

hmm I never thought of doing it that way

Andrew Jeffree avatar
Andrew Jeffree

that’s actually clever

Andrew Jeffree avatar
Andrew Jeffree

I like it

Andrew Jeffree avatar
Andrew Jeffree

https://github.com/awslabs/efs-backup - I forked this and made it not stupidly expensive

awslabs/efs-backup

EFS backup solution performs backup from source EFS to destination EFS. It utilizes fpsync utils (fpart + rsync) for efficient incremental backups on the file system. - awslabs/efs-backup

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is probably a lot faster


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aws s3 sync is pretty slow

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think it preserves symlinks, but it cannot do devices

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(not that there would be a device on efs)

Andrew Jeffree avatar
Andrew Jeffree

by backing up to an EBS volume that gets snapshotted etc

Andrew Jeffree avatar
Andrew Jeffree

as I didn’t like the idea of using hardlinks or using yet another EFS file system as a backup destination

Andrew Jeffree avatar
Andrew Jeffree

lots of lambda

Andrew Jeffree avatar
Andrew Jeffree

and terrible bash scripts

2019-01-16

imiltchman avatar
imiltchman
AWS Backup – Automate and Centrally Manage Your Backups | Amazon Web Services attachment image

AWS gives you the power to easily and dynamically create file systems, block storage volumes, relational databases, NoSQL databases, and other resources that store precious data. You can create them on a moment’s notice as the need arises, giving you access to as much storage as you need and opening the door to large-scale cloud […]

Nikola Velkovski avatar
Nikola Velkovski

Oh man this is awesome, it supports EFS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh wow, finally a real backup solution for EFS?

Nikola Velkovski avatar
Nikola Velkovski
07:56:25 AM

that’s what the docs say

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

2019-01-15

Matthew avatar
Matthew

Anyone work with EKS clusters and databases being in separate VPCs?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Matthew you have to do VPC peering and add the EKS workers security group as ingress to the database security group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how you do VPC peering b/w EKS VPC and backing services VPC https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the database module, you can allow SG from EKS workers, e.g. https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L22
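The security-group part of that, sketched in Terraform. The port, SG references, and module output name are assumptions for illustration; referencing a security group across VPCs requires the peering connection to exist and both VPCs to be in the same region:

```hcl
# Allow EKS worker nodes (in the peered VPC) to reach the database
resource "aws_security_group_rule" "eks_workers_to_db" {
  type                     = "ingress"
  from_port                = 5432  # Postgres; use 3306 for MySQL
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.database.id
  source_security_group_id = module.eks_workers.security_group_id
}
```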

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster


Matthew avatar
Matthew

You’re a godsend @Andriy Knysh (Cloud Posse), I appreciate you, going to explore now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matthew if you need read only, it’s possible to setup replicas in a separate VPC without peering

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but for rw, you’ll want the peering

2019-01-11

pecigonzalo avatar
pecigonzalo

Actually, it was improved greatly lately

pecigonzalo avatar
pecigonzalo

it used to be very limited, we are considering migrating to use that to manage the services+docker-compose files

pecigonzalo avatar
pecigonzalo

as tbh terraform is really shitty for deploying ECS tasks

antonbabenko avatar
antonbabenko

yeap, and many people like to use this - https://github.com/silinternational/ecs-deploy

silinternational/ecs-deploy

Simple shell script for initiating blue-green deployments on Amazon EC2 Container Service (ECS) - silinternational/ecs-deploy


2019-01-10

Maciek Strömich avatar
Maciek Strömich
11:06:00 AM

@Maciek Strömich has joined the channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws/amazon-ecs-cli

The Amazon ECS CLI enables users to run their applications on ECS/Fargate using the Docker Compose file format, quickly provision resources, push/pull images in ECR, and monitor running application…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Discovered this today

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

didn’t realize they had an official ecs-specific cli tool

2019-01-09

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s nice

2019-01-07

maarten avatar
maarten

Anyone tried to figure out what the SSM Sessions Manager client binary is doing ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is this the one that’s closed source?

maarten avatar
maarten

yeah I guess so ..

sarkis avatar
sarkis

whoaaaaaaa

sarkis avatar
sarkis
AWS Fargate Price Reduction – Up to 50% | Amazon Web Services attachment image

AWS Fargate is a compute engine that uses containers as its fundamental compute primitive. AWS Fargate runs your application containers for you on demand. You no longer need to provision a pool of instances or manage a Docker daemon or orchestration agent. Because the infrastructure that runs your containers is invisible, you don’t have to […]

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@LeoGmad


sarkis avatar
sarkis

now to move jenkins slaves to ecs fargate

LeoGmad avatar
LeoGmad
05:27:24 AM

@LeoGmad has joined the channel
