#aws (2019-01)
 Discussion related to Amazon Web Services (AWS)
Archive: https://archive.sweetops.com/aws/
2019-01-07
 
Anyone tried to figure out what the SSM Session Manager client binary is doing?
 
is this the one that’s closed source?
 
yeah I guess so ..
 
whoaaaaaaa
 

AWS Fargate is a compute engine that uses containers as its fundamental compute primitive. AWS Fargate runs your application containers for you on demand. You no longer need to provision a pool of instances or manage a Docker daemon or orchestration agent. Because the infrastructure that runs your containers is invisible, you don’t have to […]
 
@LeoGmad

 
now to move jenkins slaves to ecs fargate
 
@LeoGmad has joined the channel
2019-01-09
 
 
that’s nice
2019-01-10
 
@Maciek Strömich has joined the channel
 
The Amazon ECS CLI enables users to run their applications on ECS/Fargate using the Docker Compose file format, quickly provision resources, push/pull images in ECR, and monitor running application…
 
Discovered this today
 
didn’t realize they had an official ecs-specific cli tool
2019-01-11
 
Actually, it’s been greatly improved lately
 
it used to be very limited; we’re considering migrating to it to manage the services + docker-compose files
 
 
yeap, and many people like to use this - https://github.com/silinternational/ecs-deploy
Simple shell script for initiating blue-green deployments on Amazon EC2 Container Service (ECS) - silinternational/ecs-deploy
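 
For a concrete starting point, here’s a minimal sketch of the ECS CLI workflow mentioned above (cluster, region, and project names are placeholders, not from the thread):
```
# Point the ECS CLI at a cluster, then bring up a docker-compose file
# as an ECS service. All names here are hypothetical.
ecs-cli configure --cluster my-cluster --region us-east-1 --config-name my-cluster
ecs-cli compose --file docker-compose.yml --project-name my-app service up
```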
2019-01-15
 
Anyone work with EKS clusters and databases being in separate VPCs?
 
@Matthew you have to do VPC peering and add the EKS workers security group as ingress to the database security group
 
this is how you get the EKS workers SG https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/outputs.tf#L61
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
 
this is how you do VPC peering b/w EKS VPC and backing services VPC https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
 
in the database module, you can allow SG from EKS workers, e.g. https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L22
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
 
or from the CIDR block https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L29
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
 
You’re a godsend @Andriy Knysh (Cloud Posse), I appreciate you, going to explore now
 
@Matthew if you need read only, it’s possible to setup replicas in a separate VPC without peering
 
but for rw, you’ll want the peering
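 
A minimal sketch of the security-group step described above, using the workers SG output from the module (all IDs and the port are hypothetical):
```
# Allow the EKS workers' security group to reach Postgres (5432) on the
# database's security group. Referencing an SG across VPCs requires an
# active peering connection in the same region. IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db1111111111111a \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0eks222222222222b
```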
2019-01-16
 

AWS gives you the power to easily and dynamically create file systems, block storage volumes, relational databases, NoSQL databases, and other resources that store precious data. You can create them on a moment’s notice as the need arises, giving you access to as much storage as you need and opening the door to large-scale cloud […]
 
Oh man this is awesome, it supports EFS
 
oh wow, finally a real backup solution for EFS?
 
that’s what the docs say
 
2019-01-17
 
freaking finally!
 
not in our region
 
which region is that?
 
Frankfurt
 
oh, yeah they are always behind
 
does each rds instance type have a max number of connections it can have?
 
Not sure about hard limits
 
That’s mostly constrained by instance type. The only chart I have seen is of default values by instance type.
 
I’m trying to find concrete documentation from AWS about connection limits for instance types but I can’t really find it
 
That’s because there’s no one answer
 
It’s based on the calculus of all settings
 
Use this: http://www.mysqlcalculator.com/
 
@Erik Osterman (Cloud Posse) thanks that makes sense, so RDS will let you set an absurd max connection limit of 10000 on a t2 db instance but you can expect a very degraded db in terms of performance?
 
yes, or at least expect to not be able to achieve it
 
@btai connect to MySQL server and execute show variables like 'max_connections'
 
this is the hard limit you can’t go above
 
 
and it’s not big
 
the app needs to use a connection pool
 
another option is to use a connection proxy
 
also, avoid using persistent connections
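 
For reference, a quick way to see the ceiling Andriy mentions (endpoint and user are placeholders); on RDS MySQL the default parameter group derives it from instance memory, roughly DBInstanceClassMemory/12582880:
```
# Show the effective connection ceiling on the instance.
mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'max_connections';"
```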
 
I’m super excited for a backup solution for EFS, assuming it’s sane.
 
As I had to write one previously
 
and it works fine, but it’s just annoying.
 
ya, we were using datapipelines to call s3 sync
 
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
 
hmm I never thought of doing it that way
 
that’s actually clever
 
I like it
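 
The DataPipeline approach above boils down to something like this sketch (mount target and bucket names are hypothetical):
```
# Mount the EFS filesystem, then sync it to a dated S3 prefix.
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
aws s3 sync /mnt/efs "s3://my-efs-backups/$(date +%F)" --only-show-errors
```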
 
https://github.com/awslabs/efs-backup - I forked this and made it not stupidly expensive
EFS backup solution performs backup from source EFS to destination EFS. It utilizes fpsync utils (fpart + rsync) for efficient incremental backups on the file system. - awslabs/efs-backup
 
this is probably a lot faster
 
aws s3 sync is pretty slow
 
I think it preserves symlinks, but it cannot do devices
 
(not that there would be a device on efs)
 
by backing up to an EBS volume that gets snapshotted etc
 
as I didn’t like the idea of using hardlinks or using yet another EFS file system as a backup destination
 
lots of lambda
 
and terrible bash scripts
2019-01-18
 

On Thursday I wrote about the new MongoDB compatible DocumentDB from AWS and its potential impact on MongoDB going forward.
2019-01-19
 
@Erik Osterman (Cloud Posse) do you know if increasing IOPS has an impact on performance while it is being applied?
 
I don’t…
2019-01-21
 
Hey folks, is anyone here using ses configuration-set to track open/click events? general configuration works fine but I’m trying to figure out a more fine grained solution where I can graph individual link clicks in cloudwatch.
 
I’m trying with different ses:tags in email links but it seems that regardless of the configuration they always are categorised as a general click event in the configuration-set name in cloudwatch metrics
 
Oh interesting… haven’t ever looked into doing that.
 
Maybe some one in #terraform has seen something
 
it’s easy with sns/firehose because you can either save the object in s3 or trigger a lambda function, which will then put it in whatever service there is
 
but there’s also direct cloudwatch destination which seems perfect
 
especially if you’re interested only in aggregations
 
and graphs
 
but it seems it doesn’t work as expected
 
or maybe I expect too much from ses -> cloudwatch integration ;D
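 
For what it’s worth, the fine-grained version would look something like this sketch: a CloudWatch event destination that uses linkTag as the dimension source, so clicks get split per link tag rather than lumped together (configuration set and dimension names are hypothetical):
```
# Dimension click events by the ses:tags attached to each link.
aws ses create-configuration-set-event-destination \
  --configuration-set-name my-config-set \
  --event-destination '{
    "Name": "click-by-link-tag",
    "Enabled": true,
    "MatchingEventTypes": ["click"],
    "CloudWatchDestination": {
      "DimensionConfigurations": [{
        "DimensionName": "link_tag",
        "DimensionValueSource": "linkTag",
        "DefaultDimensionValue": "none"
      }]
    }
  }'
```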
 
 
Anybody know a tool to forward logs from a k8s pod/container (kind, minikube, or such) into CloudWatch?
 
we have this https://github.com/cloudposse/prometheus-to-cloudwatch; it forwards metrics from Prometheus to CloudWatch (you need a Prometheus endpoint to scrape)
Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch - cloudposse/prometheus-to-cloudwatch
2019-01-22
 
@github140 if you’re using files to store logs then most probably you’re already mounting log dirs from the host into your containers. In that case, start a container with awslogs and point it at the main logs directory
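 
One possible shape of that suggestion, sketched with Fluent Bit standing in for the awslogs agent (the tool swap, paths, and names are all assumptions; AWS credentials are assumed to come from an instance role or env vars):
```
# Tail the host's mounted log directory and ship it to CloudWatch Logs.
cat > fluent-bit.conf <<'EOF'
[INPUT]
    Name tail
    Path /var/log/myapp/*.log

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            eu-central-1
    log_group_name    myapp
    log_stream_prefix myapp-
    auto_create_group true
EOF

docker run -d --name log-shipper \
  -v /var/log/myapp:/var/log/myapp:ro \
  -v "$PWD/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro" \
  fluent/fluent-bit
```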
 
Hey people, I’ve a question: how do you tackle Datadog with ECS in awsvpc mode? E.g. if I have all tasks running in awsvpc mode, then I would need service discovery to be able to get to the ip:port of the DD daemon.
 
If I use bridge mode I have the same problem, I need the ip of the instance
 
if I stick the DD container in the task, then I have repetition in the task definitions for every app.
 
wat do ?
 
run it as a service with the DAEMON scheduling strategy
 
that’s all fine
 
but afaik the app needs to push metrics to its endpoint
 
which is an ip:port combo
 
afaik the docker host always has a bridge ip, like 172.17.0.1
 
using rep mode daemon, the port is the same everywhere
 
so if the apps that run in awsvpc mode can see this, I am good to go
 
you’d need to test that, haven’t tried myself
 
it doesn’t work
 
 
and what about ping $(curl http://169.254.169.254/latest/meta-data/local-ipv4)
 
curl: (7) Couldn’t connect to server
 
ok, but do you allow the ec2 metadata from your tasks?
 
well that’s what I am thinking now
 
why this is not working
 
give me a sec
 
wait I am drunk
 
it works
 
yup it gives the ip of the instance
 
alright
 
that’s one way to do it thanks!
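 
So the working pattern, as a sketch: resolve the instance IP from the metadata service and hand it to the app. DD_AGENT_HOST is the standard Datadog client variable and 8125 the default DogStatsD port; the test metric name is made up:
```
# From inside a task: find the container instance's private IP and
# send a test DogStatsD metric to the daemon-mode agent on it.
# (netcat flag behavior varies slightly between nc flavors.)
DD_AGENT_HOST=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
export DD_AGENT_HOST
echo "deploy.test:1|c" | nc -u -w1 "$DD_AGENT_HOST" 8125
```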
 
@Maciek Strömich I don’t have access to the host, nor persistent storage. Do you know if awslogs could be set up inside the container?
 
@github140 yeah it can, but that would kind of break the concept of single-purpose containers, because you would need some process supervisor running as PID 1
 
Maybe you can use a sidekick container and a shared volume
 
otherwise, why not use the docker log drivers?
 
Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which…
 
Get the logs into CloudWatchLogs and then you can pump into Datadog?
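 
A minimal sketch of the log-driver suggestion (region and group name are placeholders; the driver can create the group itself on reasonably recent Docker versions):
```
# Send a container's stdout/stderr straight to CloudWatch Logs.
docker run --rm \
  --log-driver awslogs \
  --log-opt awslogs-region=eu-central-1 \
  --log-opt awslogs-group=my-app \
  --log-opt awslogs-create-group=true \
  alpine echo "hello from the awslogs driver"
```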
2019-01-24
 
thoughts on ebs optimized?
2019-01-25
 
Do you need it?
 
when does it make sense?
 
when you look at the IO charts in cloudwatch and see that IO is a bottleneck (pegged)
 
Note that if you blow through your EBS credits, no bueno
 
Your instance basically becomes unusable and CPU spikes as things queue up waiting for IO
 
bad bad bad
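 
A quick way to watch for that failure mode, as a sketch (the volume ID is a placeholder; BurstBalance is the gp2 credit metric):
```
# Average remaining burst credits (percent) over the last hour (GNU date).
aws cloudwatch get-metric-statistics \
  --namespace AWS/EBS \
  --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```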
 
if all my services use fargate, is there any reason why i would not just place them all into a single ECS cluster? what are some reasons for having a separate cluster for each service?
 
So I think it comes down to a few things
 
Mostly the security architecture
 
You might want to run in multiple AWS accounts (recommended)
 
E.g. Dev, Staging, and Prod.
 
You could also think of each ECS Fargate cluster as a namespace
 
There’s no extra cost to run more than one fargate cluster, so it can be a way to logically isolate resources
 
thanks erik! makes sense
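 
Since the clusters themselves are free, the namespace idea is literally just this (cluster names hypothetical):
```
# One logical Fargate cluster per environment; no extra cost for the clusters.
aws ecs create-cluster --cluster-name platform-dev
aws ecs create-cluster --cluster-name platform-staging
aws ecs create-cluster --cluster-name platform-prod
```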
 
 
2019-01-27
 
Is it possible to store a CodeDeploy package in a different region? The s3:// protocol syntax does not seem to provide the ability to specify a region.
 
@Igor
 
S3 bucket names are globally unique
 
but when you create a bucket, you specify a region where it will be hosted
 
so a bucket always belongs to a region
 
if you specify a bucket hosted in a different region in CodeDeploy, it will use it (if all the permissions are in place)
 
@Andriy Knysh (Cloud Posse) That’s what I thought, but I got the following error message:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
 
Or is this a permissions error?
 
The instance has the AmazonEC2RoleforAWSCodeDeploy policy which allows s3:GetObject on *
 
The revision is using the following location: s3://{bucket_name}/{key_prefix}/{key}.zip
 
maybe you need to use S3 regional endpoints https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
See the regions and endpoints available for AWS services.
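 
If it’s the redirect error above, one thing that might help, as a sketch: run aws deploy push against the region the bucket actually lives in, so S3 doesn’t bounce you to its regional endpoint (application, bucket, and region are hypothetical):
```
# Push the revision using the bucket's own region.
aws deploy push \
  --application-name my-app \
  --s3-location s3://my-codedeploy-bucket/releases/my-app.zip \
  --source ./build \
  --region eu-central-1
```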
 
 
Thanks for the suggestion. I couldn’t get it to work with the s3:// syntax that aws deploy push requires. I will look at it later, or just go with a bucket in the same region.
2019-01-28
 
Hi all. An ELB/ALB with a public IP can serve instances/target groups with private IPs, correct?
 
Yea, provided routes and security groups are setup correctly
 
That won’t depend on an IGW or NATgw, right?
 
And the instances won’t be able to access the public internet beyond serving through the ELB/ALB
 
If one of the above isn’t present
 
(Well, if the NATgw isn’t present, since they don’t have public IPs)
 
So if there is no NGW they won’t be able to egress directly to the public
 
But the ALB can still send/receive requests to the instance
 
The public subnet will need an IGW
 
And the ALB will need to be on the public subnet
 
Ah of course, LB wouldn’t be able to get out without IGW.
 
Thanks again Erik
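 
A sketch of that topology with the CLI (all IDs are placeholders): internet-facing ALB on public subnets, targets reachable only by private IP:
```
# Internet-facing ALB in two public subnets (which must route to an IGW).
aws elbv2 create-load-balancer \
  --name public-alb \
  --scheme internet-facing \
  --subnets subnet-0aaa111122223333a subnet-0bbb444455556666b

# Register a private instance as a target; it needs no public IP or NAT
# to answer traffic the ALB forwards to it.
aws elbv2 register-targets \
  --target-group-arn "$TARGET_GROUP_ARN" \
  --targets Id=i-0123456789abcdef0
```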
 
a useful little plugin for AWS if you have many roles https://github.com/tilfin/aws-extend-switch-roles
Extend your AWS IAM switching roles by Chrome extension or Firefox add-on - tilfin/aws-extend-switch-roles
 
that’s slick
2019-01-29
 
if you have missed it. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatereplacepolicy.html
Specify how to handle resource replacement during stack update operations in AWS CloudFormation by using the UpdateReplacePolicy attribute.
 
CloudFormation now has an attribute protecting you from accidental data loss upon update replacement
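 
A minimal sketch of the attribute (the resource and all properties are hypothetical): if a stack update forces replacement, CloudFormation snapshots the old resource instead of deleting it.
```
# Write a demo template using UpdateReplacePolicy: Snapshot.
cat > update-replace-demo.yml <<'EOF'
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    UpdateReplacePolicy: Snapshot
    DeletionPolicy: Snapshot
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.small
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: change-me-please
EOF
```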
 
and also EKS became ISO and PCI compliant
 
Oh great!
 
I didn’t see that
 
yeah, last week
 
and also last week AWS introduced pulling images from private registries with Secrets Manager integration
2019-01-31
 
how do you guys handle the ordered_placement_strategy in an ECS service module, from an input perspective (passing a list of maps or a map) when passing several strategies? I couldn’t find an example in the cloudposse ecs service modules
 
I don’t think we handle that
 
Also, this is related to #terraform?
 
Thanks @Erik Osterman (Cloud Posse) - I’ll ask in #terraform
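 
For reference on the underlying ECS API (not the Terraform input shape): placement strategies are an ordered list and are applied in the order given. A CLI sketch with hypothetical names:
```
# Spread across AZs first, then binpack on memory within each AZ.
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task:1 \
  --desired-count 2 \
  --placement-strategy \
    type=spread,field=attribute:ecs.availability-zone \
    type=binpack,field=memory
```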
